
Work Samples
Here on my work samples page, you will find a selection of my work samples, arranged by the LeADERS category each falls under.

L - Phil355E Reflective Essay
Throughout this ethics course, I have taken away new views that have changed how I perceive the world. Moreover, the opinions I have developed will greatly benefit whichever path I choose to follow in the future. In this reflection, I would like to discuss my three most significant takeaways from the course: the danger information warfare poses to future elections, whistleblowing’s ability to be rooted in loyalty, and the need for America to adopt a set of privacy laws. These impactful lessons have shaped my view of ethics in cybersecurity, and I hope this reflection will clarify why. Information warfare is a relatively new concept, referring to an operation to gain an information advantage over an opponent. This definition is important to understand because it shows that Facebook’s targeted marketing used to influence the 2016 election was an act of information warfare. It took many people a long time to recognize this, since they simply considered it a form of advertising. Still, Facebook was using citizens’ private data to select the ads most likely to influence their votes, gaining an advantage for a preferred candidate. Setting aside the fact that a social media platform should not take such an influential and biased role in politics, this also reveals problems for the future of American politics, as Facebook was able to get away with having such an impact on a presidential election. People allowed it to happen because advertising is not often considered unethical; the issue is how specific information was used to target ads so effectively. This has changed the ethical implications of future American political races, and it will require all of us to try to distance ourselves from the influence of advertisements. It was a valuable lesson to learn that even everyday ads have ethical implications, which must be considered when making decisions. When I first learned about whistleblowing, I thought it was a means of undermining your organization. However, the story of Chelsea Manning taught me that sometimes whistleblowing is required to help a group that you are incredibly loyal to. Manning was the one who leaked the video of the disrespectful and dangerous actions of certain American soldiers in Iraq. Manning had gained access to this video but quickly learned that the government was trying to cover it up internally so it could be dealt with quietly. Manning was concerned that, without real punishment, the military would never send the message that American troops cannot act this way, and that the behavior would ruin how people view the United States. After learning that it was being covered up, Manning realized that she would not be able to influence the opinions of her higher-ups alone, so she chose to leak the video to the public in hopes of causing an outcry that would force the military to take strict action and ensure it would not happen again. While this ultimately did not influence the military’s disciplinary actions, it did prove to me that whistleblowing is not an act of malice meant to hurt your organization. While I disagree with Manning’s choice not to try handling the matter internally before taking it public, it taught me that we cannot consider whistleblowing ethically wrong just because it goes against the organization’s wishes. Instead, whistleblowing can still be morally correct, as people may do it out of a deep-seated obligation to the organization, showing not disrespect but ultimately extreme loyalty.
This taught me the value of understanding the perspective of the person behind any potentially unethical action, as they may be doing what they think is right, which forces us to take a different approach when trying to understand them. After learning about the EU’s General Data Protection Regulation, it has become evident that the United States needs to adopt a similar approach to privacy laws. A consistent theme I have noticed throughout all of the cases I have analyzed in this course is that privacy, specifically data privacy, is among the most critical and valued things in nearly every society. This is evident from our first case analysis, where people wanted their homes removed from Google Maps, to the last, where people were concerned that Facebook was using their private information to change their political views. The best solution I have learned about for addressing people’s fears over data privacy is a plan similar to the GDPR. This system allows people to regain control of their data and how it is used. It implemented a system that lets people track exactly how their data is used, affords that data a greater level of privacy to begin with, and even allows people to have it removed from the internet if it causes them concern. Throughout the rest of the cases in this course, I consistently wondered how having a system like this would have improved their outcomes. This has influenced me to use the GDPR as a blueprint for how data privacy should be approached, and it has even inspired me to look for ways to help implement a similar plan in our own country. This reflection centers on the most important point I learned during this ethics course: there is no one correct answer to any ethical question. For example, information warfare proved that new political tactics are not always ethically correct, whistleblowing proved that not all issues are black and white, and the EU’s GDPR proves that no single country has all the right answers. Learning this lesson has taught me that ethics is all about considering every aspect of a situation, no matter what occurs. This will help me make the most informed ethical decisions, and I am grateful to this course for teaching me that.

L - Phil355E User Data Case Analysis
The need to regulate data becomes a more pressing issue in the ever-evolving digital age. While many countries are currently working on plans to create better data privacy, the European Union’s GDPR is the most well-known and influential plan established by any governing body. In Palmer’s explanation of the General Data Protection Regulation, we learn how the plan works. It was established to give members of the EU greater data privacy and more influence over what happens with their data. The plan ensures that companies can legally harvest data only in a limited sense, and it also holds the companies that harvested the data accountable for any data breaches or misuse of the gathered data. The GDPR applies to any company that does business with the EU in some way, which spreads the plan’s influence across the entire world as a means of better protecting data security. While the plan cannot stop data leaks, it is set up to penalize the misuse of any form of leaked data significantly. In addition, it provides a safer internet experience for EU citizens and lets internet users take control of their data back into their own hands. The plan’s most significant benefits are the right to be informed, data erasure, rectification, and restriction of processing. In this case analysis, I will argue that the consequentialist tool shows us that the United States should follow Europe’s lead, because the protection provided to everyday citizens and the better business practices it requires should be commonplace in the digital age, considering their positive benefits to users. Zimmer’s article “But the data is already public” examines the extensive amount of information that can be pulled from scraping public profile data in the “Tastes, Ties, and Time” (T3) study. The more significant flaw came when the researchers attempted to remove identifying information. The argument was made that, due to the data collected, certain students in the study could not retain anonymity and, therefore, could be negatively impacted by the study. The way this data was released, clearly unable to protect anonymity, shows an immediate benefit of the GDPR, which aims to counter exactly this problem with its right to data erasure. While it is undeniable that the research team acted in good faith in trying to remove the identifying data, they could not accomplish this. At this point, the GDPR would allow students who felt that the dataset contained an unmistakable identifying feature pointing to them to call for the erasure of that data and have it removed from the study. The inherent issue in this is the dataset becoming incomplete when an individual’s data is removed. However, the partial dataset still provides valuable data to many researchers while refraining from harming individuals. This also would have allowed some data to remain intact and usable instead of the entire dataset being withdrawn from the public. Palmer’s discussion of the case points to the GDPR’s idea of data protection by design. While this was intended for new products and technologies, it clearly would have benefitted the designers of T3. The T3 team would have set out from the beginning to design the experiment in a way that protected specifically identifiable data, for example by placing any participant with a single clearly defining feature into a category labeled “other” or “classified.”
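To make the ideas of data protection by design and the right to erasure more concrete, here is a minimal, hypothetical Python sketch. The dataset, field names, and threshold are invented for illustration; this is not the T3 study’s actual data or code, nor any mechanism the GDPR itself prescribes.

```python
# Hypothetical sketch: suppressing uniquely identifying values and honoring
# an erasure request in a small survey dataset. All names are invented.
from collections import Counter

participants = [
    {"id": 1, "major": "Computer Science", "hometown": "Norfolk"},
    {"id": 2, "major": "Computer Science", "hometown": "Richmond"},
    {"id": 3, "major": "Albanian Studies", "hometown": "Norfolk"},  # uniquely identifying value
]

def generalize_rare_values(rows, field, min_count=2, placeholder="other"):
    """Replace values that appear fewer than min_count times with a placeholder,
    so no participant is identifiable by a single rare attribute."""
    counts = Counter(row[field] for row in rows)
    for row in rows:
        if counts[row[field]] < min_count:
            row[field] = placeholder
    return rows

def erase_participant(rows, participant_id):
    """Honor an erasure request by dropping one participant's record
    while leaving the rest of the dataset usable."""
    return [row for row in rows if row["id"] != participant_id]

participants = generalize_rare_values(participants, "major")
participants = erase_participant(participants, participant_id=2)
print(participants)
```

Generalizing rare values and erasing single records keeps the rest of the dataset usable, which mirrors the argument above that honoring an erasure request need not destroy the dataset’s value for researchers.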
While it can be argued that the dataset was incredibly beneficial to researchers, this claim needs to be viewed through the lens of consequentialism. When looking at the negative impacts of T3’s release, there is a disproportionate amount of harm done compared to its benefits. The dataset can reveal specific individuals’ political preferences and sexual orientations, which could result in the ostracization of those individuals from certain social circles or, in some cases, even mistreatment by others. While the dataset’s potential could be beneficial, the other aspect to consider is that this is only one instance of a test like this. For the dataset to become completely scientifically valid, this data-scraping experiment would need to be replicated with multiple other college classes in other states across the country. With the existing concern that a small group of people could be negatively affected at one college, that number rises dramatically once the dataset is gathered at enough colleges for the data to be scientifically accurate. Consequentialism argues that an action is good or bad based on whether or not its outcome is good or bad. While it is certainly up for debate, it is fair to reason that risking the safety of any group of individuals by not protecting their right to data privacy is a bad outcome, meaning the action itself is bad. If the United States were to adopt something like Europe’s privacy laws, then even if the experiment failed to protect the anonymity of its participants, they could still call for the data to be removed, giving them their privacy back while leaving the rest of the results intact. By removing the potentially damaging data from the survey, whether through the team’s actions or the participants’, the experiment’s outcome becomes positive, thus making the investigation beneficial, thanks to the introduction of privacy laws like the EU’s. Buchanan’s article “Considering the ethics of big data research: A case of Twitter and ISIS/ISIL” analyzes the impact of the IVCC model’s monitoring for ISIS supporters on Twitter and makes a positive case for data mining. At the same time, it calls into question the ethics of gaining this data in the first place and presents that as the most significant concern in this case. The IVCC model uses data collected by a secondary data mining company and puts it into the model to determine whether a profile matches that of a potential ISIS supporter. The goal is to understand which communities fall victim to extremist groups and their beliefs while also attempting to answer why. While it is true that this monitoring system is beneficial in helping to identify ISIS supporters before they can spread their beliefs, Buchanan also calls into question the data mining companies that provide the data that helps run the IVCC. As stated by Buchanan, these technologies “can be used to identify ISIS supporters as readily as they can identify WalMart shoppers or political dissidents.” This shows that the underlying issue lies not with the IVCC model but with the fact that the data-mined content is not controlled by those looking to profile ISIS supporters. Instead, data is collected on anyone online, and outside sources can use that information however they please. Buchanan’s case shows the impressive and beneficial uses of harvested data, proving that not all of it is used negatively.
Still, it argues that the technologies created are one small part of the outcome of data harvesting and should be viewed as one positive when weighing the benefits and negative impacts of data mining. This is where the tool of consequentialism comes into play to determine whether the overall benefits outweigh the problems. If the data gathered only impacted ISIS supporters, it could be argued that by supporting or even joining a terrorist group they forfeit their data privacy; however, this is not the case, as this information is gathered from all individuals. At the same time, in a situation where the United States implemented data privacy laws, those same laws could allow these supporters to hide their identities or continue to spread their dangerous beliefs without being identified. When considering this issue through the lens of consequentialism, it becomes more difficult to gauge whether data privacy laws in the United States would then be a less beneficial tool. Everything considered, this is an ethical debate, and in the USA there are specific individuals who give up certain liberties when their decisions threaten the country, such as terrorist groups and their supporters. When these privacy protections become void for people such as those identified by the IVCC model, consequentialism makes it clearer that privacy models similar to the GDPR would benefit the country extensively, as the negative impacts would be minor compared to the benefits. As mentioned earlier, data-mined information can target anyone. It can sway people in decisions such as where to shop and who to vote for, and in a free country it only makes logical sense to remove anything that breaches privacy and, furthermore, deludes people into thinking they are making their own decisions. Consequentialism shows that Europe’s introduction of data privacy protection laws was a favorable decision, as it improved the everyday life of most law-abiding citizens. The GDPR’s impact on the daily internet usage of EU citizens gives clear evidence of the need to implement similar laws in the United States. While it is true that the United States is a different country with a different culture and a population that acts uniquely, the GDPR has already worked its way into every American business that operates in the EU, and there have been few to no reports of adverse impacts as a result. The difference is that an American plan would extend these benefits to individuals here by giving them more rights and requiring US-only companies to conform to the same laws. At the same time, more extensive data privacy rights may make it more challenging to identify dangerous individuals online; however, we do not need to apply this plan in the same way the EU did. Law enforcement could still have the right to mine data on individuals given probable cause, and the only difference would be barring private corporations from having access to that same information for their own gain. All in all, the benefits of a set of data privacy laws show clear positive impacts for the future of a digital world, making their implementation a choice the country should seriously consider.

L - Phil355E Professional Ethics Case Analysis
In Bill Sourour’s article, “The code I’m still ashamed of,” we see a programmer still struggling with the moral implications of his code years later. At the time, a young Sourour worked for a marketing firm that made websites for pharmaceutical companies. For the particular client in question, he was asked to make a website with a quiz targeted at teenage girls, who were led to believe that the quiz would recommend different helpful medications based on their answers. In reality, the quiz would always direct the user to the client’s drug unless they were allergic to it or already taking it. Sourour knew this but was unfazed by it, as it was his job to fill client orders. Later he learned that the drug had directly contributed to the suicide of one of its users, as its side effects included severe depression and suicidal thoughts. Sourour warned his sister to stop taking the drug, but he did little else beyond that. He gave the company its website, went to the celebratory dinner, and never warned anyone else. Due to his guilt over this, Sourour would later resign. In this case analysis, I will argue that Ubuntuism shows us that the code was morally problematic because it was created to trick people into taking a drug with side effects of severe depression, and that Sourour should have acted differently because he could have helped stop the drug from being marketed to so many other people. The central focus of almost any code of ethics is maintaining the public’s safety and protecting individuals from harm. This is listed in each code of ethics as the first and foremost rule, and it appears as the first and second general moral imperatives of the ACM Code of Ethics. This puts a clear focus on the health of the public being placed above all else. The complication that arises is that the ACM code then gives a set of professional responsibilities, the first being to deliver the highest quality work on the product you have been hired to create. When these codes are applied to Bill Sourour’s case, we can see where the complications that caused him to move forward with the website may have arisen. On the one hand, the code of ethics tells him that he must do what is best for the public; without knowing the side effects, he may have believed this was a beneficial drug that could help many people. At the same time, he felt he needed to uphold his professional ethics and deliver a high-quality product precisely as the pharmaceutical company had requested. The problem with Sourour’s decision comes once he learns about the drug’s side effects, which had directly contributed to the suicide of one patient. At this point, it seems he has not considered the code of ethics at all. While it may seem as if Sourour is trying to uphold his professional ethics, he is more obligated to uphold his general moral imperatives, as those come first and foremost as a human being. While the codes are not ranked from most to least important, general ethics would imply that human life is more valuable than the quality of a website made for a pharmaceutical company. This is seen most prominently in Ubuntuism, which is rooted in the belief that “a person is a person through other persons.” Ubuntuism puts its focus on community and society over the individual. Sourour’s issue here is that he did not want to cause any trouble for the company, as he believed this could reflect negatively on him and cause him trouble in his career or personal life.
Through this, we can see that Bill focused on himself as an individual rather than trying to help others by slowing the company down in some way. Bill’s allowing his code to be used even after learning about the suicide of the young woman taking the prescription shows a moral failure to try to protect society. Arguments can, of course, be made that Bill was only doing his job, that the moral shortcomings fall on the pharmaceutical company, or that Bill could no longer access the website after completing it for the company. Still, Ubuntuism shows that there were ways around this that would have helped Bill make a moral decision. Sourour ended up leaving the company, which is the most disappointing part: he was concerned that he might lose his job by doing something like this, but even after deciding to leave, he still did nothing. Sourour could have warned the public about the drug’s side effects or the website’s deception. He was willing to do this in private, as he convinced his sister to get off the medication, but he then held onto that knowledge and used it only for personal benefit, another clear disregard for Ubuntu morals. Even when safety is not involved, these codes of ethics are intended to protect society. Still, Bill had no problem creating a website that would intentionally deceive young women into taking a drug without presenting other real options. With all of this considered, the right steps for Bill would have been to refuse to create the website and then to try to warn people about whatever website was eventually published by checking its code or testing its quiz. Even if Bill did not do this and only became concerned once he learned about the suicide, the next ethical step would have been to spread information about the deceptive quiz and the drug’s negative side effects to as many people as he could, not only his sister. Ubuntu philosophy shows that humans are not acting ethically if they are not helping their fellow humans. While Bill may have helped his sister, his actions only directly helped his own life, showing his ethical shortcomings. Mary Beth Armstrong’s article “Confidentiality: A Comparison across the Professions of Medicine, Engineering and Accounting” highlights the importance of upholding professional confidentiality as a professional ethics requirement. The argument is that breaking professional confidentiality is a slippery slope: clients need to be able to trust you in fields such as medicine or law, and if you ever feel you have to break that confidentiality, even when you believe you are making the right choice, you may lose that trust in the future, which may cause your community not to want to work with you, potentially costing you your job. Some people may use this logic to justify Sourour’s choice to stay silent about the side effects of the drug and why he completed and delivered the website to the pharmaceutical company. However, when considering the concept of prima facie obligations, that argument quickly falls apart. The four requirements for infringing upon a prima facie obligation are that a moral objective justifies breaking confidentiality, that the infringement is necessary because there are no morally preferable alternatives, that it constitutes the least infringement possible, and that the actor seeks to minimize the effects of the infringement. Considering these four requirements, it can be concluded that the best course of action for Bill Sourour would have been to attempt to tell the pharmaceutical company that its actions were wrong.
If that did not work, he would have needed to go into the website’s code and fix the quiz so that it could recommend a variety of prescriptions instead of just the one. The first option would follow the requirements perfectly: he felt morally obligated to talk to the company, and it would have resulted in the least backlash, as no one outside the company would need to know. The issue is that, considering the pharmaceutical company asked for that quiz, it likely already knew it was making a morally corrupt decision in the first place and would not have considered what Sourour had to say. While the second option would require much more infringement, as he would be breaking the contract between his employer and the pharmaceutical company, Ubuntu moral philosophy would argue that Sourour was within his rights to do this because the pharmaceutical company had already made a morally wrong decision against society. If the pharmaceutical company’s decision was already going against Ubuntuism by deceiving and bringing harm to people, then it should fall outside the boundaries of professional confidentiality, meaning that whatever choice Bill made to look out for his fellow humans would have been morally correct, and Ubuntu ethics would view it as the right one. While this could have resulted in Bill losing his job, he still could have helped countless people. Even now, as he struggles with the moral ramifications of his actions, he refuses to disclose the name of the drug, which could still be on the market and causing people to struggle with severe depression. Sourour’s actions show us that it is not enough to struggle with the moral implications of your actions; you need to do what you can to try to help people, or your inaction is just as morally wrong. Sourour was morally wrong in writing the code for the pharmaceutical quiz because it was through his actions and assistance that the company could put a dangerous, life-threatening drug into the hands of young women who likely would not have taken the drug otherwise. He refused to do anything because he feared the legal ramifications of breaking professional confidentiality. However, his actions still weighed on him to the point where he chose to resign from his company and thinks about his choices every day. This alone is proof that Sourour should have tried to talk to the pharmaceutical company, and if that did not work, he should have tried to inform the public about the company’s deception and the danger of the drug. While it is fair to say some would consider this ethically wrong because he had a contract, the most important part of ethics is that it is subjective, meaning that we as a society decide what is most valuable. I agree with Ubuntuism in the belief that humans are the most valuable thing in the world, meaning that it was Sourour’s moral duty to protect human lives over the company’s contract. Sourour’s experiences are an excellent cautionary tale of the ethical dilemmas we may all face one day in the workplace.

W - 425W Policy Analysis of the GDPR
The topic of how personal data is used has been a serious issue since before the widespread adoption of the internet. As far back as the United Kingdom’s Data Protection Act of 1984, governments have been passing data privacy laws, the most notable being the European Union’s 1995 Data Protection Directive, which put restrictions on how EU members’ data could be used in the hope of better protecting privacy. However, as the internet continued to grow and evolve and personal data became its lifeblood, it became clear that these protections were not enough to cover how personal data was most commonly being used. This is why the EU developed the General Data Protection Regulation. Developed in 2016 and implemented in 2018, the GDPR sought to expand the coverage of the Data Protection Directive to align more accurately with the current uses of personal data in the digital age. The GDPR is a set of “constitutional commitments, ones that are deep and occupy a central place in the self-conception of a new, information age political body” (Hoofnagle et al.). It was developed not only to better protect EU citizens’ data, but also to give them more power over, and information about, what happens to their personal data, ensuring that it is appropriately used or sufficiently protected if a citizen feels it is not being treated with care. As for what is considered personal data and whom the GDPR applies to, the EU defines both broadly as a way to ensure wider coverage. Personal data includes any personally identifiable information linked to a person, along with special categories that are generally prohibited from processing, such as political leanings, race, sexual orientation, and religion. Notably, it is explicitly stated that the GDPR applies to any company processing personal data if it is based in the EU, and to any company that processes the personal data of individuals in the EU (The European Union). The use of customers’ personal data must be clearly laid out and consented to for companies to continue using it under the GDPR. Even after this initial consent, the data owner can revoke that consent, access the data at any time, ensure it is accurate, and stop it from being processed. This expansion of data owner rights also extends to legal action if the data is not properly handled. As the GDPR is an EU law, some may think it exclusively affects the EU. This is only partially true: while the GDPR only extends its benefits to citizens of the EU, anyone in the world dealing with the personal data of an EU resident has to adhere to these regulations. Because of this, the GDPR does not so much fit into international cybersecurity policy as dictate much of how it operates. A significant change it brings is more transparent data processing requirements, as anyone protected under the GDPR has “the right to access information held on them; and may object to the processing of their data where there are legitimate grounds for doing so” (Tankard). This is an adjustment many companies outside the EU had to make, as it was not common practice. Tankard also notes that, “perceived as controversial by some, … the right to be forgotten has been solidified.” This means that, if an individual objects, companies can no longer keep all of that individual’s data; they must remove the person from the customer database entirely, disabling things such as targeted ads.
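As a concrete illustration of the data-subject rights described above, here is a minimal, hypothetical Python sketch of what honoring access, rectification, consent withdrawal, and erasure requests might look like inside a customer database. The record layout and function names are invented for illustration; the GDPR mandates the rights themselves, not any particular implementation.

```python
# Hypothetical sketch of data-subject rights: access, rectification,
# withdrawal of consent, and erasure. All names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class CustomerRecord:
    email: str
    consented_uses: set = field(default_factory=set)   # e.g. {"newsletter", "ads"}
    profile: dict = field(default_factory=dict)

    def access(self) -> dict:
        """Right of access: return everything held about the person."""
        return {"email": self.email,
                "uses": sorted(self.consented_uses),
                "profile": dict(self.profile)}

    def rectify(self, key: str, value) -> None:
        """Right to rectification: correct inaccurate data on request."""
        self.profile[key] = value

    def withdraw_consent(self, use: str) -> None:
        """Consent can be withdrawn at any time; that processing must stop."""
        self.consented_uses.discard(use)

def erase(database: dict, email: str) -> None:
    """Right to erasure ('right to be forgotten'): remove the record entirely."""
    database.pop(email, None)

db = {"a@example.com": CustomerRecord("a@example.com", {"ads"}, {"country": "FR"})}
db["a@example.com"].withdraw_consent("ads")
erase(db, "a@example.com")
```

In a real system, such requests would typically also have to reach backups and any third-party processors, which is part of why compliance is described below as requiring complex technological solutions.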
Compliance does not come quickly to all businesses: even if they do not mind this loss of customer data, complying with the GDPR “requires the implementation of complex technological solutions, as well as new organizational duties and extensive changes in the organization’s business model” (Almeida Teixeira et al.). The law has clearly been considered troublesome, with some scholars stating that among the barriers to GDPR implementation is “the regulation itself as it is complex and extensive and involves subjectivity” (Almeida Teixeira et al.), and that “the main disadvantage of the GDPR is its length and complexity” (Hoofnagle et al.), while others remain optimistic about its benefits and ease of compliance, stating, “With the right precautions in place, organisations should have little to fear” (Tankard). Nearly a decade later, the GDPR is widely hailed as a step in the right direction. However, these scholars’ concerns still ring true, as the regulation is still considered troublesome for some businesses. Nonetheless, the benefits it has provided to the people of the EU are undeniable.

References

Almeida Teixeira, G., Mira da Silva, M., & Pereira, R. (2019, June 3). The critical success factors of GDPR implementation: A systematic literature review. Digital Policy, Regulation and Governance, 21(4), 402-418. https://doi.org/10.1108/DPRG-01-2019-0007

The European Union. (2024, October 14). Data protection under GDPR. Your Europe. Retrieved January 31, 2025, from https://europa.eu/youreurope/business/dealing-with-customers/data-protection/data-protection-gdpr/index_en.htm

Hoofnagle, C. J., van der Sloot, B., & Borgesius, F. Z. (2019, February 10). The European Union General Data Protection Regulation: What it is and what it means. Information & Communications Technology Law, 28(1), 65-98. https://doi.org/10.1080/13600834.2019.1573501

Tankard, C. (2016, June). What the GDPR means for business. Network Security, 2016(6), 5-8. https://doi.org/10.1016/S1353-4858(16)30056-3

W - 425W The Political Implications of the EU AI Act
It seems that for every positive use of AI, a negative one pops up right alongside it. It is hard to consider AI the tool that moves us into the future, as it is hailed to be, if it continues to create so many problems. That is where the EU’s Artificial Intelligence Act comes into play, setting the first set of regulations in place for the improvement and use of AI moving forward. Naturally, trying to regulate one of the most significant tools in the modern age comes with equally significant political implications. Of these, one commonly noted in acts such as the GDPR and the Cyber Resilience Act is the EU’s goal of strengthening the Union and its products without a reliance on outside technologies. This implies the bloc is trying to strengthen its standing in the world without relying on, and thus supporting, superpowers such as China and the US, at least in the AI industry. Furthermore, because so many companies do business with citizens of the EU or publicly release their AI, this act has political implications for a large portion of the world. Some of these are positive, such as creating a precedent of proper AI protocols that protect users and the original copyright holders of the information the AI learns from, while others are negative, like the idea that countries with enough influence can impose their regulations on AI worldwide because AI companies need the citizens of those countries as users. Clearly, politicians and policymakers have considered this idea as they have responded to the EU’s implementation of the AI Act. Recently, Paris hosted the Artificial Intelligence Action Summit, where world and industry leaders met to discuss the future of AI across the globe. At this event, Vice President JD Vance spoke to the EU about its AI Act, pushing for deregulation instead of the regulatory actions put in place. He stated that the AI Act could “kill a transformative industry just as it’s taking off” (Rinaldi). This stands in stark contrast to the opinion of European Parliament Member Eva Maydell, who welcomed the bill when it was first approved to be voted upon, stating the AI Act “encourages social trust in AI while still giving companies the space to move and create” (Al Jazeera). While Vance’s and Maydell’s beliefs clash, another Parliament Member, Brando Benifei of Italy, highlights one of the more concerning political implications of the AI Act. After the passing of the Act, Benifei said, “We ensured that human beings and European values are at the very center of AI’s development” (European Parliament). It is easy to see how Maydell and Benifei have come to the conclusion that the act is better for the individual rights of the people: the act was aimed at protecting individuals’ information, altering how AI is developed and used, and giving individuals and smaller companies a legal framework and more opportunities to create AI. At the same time, Vance’s concerns about constraining the creation of AI cannot be denied, as popular systems like ChatGPT would not have advanced as far as they have if they had been started under complete compliance with the EU AI Act. While the AI Act remains very new and its consequences are still being explored, the most notable so far is regulation enforcement, mostly leading to fines on AI companies rather than changes to their development processes.
While it is too soon to tell, as its compliance has yet to be tested, the new AI giant DeepSeek will likely fail to comply with the AI Act, which could lead to its ban in the EU. As countries such as the US argue against the AI Act, the most significant long-term consequence waiting to be tested is whether AI companies worldwide will begin to comply strictly with the EU AI Act or will simply restrict their platforms so that members of the EU cannot use them. If the words of the U.S. Vice President are anything to go by, I expect it will be the latter.

Works Cited

Al Jazeera. “EU politicians back new rules on AI ahead of landmark vote.” Al Jazeera, 13 February 2024, https://www.aljazeera.com/news/2024/2/13/eu-politicians-back-new-rules-on-ai-ahead-of-landmark-vote. Accessed 13 February 2025.

European Parliament. “Artificial Intelligence Act: MEPs adopt landmark law.” European Parliament, 13 March 2024, https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law. Accessed 13 February 2025.

Rinaldi, Olivia. “Vance warns EU against AI overregulation at summit in Paris.” CBS News, CBS, 11 February 2025, https://www.cbsnews.com/news/vance-warns-eu-ai-overregulation-paris/. Accessed 13 February 2025.

W - 425W The Ethical Implications of ZTA
With cybersecurity standards constantly evolving, the field consistently finds new ways to strengthen security, even in some of its most challenging areas, such as the human element. One of the most popular new methods for handling this issue is the implementation of Zero Trust Architecture (ZTA). Zero Trust Architecture is a way of designing a network in which users are given the least possible privileges. This is achieved through traditional cybersecurity practices such as multifactor authentication and heavy data encryption, as well as less common practices like temporary permission grants, active monitoring of all users on the network, and explicitly admitting every individual user and device onto the network. This architecture aims to give every user as little access as possible while still allowing them to do their jobs. Of course, with ZTA giving employees incredibly restricted access and monitoring everything they do, potentially negative ethical implications must be considered. The reason ZTA is so widely recognized, even with all of its ethical implications, is how great the benefits are in comparison to the costs. With its ability to be applied across multiple types of networks, to vastly improve the security of remote and hybrid workers, and to dramatically cut back on the number of external threats to a network, ZTA shows clear benefits and leaves no doubt as to why even Google has adopted this architecture (Kang et al.). Even so, the costs are important to consider when thinking of implementing this plan, as they directly impact employees and how they interact within the organization. While ZTA has many benefits, it can also be very restrictive in the rights it affords the people using a zero-trust system. This can include a user’s right to privacy. To account for this, some systems implement AI monitoring so users do not feel that their privacy is being breached, with issues escalated to human monitors only if a security flaw is detected. This can still make users feel uneasy, necessitating transparency about how these monitoring systems act (Kang et al.) and what information can be disclosed to employers as a result. ZTA can also slow users’ work by restricting their access to information while they are still expected to meet the same deadlines. In an almost dystopian sense, the user also loses the right to make mistakes. In other architectures, a cyber department actively works to train its team on taking more secure actions, understanding when errors occur, and, more commonly, using the issue as a learning experience to improve the system. By contrast, with constant monitoring and minimal privileges, ZTA is more likely to shift blame to the individual employee, creating a workplace where employees feel both untrusted and scared to mess up. One of zero trust architecture’s most significant issues is its failure to address individuals’ rights. The architecture itself does not include them, leaving the decision of how individual rights are addressed to those implementing ZTA. While users arguably give up certain rights when agreeing to use a ZTA network, there are certain rights they hold intrinsically, such as the right to be treated equally. Unfortunately, this right goes unaddressed by AI algorithms, which can create unintended biases if the architecture is designed to include behavioral analysis.
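To make these mechanics concrete, here is a minimal, hypothetical Python sketch of the kind of fully automated, per-request access decision a zero-trust system makes. The signal names, risk threshold, and grant lifetime are invented for illustration and are not drawn from any particular ZTA product.

```python
# Hypothetical sketch of a zero-trust access decision: every request is
# re-evaluated from a default of deny, grants are temporary, and a behavioral
# risk score can refuse access with no human in the loop.
from datetime import datetime, timedelta, timezone

def decide_access(user, device, resource, now=None):
    """Return (allowed, expiry). Nothing is trusted by default."""
    now = now or datetime.now(timezone.utc)

    if not user.get("mfa_passed") or not device.get("enrolled"):
        return False, None                      # fail closed: unverified user or device
    if resource not in user.get("permitted_resources", set()):
        return False, None                      # least privilege: only explicitly granted resources
    if user.get("risk_score", 1.0) > 0.7:       # behavioral-analytics signal (invented threshold)
        return False, None                      # automated, opaque refusal

    return True, now + timedelta(minutes=15)    # temporary grant, then re-check

user = {"mfa_passed": True, "permitted_resources": {"payroll-db"}, "risk_score": 0.2}
device = {"enrolled": True}
print(decide_access(user, device, "payroll-db"))
```

Note that the refusal driven by the risk score is made entirely by the system, with no human reviewing the outcome, which is exactly where the bias concern raised above enters.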
This absence of human oversight is why researchers consider the “absence of human intervention in managing permissions” to be a serious threat in ZTA (Johnson), as the system has no perfect solution in which it does not accidentally become discriminatory or further infringe on the little privacy that users in the system are left with. Zero trust architecture, while useful, creates a potentially dangerous workplace culture in which employees feel that their individual rights, and therefore they themselves, are not respected, brewing workplace contempt and causing them to give less respect to their employers.

Works Cited

Hunsche, Martijn. “Ethics and Zero Trust: Striking a balance between security and privacy.” Highberg, Highberg, 2024, https://highberg.com/insights/ethics-and-zero-trust-striking-a-balance-between-security-and-privacy. Accessed 19 March 2025.

Johnson, Grace F. “The Unintended Consequences of Zero Trust on Enterprise Culture.” ISACA Journal, vol. 6, 2020. ISACA, https://www.isaca.org/resources/isaca-journal/issues/2020/volume-6/the-unintended-consequences-of-zero-trust-on-enterprise-culture? Accessed 20 March 2025.

Kang, Hongzhaoning, et al. “Theory and Application of Zero Trust Security: A Brief Survey.” Entropy (Basel), vol. 25, no. 12, 2023, p. 1595. PubMed Central, https://pmc.ncbi.nlm.nih.gov/articles/PMC10742574/.


