
Fall 2024 Articles List

The impacts of modernized food production are distributed unevenly, falling disproportionately on certain communities and regions. Marginalized communities, often those with limited access to fresh, nutritious foods, experience a higher prevalence of food allergies and their associated challenges [7]. These communities frequently face systemic barriers, including food deserts, inadequate access to healthcare, and lower awareness of food allergy management. By looking at the intersection of industrial food practices and socio-economic inequalities, this paper explores how the current food system fuels the rise in allergies and perpetuates health disparities.
	
Modern food production techniques, although efficient at meeting the demands of a growing global population, have unintentionally contributed to the rising prevalence of food allergies [2]. The industrialization of agriculture, in particular, has profoundly affected the composition, diversity, and safety of the foods we consume every day. Three major aspects of modern food production—industrial farming and monoculture, the widespread use of food additives and preservatives, and pesticide exposure—all play critical roles in increasing the risk of food allergies.
	
Monoculture is the large-scale cultivation of a single crop in a particular area, typically crops like wheat, corn, or soy, which dominate modern diets [4]. This lack of dietary diversity deprives individuals of exposure to a broader range of nutrients and bioactive compounds that can support healthy immune function. Studies suggest that early exposure to diverse foods helps train the immune system to tolerate a wide range of proteins, thereby reducing the likelihood of allergic reactions [4]. By contrast, diets dominated by processed foods derived from monoculture crops may exacerbate food allergies by limiting this early exposure. Additionally, monocultures are more likely to be affected by pests and diseases, leading to heavy reliance on chemical inputs like pesticides and fertilizers, further impacting human health [4].

Hidden Triggers: How Modern Food Production Fuels the Rise in Food Allergies and Inequities

By Alexander J. Adams

Food deserts can arise in rural areas without supermarkets for miles and in urban neighborhoods lacking access to groceries or healthy food options, even in the presence of gas stations, fast food, and other processed food options (Tulane). Many of these areas are also highly impoverished, meaning that price markups and transport costs make it unrealistic for individuals to afford food. The limited access to healthy food options in these areas compounds existing challenges, making it nearly impossible to find nutritious food locally and often too difficult to travel elsewhere for healthy alternatives. For people in food deserts who also have dietary restrictions, the situation is even more constrained: nutritious food that fits their specific needs is even harder to acquire than it is outside of food deserts.

Numerous studies have analyzed this phenomenon's devastating effects on public health. From weight gain to metabolic disease (e.g., diabetes and obesity) and decreased life expectancies, low-income regions face many medical issues linked to poor diets. These areas also lack access to medical care that tackles these issues, reinforcing a perpetual cycle of poor health in these communities. In an era when produce and fresh food are more widely available than at any point in history, the inequities that create these food and health deserts stand out more than ever. Why have these regions been left behind? What drives the manifestation of these inequalities, both in rural America and in the heart of urban areas? The contrast these deserts create raises ethical questions that challenge the current food system.

Ethical Questions

Several ethical questions surround the existence of food deserts and food insecurity, challenging the roots of these phenomena. In particular, racist practices such as redlining that have driven other zip code-based inequalities have also played a significant role in the creation and persistence of food deserts. Medicine, too, struggles to address the realities of living in a food desert and how best to help patients facing these issues.

The very existence of food deserts and food insecurity poses significant ethical concerns. The United Nations recognizes food as a fundamental human right due to its necessity for survival and well-being (Fanzo). Access to nutritious food, more specifically, emerges as a focal dimension of this defined right. While these communities often have gas stations and fast food restaurants with plenty of "food," they lack access to healthier options. Many researchers have argued that denying access to the food necessary for a healthy lifestyle violates the human right to food (Murrell).

The broader systems that produce and provide food, from farms to commercial supermarkets and everything in between, are known as food systems (Fanzo). These food systems, while efficient at providing food in theory, are a primary source of food disparity and of the creation of food deserts: they concentrate healthy food within populated, high-income areas while straining the environment as major greenhouse gas producers and the largest users of water. These systems tend only to exacerbate issues of food scarcity, expanding food deserts through profit-driven decisions.

Another central ethical question surrounding food deserts is the role of discriminatory practices in their creation. Food deserts are often subject to a practice known as supermarket redlining. Because average incomes in food deserts are low, supermarkets and grocery chains cite an inability to turn a profit and decline to operate there, a practice whose name is borrowed from the term for racial discrimination by banks in financing home purchases by African American families. Local leaders in these communities have pointed to the original practice, which often separated families by race and income, as having enabled the creation of "high-income" and "low-income" zones. This fundamental distinction has allowed "supermarket redlining" to exist in the name of profit, leading to cases like that of West Oakland, CA, which had no full-service grocery store until 2019 (Stephens). As a result, food insecurity is rampant in these areas, producing adverse health impacts on residents and disproportionately affecting residents of certain income levels and ethnicities.

Legal researchers have even argued that food deserts are an "antitrust problem" (Leslie). An article in the California Law Review establishes that food deserts are deserts in both the noun and the verb sense: deserts with no food, deserted by companies. The authors also note that supermarkets have employed restrictive land covenants to prevent other supermarkets from opening in the same location, driving prices up and restricting supply in food deserts, and that antitrust rulings have often incorrectly assumed people have access to cars for transport (something that isn't the case for many low-income families within these food deserts). The authors conclude that antitrust law should be used to target these covenants and promote the return of supermarkets to food deserts.

Food Deserts: Ethical and Medical Concerns

By Ahilan Eraniyan

While this system has been established in Europe for the past seven years, it has yet to reach the United States, where the FDA's standard nutrition labeling falls short of effectively informing buyers about healthy food options. In a country where obesity affects almost half of Americans and continues to rise, the U.S. could, in theory, benefit from such a nutritional label system. The dietary recommendations that Nutri-Score is founded on are quite similar to the food guidelines provided by the U.S. Department of Agriculture [4]. Surveys have even highlighted citizens' desire for such a system: "64% of buyers are willing to switch to brands that offer clearer nutritional insights" [1]. This demand for food transparency is noticeable and growing, becoming a force that can potentially drive significant changes in America's food industry.
 
The big question is why the U.S. hasn't made this move yet, or more pointedly, whether the U.S. should implement this system at all. Examining both the advantages and the flaws of the Nutri-Score system offers insight into these questions, and even into how advocates might push for an improved version of Nutri-Score in the U.S.

The Nutri-Score system can be an effective tool to promote healthier food choices among customers by simplifying nutritional information in an appealing, efficient manner. According to a 2019 study, Nutri-Score resulted in a 21% increase in spending on better-rated, healthier products [4]. Certain food groups have benefited from this system more than others. A study of three brands of crackers, each with a different score, showed that consumers bought the healthier, B-rated cracker significantly more often than the lower-rated crackers [2]. Studies like these show that healthier food choices become far easier with a quick glance at a front-of-package label. Front-of-package labels in general encourage healthier food choices, but Nutri-Score in particular outshines other labeling systems. A study of 2,530 British consumers compared five different front-of-package conditions: Nutri-Score, Multiple Traffic Lights, Warning Label, Positive Choice tick, and no label. The food categories tested ranged from pizza and drinks to cakes, crackers, yogurts, and cereals, and consumers were asked to rank items within each category under the different labels. Across all groups, the probability of correctly ranking the healthiest product was significantly greater with Nutri-Score. A similar study in Italy, comparing Nutri-Score specifically with NutrInform, likewise confirmed Nutri-Score's success at promoting healthier foods over other labeling systems [3]. Scientists have gathered substantial data demonstrating the effectiveness of the Nutri-Score system over other systems, allowing it to be considered "the consistent tool for dietary recommendations" [4].

Nutri-Score in the U.S.: A Game-Changer for Healthy Eating or a Recipe for Controversy?

By Julia Williams

The fast-food industry thrives on maximizing profitability, often at the expense of public health. In 2010, McDonald's Corporation had an average stock price of $48.25. Over the following 14 years, that figure climbed to $298.56 as of November 14, 2024, putting the stock at 618.78% of its 2010 value, an increase of more than 500% [2].

 

Other fast food companies have experienced the same large growth seen by McDonald's, largely due to their advertising efforts. Companies invest billions of dollars annually in advertising campaigns designed to target vulnerable populations, including children and low-income families. A 2021 study by the Rudd Center for Food Policy and Obesity detailed that in 2019 alone, fast food advertising expenditures in the United States exceeded $5 billion [3]. These advertisements are deliberately designed to be visually appealing and emotionally resonant, using bright colors and recognizable songs and offering incentives to foster brand loyalty from an early age. Children are particularly vulnerable to such tactics, as their cognitive development limits their ability to critically assess the persuasive intent behind these ads. The Rudd Center's research underscores how these marketing efforts not only increase short-term consumption but also establish lifelong dietary patterns that prioritize convenience over nutrition. This cyclical relationship puts on full display the systemic nature of the public health challenges posed by the fast-food industry.

 

These advertisements often promote calorie-dense, ultra-processed foods with low nutritional value, contributing significantly to the rising prevalence of obesity in the United States. The CDC reports that over 42% of adults are classified as obese, a condition linked to increased risks of heart disease, diabetes, and certain cancers [4]. Obesity not only shortens life expectancy but also imposes substantial economic burdens on healthcare systems, with estimated costs exceeding $173 billion in the United States per year. The CDC’s findings illustrate how dietary habits shaped by fast-food consumption have long-term consequences that extend beyond individual health, affecting societal structures and economic stability. Furthermore, obesity disproportionately impacts marginalized populations, exacerbating existing health disparities and raising ethical questions about the responsibility of corporations in addressing these inequities.

 

The proximity of fast-food establishments to schools is a prime example; a study published in the American Journal of Public Health found that schools located closer to fast-food restaurants reported higher adolescent obesity rates [5]. The study's authors noted that the convenience and affordability of these establishments make them attractive options for students, particularly those from low-income families. This accessibility not only normalizes the consumption of unhealthy foods but also limits exposure to healthier dietary choices, reinforcing poor eating habits that persist into adulthood. The spatial dynamics of fast-food placement reveal a calculated strategy to capitalize on the dietary vulnerabilities of youth, further entrenching the public health challenges associated with obesity and related illnesses.

 

Efforts to counteract these trends have faced significant resistance from the food industry. Regulatory measures, such as taxes on sugary drinks or restrictions on advertising to children, have shown promise in curbing consumption. However, these initiatives are often met with well-funded legal challenges and public relations campaigns. For example, New York City's 2012 proposal to ban large sugary drinks was struck down after intense lobbying by the beverage industry [6]. The beverage industry's campaign framed the regulation as an infringement on personal freedom, successfully diverting attention from the public health benefits of reducing sugary drink consumption. The episode demonstrates the immense influence of corporate lobbying in shaping public discourse and undermining health initiatives. It also underscores the ethical dilemmas policymakers face in balancing individual freedoms with the collective need to address the public health crisis linked to unhealthy dietary habits.

Is Convenience Worth the Cost?  
Examining Corporate Responsibility in the Brain-Gut Axis Crisis

By Avneesh Saravanapavan

Still, beneath the appeal of kale smoothies and Mediterranean meal plans lies a web of ethical questions. Is the science behind these recommendations sound and unbiased? What about patients already burdened by managing their conditions—are they equipped to embrace this approach?

The promises and pitfalls of nutritional pharmacology reveal a complex ethical labyrinth. Food may indeed be medicine—but who gets to dine at this therapeutic table? And how can its integration be balanced with the role of traditional pharmaceuticals?

Food has long been regarded as central to human health, but its role as medicine is gaining renewed attention in contemporary healthcare. Nutritional pharmacology—the study of how nutrients and diet influence health outcomes—embodies this concept, emphasizing the potential of food as a therapeutic tool (Fardet, 2014). The notion of “food as medicine” advocates using specific dietary strategies to manage, prevent, and even treat chronic illnesses such as diabetes, cardiovascular disease, and autoimmune disorders (Bogan, 2021).

As chronic diseases become increasingly prevalent, diet-based interventions are emerging as compelling alternatives to conventional pharmacological treatments. However, this promising paradigm raises critical ethical concerns. Questions of accessibility, scientific rigor, and patient autonomy challenge the equitable and evidence-based application of food as medicine. While nutritional pharmacology holds immense potential, its integration into chronic illness management must address these ethical complexities to ensure fair and effective outcomes.

Food as Medicine: A Modern Take on an Ancient Practice

Food has long been central to human health. Ayurveda and Traditional Chinese Medicine, for example, emphasize the therapeutic potential of food, using it to harmonize the body's systems and treat ailments (Goyal, 2024). In these systems, food is not just nourishment – it is a foundational tool for health and healing.

Modern medicine, however, sidelined this holistic perspective. The rise of pharmaceuticals in the 19th and 20th centuries shifted the focus to chemically synthesized drugs as the cornerstone of treatment (Gittelman, 2016). Food, once central to healing, became an afterthought. 

Today, "food as medicine" is experiencing a resurgence, supported by growing research into how specific diets can prevent and manage chronic diseases. For example:
- Low-carbohydrate diets stabilize blood sugar in diabetes management (Arora, 2005).
- The Mediterranean diet, rich in fruits, vegetables, healthy fats, and lean proteins, reduces cardiovascular risks and promotes longevity (Pérez-López, 2009).

Yet, one key question persists: Can food replace pharmaceuticals, or is it best seen as a complement? While dietary changes may reduce medication needs for some, others require both food and medication to achieve optimal health. Thoughtful integration of these approaches can help create a more personalized healthcare model.

The Ethics of Food as Medicine: Navigating Nutritional Pharmacology

By Kunyu (Kimi) Du

While industrialization has increased food access and choice for many Americans, making complex and specific nutritional recommendations like those from the CDC reasonably attainable, one in ten Americans still faces food insecurity. Even with the support of social programs such as the Supplemental Nutrition Assistance Program (SNAP), Temporary Assistance for Needy Families (TANF), and the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC), 10.2% of the US population struggled with food insecurity in 2021 [10]. Not only does food insecurity increase an individual's risk of chronic health issues, it also drives up healthcare utilization and cost; these increased expenses deepen financial hardship and perpetuate a cycle of poor health and poverty. As a result, for many low-income Americans, the concern around planning a meal is less about nutritional balance and more about ensuring there is food on the table.

The unfortunate reality is that Americans must be mindful of the food they consume. In 2017, the United States recorded the highest obesity rate among OECD nations—approximately four times higher than the country with the lowest rate—and the highest adult chronic disease burden, double that of the nation with the lowest burden [13]. Simultaneously, American healthcare expenditures have reached unprecedented levels. In 2022, "U.S. health care spending grew 4.1%, reaching $4.5 trillion or $13,493 per person" [8]. This is over $4,000 more per person than in any other high-income nation [12]. In spite of these massive expenditures, the nation's health outcomes remain poor. Yet, with such high volumes of illness burdening the healthcare system, reducing spending is not a viable option. If treating illness alone is not enough, a greater emphasis on preventive measures is essential to alleviate the strain on the healthcare system and improve public health.

Recognizing the disconnect between healthcare investments and outcomes, the United States has begun to shift its focus toward preventative healthcare measures. In 2018, the United States passed the Agriculture Improvement Act, which allocated $4 million of funding each year to support Produce Prescription program pilots between 2019 and 2023 [7]. These programs allow medical professionals to provide "at-risk patients," identified on the basis of a medical diagnosis or qualifying income level, with "prescriptions" for improving their diet. These prescriptions provide incentives for social and behavioral change in the form of nutrition education programs and vouchers redeemable for fresh produce at select markets. By addressing the financial and social barriers to healthy eating, Produce Prescription programs aim to alleviate food insecurity and reduce the risk of chronic illness among low-income families.

Produce Rx: Can Produce Prescriptions Tackle Food Inequality in the United States?

By Yurika Sakai

One major concern is the lack of strict regulatory oversight, which compromises consumer safety and informed decision-making. Unlike pharmaceutical drugs, dietary supplements are not required to go through the rigorous approval processes set by the Food and Drug Administration (FDA). Consequently, they can be produced without proof of safety or efficacy, leading to issues with impurities, hidden ingredients, and inconsistent amounts of active substances, since manufacturers may not always follow good manufacturing practices [4]. For example, some supplements have been found to contain prescription drugs or other substances not listed on the label, posing serious health risks to certain populations.

On a similar note, this lack of regulation has real-world consequences. Each year, approximately 23,000 emergency room visits in the U.S. are linked to dietary supplements [3]. Weight-loss and energy products are primary culprits, especially among young adults aged 20 to 34 [3]. These supplements often contain stimulants and other ingredients that, in high doses, can cause serious issues including heart problems and high blood pressure. For instance, products containing the stimulant ephedra were linked to adverse cardiovascular events before being banned by the FDA. Young adults are not the only ones affected by this lax regulation. Children who accidentally ingest supplements account for over 20% of these ER visits, in part because vitamins and similar products do not employ child-resistant packaging [3]. A toddler who gains access to such supplements risks serious complications and, in extreme cases, death. Older adults face risks as well—large supplement pills are choking hazards for this population [3]. These considerations highlight serious failures in product labeling and design, failures that leave vulnerable demographics unprotected.

Misleading labels and advertisements further exacerbate matters by exaggerating benefits and downplaying risks. The Dietary Supplement Health and Education Act of 1994 affords manufacturers considerable freedom to make vague health claims without providing robust evidence that their products are safe or effective [2]. They may include phrases like "supports immune health" or "boosts energy," which sound appealing but are not necessarily supported by clinical evidence. Some companies have even gone so far as to make illegal claims that their products can treat or cure diseases [6]. Such baseless claims undermine the public's ability to make fully informed choices about their health and mislead people into purchasing faulty products. Moreover, this lax regulatory environment contributes to the public's false sense of security. People may assume that if a product is available for purchase, it has been tested and approved for safety and effectiveness, which is not the case for most supplements [6]. Those looking for affordable health solutions may waste money on ineffective or even harmful products instead of investing in proper medical care.

Exploring the Ethical Issues Surrounding Dietary Supplements

By Dayaal Singh

The current recommendation for treating obesity emphasizes lifestyle modifications, including dietary changes, increased physical activity, and behavior therapy [3]. Yet the prevalence of obesity continues to rise, along with the adverse health outcomes associated with it [4]. With more than 1 billion people in the world living with obesity, research into alternative treatments for the obesity epidemic has become a top priority [5]. This has led to a surge in research efforts to combat obesity, including the development of pharmaceutical medications like Ozempic, a brand name for semaglutide. While it was originally developed for patients managing type 2 diabetes, semaglutide has demonstrated promising results in aiding weight loss [6]. It helps patients with diabetes by stimulating the release of more insulin, which in turn lowers blood sugar levels, and it has shown promise in clinical trials for facilitating weight loss [7].

Ozempic (semaglutide) belongs to a class of medications known as GLP-1 receptor agonists. GLP-1, or glucagon-like peptide-1, is a hormone naturally produced in the body that plays a key role in regulating blood sugar levels and appetite. By mimicking the effects of this hormone, Ozempic enhances the secretion of insulin in response to meals, and suppresses the release of glucagon. These actions not only help manage blood sugar levels in patients with type 2 diabetes but also promote feelings of fullness, making it effective for weight loss in individuals with obesity.

Despite its intended use for diabetes management, Ozempic gained widespread attention through social media, where it was often hailed as a "miracle drug" for weight loss. Many influencers promoted these drugs as short-term methods to lose weight, leading many people to use the medication off-label for cosmetic purposes and contributing to a low supply of a highly demanded medication. This shift in usage underscores the ethical, social, and economic dilemmas surrounding the allocation of such a limited and impactful medication.

Gatekeeping Health: Ethical Challenges in Allocating Ozempic

By Raphael Lee

The term "food desert," though commonly used, is problematic and regressive, as it masks the structural issues that prevented over 9 million Black Americans from accessing enough food to lead a healthy, active life in 2023 [1]. The word "desert" insinuates that these areas are small, insignificant, and barren, home to a select few unfortunates, when in fact they are the result of decades of institutional neglect and racist policies. A study published in the Fordham Urban Law Journal examining over 150 American towns found that zoning ordinances in low-income communities were much less likely to incentivize grocery stores and supermarkets to establish business there [2]. These ordinances tended to create concentrated areas of poverty unappealing to businesses, while property values and leasing costs directly adjacent to these pockets of poverty skyrocketed, ultimately causing stores to turn to more affluent areas where a net gain and maximum profit were guaranteed. This created a cyclical pattern: when towns failed to accumulate wealth over time, they became areas that grocery chains deliberately avoided.

As activist Karen Washington argues, the term "food desert" is an outsider's label that reduces people living in these communities to a statistic, instead of recognizing the injustices that give rise to food inequity in the first place. Washington's disdain for the desert label is clear: during an interview with The Guardian, she asserted, "deserts are natural and have food, food deserts are man made—not natural" [3]. Additionally, when one thinks of the word desert, the imagination immediately turns to tumbleweed rolling across a sandy, arid landscape absent of life. With this Saharan image in mind, sociological "deserts" suggest that conditions of food inequity are inherent to Black and Brown neighborhoods, or that grocery chains have simply declined naturally, completely ignoring the history of economic segregation in low-income areas of color. It is in this description of a dry, "dying" wasteland that we preemptively bury citizens lacking equitable food access while they are alive and well.

Instead, the term "food apartheid" is gaining traction among food justice leaders, as it more accurately reflects the racialized nature of food inequality. Apartheid, a system of institutionalized segregation and discrimination, is precisely the framework needed to understand how these communities were intentionally excluded from access to healthy food through racist zoning laws, redlining, and disinvestment in Black and Brown neighborhoods. In fact, "food apartheid" rightfully shifts the conversation away from the physical location of supposed "food deserts," as no particular neighborhood is safe from systemic discrimination. Changing how we describe food apartheid linguistically can, in turn, change broader attitudes toward and perceptions of the victims of inequity.

The experience of food apartheid, much like the broader experience of racial oppression, is one of disempowerment and detachment from one's own body. In his touching work Between the World and Me, Ta-Nehisi Coates uses the term "disembodiment" to describe the devastating effects of racial oppression [4]. He applies this term not only to the physical brutality of police violence, but also in a more metaphysical sense, referring to how racism robs individuals of their agency and humanity. When a community is denied access to healthy food, it reflects a larger system structured to view these individuals as less than human, and thus less deserving of proper, nutritious food than their white counterparts.

Food Apartheid: Recontextualizing Food Inequity in Black America

By Shameema Imam

“Disembodiment is a kind of terrorism, and the threat of it alters the orbit of our lives, and like terrorism, this distortion is intentional. Disembodiment.”

- Ta-Nehisi Coates, Between the World and Me

Interestingly, almost every country in the world bans direct-to-consumer (DTC) ads for health products, such as medical procedures or prescription medication, except two: New Zealand and the United States, where an FDA policy change in 1997 allowed pharmaceutical companies to advertise directly to consumers. From online video ads to painted panels on buses, Americans every day are barraged with a plethora of ads for prescription medications for just about everything: diabetes (the Ozempic jingle), depression medication, insomnia, or even prescription nasal spray. Since then, annual spending on DTC ads by pharmaceutical companies has grown from $2.7 billion to nearly $10 billion, with companies raking in more than four dollars in profit for every dollar spent on DTC ads [3,4]. While DTC ads have drastically increased both patient requests for prescriptions and clinician prescribing [5], their effects on consumer wellbeing have been less obvious. Though limited, FDA regulation of prescription drug DTC ads does contain certain guidelines, like providing the generic name of the drug in addition to the brand name, advertising only the FDA-approved use of the drug, and sharing the drug's most significant risks. However, liberties can be taken, as seen with the last clause: DTC print advertising of drugs must include a full summary of the drug's adverse effects, while broadcast advertising need only provide "adequate information" about adverse drug effects [6]. While these limitations are all meant to ensure the FDA's goal that prescription drug advertising and information be "truthful, balanced, and accurately communicated," even with the FDA regulations, and sometimes as a result of the regulations themselves, DTC drug advertisements can intentionally or unintentionally play on consumer biases and cause them to make uninformed decisions. DTC marketing techniques that mislead viewers, engage in predatory strategies targeting vulnerable consumers desperate for a treatment, create false claims, exaggerate benefits, or minimize harms all warrant ethical concern. The ethical concern raised here, arising from the tension between drug ads' purpose of educating and informing and their inherent financial incentive, is also one of controversy.

For example, the ubiquitous 60-70 second DTC drug ads on television often follow the same format: catchy jingles and vivid visuals driving home the benefits of the drug with repeated motifs, and then, around the 50-second mark, a fast litany of side effects chunked together in a monotonous tone. Although these ads serve to inform and educate, it is no coincidence which information we are made more likely to remember and which we are made more likely to forget. Cognitive scientists tell us that the 40-50 second mark is when we are most likely to forget information, and chunking the information together makes it even harder to retain [7].

The challenge of regulating DTC drug ads for consumer safety often calls into question the inherently ethical issue of a commercial approach to prescription drugs, which raises the question: should DTC drug ads be legal in the first place? Research shows that DTCA (direct-to-consumer advertising) does not actually contain enough information to enable consumers to make an informed decision, and patients commonly hold dangerous misconceptions about DTCA, such as the belief that "only safe medications are allowed to be advertised" [6]. In one study, scholar of medical sociology Jeffrey Lacasse counters the educational value of DTCA, instead finding a "substantial disconnect" between scientific literature and DTC advertisements [8].

Yet many surveys report that DTCA is effective in encouraging patients to initiate more conversations with their physicians about their medication treatment and to have helpful discussions about drugs that might benefit them. A national survey conducted by the FDA in 2002 found that 43% of participants sought additional information about a product seen in DTCA, with 89% of those who sought information reaching out to their doctors. Although medical appointments scheduled solely because of DTCA were rare, 35% of respondents in another nationally representative survey said DTCA prompted discussions regarding health concerns and treatment options at appointments. One locally sampled survey even found that around 11% of participants were motivated by DTCA to seek medical care [9].

Truth, Lies, and Prescription Pills: The Ethical Dilemma of Pharmaceutical Advertising

By Yujie Sun

Our gut microbiome consists of microscopic organisms—bacteria, viruses, etc.—in our intestines that provide services for our body, such as digesting the food we eat, training our immune system, and even interacting with our nervous and endocrine systems, in exchange for nutrients and shelter. The gut microbiome is also unique to each person, forming from microbes acquired from the mother, from diet, and from environmental exposures [1]. In terms of food, fibers cannot be absorbed as nutrients for our own use, but they feed the microbes in our bodies so that they can carry out their functions; our food choices also allow us to introduce bacteria to our microbiomes through fermented foods such as yogurt and kimchi [2]. When we think about the gut microbiome, the popular diet-culture saying about healthy eating, "you are what you eat," applies.

The first topic that relates to the gut microbiome is fermented foods. Fermented foods are one aspect of modern-day diet culture, but, unlike trends that emphasize weight loss (such as calorie counting), the focus around these foods is on healthy, balanced eating, which may eventually lead to healthy weight loss. Looking further into the biological effects, fermentation is a process that occurs when microorganisms break down sugars, which increases bacteria that are beneficial to the gut microbiome [3]. These microorganisms can produce a certain type of fatty acid that can be used as energy by colon bacteria. This helps the body with various functions, such as regulating metabolic pathways. The role of microorganisms in maintaining a balanced microbiome can help individuals who are looking to lose weight by ensuring that functions in the body are working properly, which explains their value in diet culture today. Therefore, unlike some diet culture trends that arise without scientific basis, there is science behind the beneficial effects of fermented foods on the body.

The Relationship Between Diet Culture And The Microbiome

By Jiyu Hong

In the early twentieth century, the upper Midwest was known as the "goiter belt," a consequence of the lack of iodine in the soil [5]. The crisis was so severe that 30% of registered draftees in the region were disqualified from service in World War I due to enlarged thyroids [5]. To combat this, the United States began a comprehensive salt iodization program [6]. As a result of this effort, health issues related to iodine deficiency drastically decreased, eliminating the goiter belt [7]. At first, iodized salt was introduced alongside its unaltered counterpart. Today, although companies are not required to add it to their products, the vast majority of table salt in the United States contains iodine [6]. Most Americans have the luxury of overlooking the complex chemical processes behind their seemingly innocuous table salt.

As a wider trend, the process of bolstering consumable goods with micronutrients is known as food fortification [8]. This has become a mainstay of the United States nutritional market, with companies regularly fortifying products from cow’s milk to breakfast cereal [9]. Because food fortification is voluntary on the part of food companies, consuming fortified food is a matter of economics rather than healthcare. To move this decision to the healthcare field would require government action, limiting the right of a consumer to choose their food options in exchange for a guarantee of health benefit. This raises a moral dilemma: Is it the government’s responsibility to manage an individual’s nutritional health? Further, should consumers have the right to reject fortification of their food?

The Food and Drug Administration (FDA) has a long history of federal food regulation. While the agency’s original stated purpose in the 1906 Food and Drugs Act was to protect the consumer populace from harmful or mislabeled substances, the scope of federal regulation has increased over time [10]. For example, the FDA now regulates food additives (such as coloring and preservatives) to dictate which substances, and at what quantities, companies may add to their products [11]. 

If the government aims to protect the wellbeing of the populace, it is clearly worthwhile to enact preventative regulations, such as ensuring that poison is not sold as cough medicine in grocery stores. However, is it ethical to mandate the addition of iodine to table salt, even for a noble cause such as fighting goiter? To resolve this question, the FDA requires that every fortified food have an unfortified alternative on the market [6]. But considering that nutritional deficiency is one of the most significant contributors to disease in the United States [12], would it actually be immoral not to subsidize an individual's chemical wellbeing through their food?

These questions point to a deeper debate in medical ethics: whether the government should act paternalistically regarding healthcare issues. This question occupies a unique place in the American cultural zeitgeist. For example, while nearly 88% of surveyed Americans reported support for the Social Security system [13], only 45% would approve of a single, taxpayer-financed universal government healthcare program [14]. Social Security, while in many ways distinct from a public healthcare system, removes agency on the part of the citizen in exchange for incorporating savings into their lives. Similarly, public healthcare is managed by the government rather than the individual, freeing up mental space for the insured. Mandatory food fortification offers the same tradeoffs: customers lose the agency to choose non-fortified food options, but gain the assurance of nutritionally rich food.

Food Fortification as Healthcare Policy: The Ethics of Removing Choice

By Charlie Rubsamen

Although not the template for the male experience, many aspects of this story are likely relatable to most men reading this. Yet the presence of eating disorders in men is too often dismissed due to modern stereotypes, male comorbidity, and a lack of proper treatment methods (Harvey & Robinson, 2003, p. 304). Framing my approach to these issues of diet culture, body standards, and stereotypes, I'd like to address each of these three areas of dismissal to provide a more comprehensive view of the major roadblocks that make eating disorders so pervasive in male populations. Further, it would be remiss not to immediately qualify my previous statement by noting that women experience eating disorders with the same severity and at the same widespread scale throughout their populations. Rather, my focus on male populations stems from the often more surreptitious nature of male eating disorders, since men rarely express, let alone notice, that they have an eating disorder. Yet men account for over a tenth of those struggling with anorexia and bulimia and over a fourth of those dealing with some form of binge eating disorder (Weltzin et al., 2005, p. 188). These percentages are clearly dire, so why does society consistently dismiss over three million men, spanning adolescence to adulthood, struggling with disordered eating?

To address why that question has so few answers, we can begin with our first pertinent roadblock: toxic masculinity. The modern stereotype for men is that any form of reactivity, vulnerability, or emotion is generally shunned, since men are, even in modern society, expected to be apathetic and pragmatic. This concept has been well discussed in modern research, but there is an ideological gap in its connection to eating behaviors and diet culture; the concepts are often mutually inclusive (Rotundi, 2020, p. 54). Toxic masculinity works on the front end, since younger men are often pushed to strictly follow "healthy" protein diets that can lead to malnourishment, and toward an excess of physical exercise, which can lead to expressive suppression (that is, bottling up their emotions). However, I am more concerned with its work on the back end: once men develop these eating disorders, toxic masculinity's notions of internalization and its de-emphasis of therapeutic recourse mean that these struggling men then choose not to reach out for help, an especially troubling thought when the suicidal ideation rate for male adolescents is over 45% (Patel et al., 2021, p. 78). Still, the subject of toxic masculinity is difficult in modern society, since framing men as the victim is tricky from two standpoints: (1) they are the victims of their own system, and (2) it often detracts from the struggles of the main victims in a patriarchal society. I intend to establish a clear, coherent distinction between these two points by acknowledging that while men are victims of their own system, we can only move past this by working to destigmatize psychological support and outreach for men. Second, it is this same system of toxic masculinity that has disproportionately led to the struggle of so many women for equality, opportunity, and free expression, and thus it is imperative to acknowledge that women face similar issues relating to toxic femininity; simply put, men are not victims within the patriarchal system, but rather victims within the system of toxic masculinity as it relates to body standards.

Toxic Masculinity in Nutrition: Targets of Diet Culture, Body Standards, and Stereotypes

By Gage Gruett

The ‘aversive stimulus’ drug that Alex used is an example of a behavior-modifying medication: a drug that affects the central nervous system to influence behaviors, thoughts, mental processes, or mood [2]. The dispensing of certain behavior-modifying medications has increased in recent years; for example, a 2024 cross-sectional study by FDA researchers revealed an 81% increase in prescriptions for nonstimulant ADHD medications among individuals aged 20 to 39, from 2018 to 2022 [3]. 

Such pharmaceuticals include mental health medications like antipsychotics, antidepressants, mood stabilizers, and stimulants. These drugs can be used to lessen the behavioral symptoms associated with psychiatric disorders, such as bipolar disorder, ADHD, and depression [4]. Many individuals opt to take these substances to manage their disorders, as several of them have been shown to be highly successful in reducing behavioral symptoms; for example, a 2024 meta-analysis conducted by several research institutions suggests that Vyvanse, an ADHD medication, is effective in alleviating the symptoms associated with the disorder [5]. However, as presented in A Clockwork Orange, others also take these medications as a result of recommendations or requirements from legal organizations or psychiatric professionals, whether to improve their behavior, stabilize moods, or conform to social expectations. This brings up complex ethical questions regarding the role of behavior-modifying drugs in shaping or controlling behavior. Are these medications a means for personal improvement, or just a means of conformity? Furthermore, is it ethical to recommend or mandate behavior-modifying drugs to people who do not seek them? 

Behavior-modifying drugs serve to address two complex forces: society and oneself. Those who aim for self-improvement for their mental illnesses, such as depression, can benefit greatly from seeking help and being prescribed medications. Historically, around a third of depressed individuals seek help to manage their symptoms, likely to improve their quality of life [6]. Thus, the prescribing of antidepressants such as SSRIs can be a beneficial tool for said patients, as research suggests that they can be effective in improving depressive symptoms [7]. Therefore, the use of antidepressants can be empowering and used as a form of self-care.

However, as illustrated by the state convincing Alex to partake in the Ludovico Technique in A Clockwork Orange, these medications are sometimes used to benefit the lives of others, not simply the person who is prescribed them. This is also demonstrated in educational settings. For example, in public schools, while it is prohibited to mandate that a child take medication, teachers are permitted to suggest that a child undergo evaluation if they show signs of struggling in class or display symptoms of disorders like ADHD [8]. To help these students, schools could invest in hiring more special education specialists and extending their practices beyond general education. Instead, some children are simply prescribed behavior-modifying medications, such as Ritalin or Adderall, to help them meet classroom behavioral expectations. This raises crucial ethical questions: are people genuinely giving voluntary consent to these medications, or are they being subtly coerced by our society's norms? And should we be asking individuals to 'normalize' their behaviors, or should we work towards a society that more readily accommodates individual differences?

A second key issue regarding the use of behavior-modifying drugs is the potential stripping of one's autonomy. In A Clockwork Orange, Alex is stripped of his autonomy by the drugs used in the Ludovico Technique, restraining his ability to act on his true desires and impulses. This makes his improvement in behavior considerably insincere, as it is simply a result of his loss of free will. While this example comes from a fictional story, similar events occur in real life: for individuals over the age of 18, a court can order compliance with medication administration, even if the person has been involuntarily evaluated [9]. While dispensing these drugs may be an effort to benefit the well-being of such patients, their autonomy is lost, and their 'improvements' are likely caused more by the drugs' effects than by personal change.

The Ethics of Behavior Modifying Drugs: From A Clockwork Orange to Modern Medicine

By Carlota Hermer

Diet Culture and Celiac Disease 
Diet culture often promotes restrictive eating for excessive weight loss. In current media, many celebrities and magazines endorse a gluten-free diet as being "healthier" and as helping achieve the goal of thinness. For example, celebrity and influencer Kourtney Kardashian recently put her family on a gluten-free diet, saying that it significantly "improved their quality of life" [5]. Watching these stars choose a gluten-free diet influences the average person to do the same, especially if those celebrities have a desirable body type. Diet culture has turned gluten-free eating into a trend rather than a medical necessity. While this trend has driven market growth and expanded the availability of gluten-free options, it has also trivialized the condition, leading to misconceptions and stigma, and it has increased the price of goods, making them less accessible for those with Celiac disease. As an article by Georgia State University points out, "With an increase in demand, there is also an increase in price [and] Gluten-free products from mass-market producers have been shown to be 139% more expensive than the same gluten-containing product" [6]. With this knowledge, let's better understand how finances impact a gluten-free diet.

How Socioeconomic Factors Shape Access to Gluten-Free Food
Living with Celiac disease comes with a cost. According to the Celiac Disease Foundation, "gluten-free cereals, pasta, and snacks in the United States can be up to 139% more expensive than their gluten-containing counterparts" [2]. Additionally, traditionally gluten-free foods such as quinoa or special types of bread and flour tend to be expensive as well, so those with Celiac disease carry a substantial financial burden. A study published by the National Library of Medicine found that Celiac disease patients tend to have higher outpatient costs than non-Celiac patients, spending upwards of $4,000 more per month on medical necessities and proper food [3]. This situation raises many ethical questions. Why should individuals be penalized for a situation they have no control over? This inequity is consistent with broader medical disparities, disproportionately affecting people of color, who often face the worst outcomes due to higher poverty rates in the United States [4]. As with many other medical expenses, there is a clear link between financial stability and access to care, but is this link fair? Most would argue no, but what can be done to create equity? Unlike conditions that can be managed with insurance-covered medications, Celiac disease relies on food—an everyday necessity—as the primary form of treatment. Without systemic intervention, this inequity perpetuates a cycle where economic status determines health outcomes.

The Inequities of Celiac Disease

By Aditi Avinash

What is Food Security?
 
Food security is a simple concept: making sure everyone, everywhere, has enough nutritious food to eat. It’s about creating a global safety net, often through large-scale solutions. A classic example is the Green Revolution, which used new farming techniques—like high-yield crops and chemical fertilizers—to increase food production around the world. In countries like India and Mexico, these innovations helped avoid widespread famine and fed millions [2].
 
Fast forward to today, and science continues to drive food security efforts. One promising example is Golden Rice, a genetically modified strain designed to combat Vitamin A deficiency, which is a leading cause of blindness in children, especially in poorer countries [3]. Studies show that this rice could prevent thousands of deaths and disabilities, providing a powerful tool in the fight against malnutrition. But with all its potential, the ethical question lingers: is it okay to rely on genetically modified crops for feeding the hungry, or does this come at a cost to local farming methods and the environment?

 
What is Food Sovereignty?
 
On the flip side, food sovereignty takes a more grassroots approach. It’s the belief that local communities should have control over their own food systems, determining what they grow, how they grow it, and how they distribute it. Food sovereignty emphasizes cultural relevance, sustainability, and self-sufficiency [4].
 
A great example is Mexico’s traditional maize farming. In rural parts of the country, farmers continue to cultivate ancient varieties of maize, using methods passed down through generations. This type of farming is not just about food; it’s about preserving a way of life. Studies have shown that these traditional farming techniques are often more resilient to climate change and less damaging to the environment compared to industrial farming practices. Similarly, the Slow Food Movement, which started in Italy, champions local, sustainable, and culturally relevant food choices, helping to connect consumers with the farmers who produce their food [5].

But food sovereignty isn’t just about nostalgia for the past—it’s about making food systems work for the future. It’s about maintaining biodiversity, empowering local economies, and protecting ecological sustainability in the face of climate change.

The Ethics of Food Security vs. Food Sovereignty: Whose Nutritional Priorities Matter Most?

By Muskaan Toshniwal
