Aidian Sasenarine
Professor Nochomovitz and Professor Patrick
FIQWS 10013
24 November 2023
Helpful or Harmful: Artificial Intelligence Usage Within Healthcare
Part 1: What I Have Heard About Artificial Intelligence's Effect on Humanity
The topic I want to delve into is how the narrative surrounding Artificial Intelligence has
shifted across generations. AI started as a set of tools crafted to further human problem solving
and was viewed merely as an aid to human cognition; however, AI has grown far beyond that,
and a vast majority of the public has come to rely on the technology. Recognizing this shift led
me to dig into core memories that helped shape the idea into a workable research proposal. I
drew on memories of my mother discussing the implications that recent technology will have on
future occupations. My mom used to tell stories about deciding on a career path when she moved
from her home country of Guyana to attend college in America. She would narrate how she
struggled to choose between a path in business and one in science. She said that she felt
enthusiastic about both and that she had the free will to follow her dreams. However, when I was
applying to college last year, she stressed the importance of choosing a career that cannot be
overtaken by artificial intelligence. These ideas are reinforced in many aspects of life: data
analysts are being replaced, cash registers have become automatic, and human interaction has
diminished as technological advancement increases. The greatest problem society faces is that
we are not sure where AI will end, or whether humanity can be saved. I have even seen the
drawbacks of technology, and how people view them, portrayed in media culture. I was recently
watching Avengers: Age of Ultron, in which one of the protagonists, Tony Stark, creates a robot
to help protect civilians when he is unavailable.
However, the robot decides that humans are the problem plaguing the world and that they need
to be erased. This scenario captures the fear many people hold today: they believe that humans
have created AI to overpower human intelligence, and that eventually humanity will fall under
the rule of such Artificial Intelligence. Another major concern is that AI is taking over the
occupations of many, leaving people jobless. This again flips the story of how helpful the
technology truly is. Even the article Is Artificial Intelligence Going Too Far? addresses the same
issue. The article discusses the benefits for entertainment, as AI can produce realistic-looking
artwork from a few pressed keys on a keyboard. However, the author argues that this is a
double-edged sword, as AI-generated art can be used for false advertising and for spreading false
information. The article also highlights how such misinformation could find its way into
criminal trials, causing injustices. Another popular novel that touches on this concern is
Fahrenheit 451, which raises the idea of machine-driven medical care and how it might affect
communication.
Concerns about trust, comfort, and safety arise when patients must put their lives in the hands of
not another person but a robot. This relates heavily to health care, as the cornerstone of narrative
medicine is the idea that there can be seamless communication between patients and physicians.
Creative expression of the human mind enables a smooth flow of ideas from an informative
doctor to an attentive patient, and technology can either enhance or disrupt that communication.
The overall shift in the story that I am trying to highlight is how AI began as something that
helped human life and has now become a tool that may effectively ruin humanity. There are a
few different ideas that I want to explore throughout this research paper. I would like to uncover
the specific uses that AI has not only in the medical field but also in narrative medicine. I want to
see what scientists and physicians believe about how well AI could be integrated, and I would
also like to see whether media portrayals of AI are in any way accurate to what could happen in
the future.
Part 2: Research Paper
The advancement of Artificial Intelligence has started to weave itself into the fabric of
human life. From emerging as a cocoon of ideas for early machine learning, AI has
metamorphosed into a kaleidoscope of complexities, embracing thought processing, image
detection, and language translation. Furthermore, much research and many resources have been
invested in adapting AI to various occupations to improve efficiency within the workforce.
However, a growing area of interest at the intersection of AI and the professions is the medical
field. Great debates have divided opposing groups: supporters advocate for more AI usage to
relieve some stress from physicians, while dissenters believe that the complex roles a physician
performs should not be entrusted to technology, especially when a patient's life is at stake. This
essay primarily seeks to unveil how the changing reliance on AI over time shifts the narrative of
how it should be implemented within a medical environment, to the benefit of both the
compassionate physician and the hopeful patient.
Artificial intelligence, in its primitive form, resembled rudimentary tools set out to solve
simple tasks and conduct basic language processing. One such AI program was a Rogerian
psychotherapist chatbot, ELIZA. This chatbot, developed by Joseph Weizenbaum in 1966, ran on
a simple set of rules that allowed users to type basic questions and receive a canned response in
return. Akin to a limited oracle giving slight insight into a person's query, the program searched
for keywords in the user's input and used "pattern matching" to pair those keywords with
answers in its database.
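To make the mechanism concrete, the sketch below is a minimal, hypothetical illustration of the keyword and pattern-matching idea described above; it is not Weizenbaum's actual ELIZA script, and the rules shown here are invented for the example.

```python
# A minimal sketch of keyword "pattern matching": scan the input for a known
# keyword pattern and reflect a canned, therapist-style reply back at the user.
import re

# Hypothetical keyword-to-response rules; the real ELIZA script was far larger,
# ranked keywords by priority, and also swapped pronouns ("my" -> "your").
RULES = [
    (r"\bI feel (.+)", "Why do you feel {0}?"),
    (r"\bmy (mother|father|family)\b", "Tell me more about your {0}."),
    (r"\bI am (.+)", "How long have you been {0}?"),
]

def respond(user_input):
    """Return a reflected response for the first matching keyword pattern."""
    for pattern, template in RULES:
        match = re.search(pattern, user_input, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    # Default reply when no keyword matches -- the "finite number of answers"
    # limitation the public noticed.
    return "Please, go on."

if __name__ == "__main__":
    print(respond("I feel anxious about my test results"))
    # prints: Why do you feel anxious about my test results?
```

Even this toy version shows why ELIZA could seem to "listen": the reply is built entirely out of the patient's own words, with no understanding behind it.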
A high school professor in the late 1970s narrated his first encounter with this program,
stating that the coded machine delighted him and his students. He noted that the program played
the role of a therapist, as the chatbot "reflects on questions by turning the questions back
at the patient." This narrative from the 1970s shows that the public did draw a connection
between AI and the ideas of narrative medicine, as the author of this journal described how
ELIZA listened to the "patient's" concerns and attempted to address their problems. As a result,
there was interest in implementing AI in the medical field; however, the public seemed more
concerned with AI's limitation to a finite number of answers. Society did not rely much on AI at
this time, yet interest in AI's implementation in the medical field was established, held back only
by skepticism.
A decade later, the narrative shifted, with more people uncomfortable with the idea that
AI could be implemented in medicine. This unease stemmed from stronger AI processing and
recognition as more powerful programs arose. One such AI program popularized during the
1980s was MYCIN. Acting as a digital musician producing a symphony under only a human
conductor, this program attempted to conjure a diagnosis based on patient-reported symptoms,
focusing primarily on antibiotic treatment for bacterial infections. The expanding database of
information allowed for this rise of artificial intellect, but the public began to disagree over
whether it should be introduced into real practice.
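For a sense of scale, the fragment below is a deliberately tiny, hypothetical illustration of how a rule-based system of this kind pairs reported findings with a suggested antibiotic and a certainty value. The rules, organisms, and numbers are invented for the example; they are not MYCIN's actual knowledge base, which reasoned backward from hypotheses through several hundred rules.

```python
# A toy rule-based sketch: each rule lists required findings, a suspected
# organism, a suggested therapy, and a MYCIN-style "certainty factor".
from __future__ import annotations

# Hypothetical rules -- not MYCIN's real ones.
RULES = [
    ({"fever", "stiff neck", "gram-positive cocci"}, "streptococcus", "penicillin", 0.7),
    ({"fever", "burning urination", "gram-negative rods"}, "e. coli", "gentamicin", 0.6),
]

def recommend(findings: set[str]) -> list[tuple[str, str, float]]:
    """Return (organism, therapy, certainty) for every rule whose findings are all present."""
    return [
        (organism, therapy, certainty)
        for required, organism, therapy, certainty in RULES
        if required <= findings  # all required findings were reported
    ]

if __name__ == "__main__":
    patient = {"fever", "stiff neck", "gram-positive cocci", "headache"}
    for organism, therapy, certainty in recommend(patient):
        print(f"Suspected {organism}: consider {therapy} (certainty {certainty:.0%})")
```

Even in this toy form, the accountability problem the public raised is visible: when a rule fires and the suggestion is wrong, there is no one inside the program to hold responsible.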
Dr. Jeevanandam, in his article exploring the skills of MYCIN, noted that the public
"raised moral and legal questions about using computers in medicine" (Jeevanandam, 2023), and
he also stated that the program had a success rate of about 60%. The public asked who would
take responsibility if the MYCIN bot made an incorrect diagnosis. If the robot spoiled the
melodic symphony, who would be to blame? As a result, the narrative flipped from the initial
interest in adding AI to the medical field a decade earlier to doubt and skepticism.
Humanity's desire for perfection may be the source of these worries. During both periods,
the public focused more on the potential limitations than on the benefits. People agreed that AI
could benefit physicians when the blueprints were first laid out, but as reliance on AI to guide
physicians began to grow, the public did not want AI to have a hand in their medical treatment.
The relationship between reliance on AI and the public's support appears inverse: the more AI
could help physicians, the less the public wants to see it implemented. Moreover, if the conductor
did not feel the pressure of perfecting the composition, he would be less compelled to exceed
expectations in his profession.
From the 1980s to the modern day, AI has grown exponentially and has already been
implemented as a guide for physicians. One way AI has been added to the medical field is
through AI diagnostic imaging. This aid includes image interpretation that detects abnormalities
in medical images, image enhancement that makes those images clearer, and greater portability
through access to images on smaller devices. Although these programs have already helped
radiologists examine medical images, many people still have concerns and point out areas that
need improvement.
A study by Dr. Xiaoli Tang on the role of artificial intelligence in medical imaging
research identifies two main problems that prevent such imaging AI from widespread use in
healthcare. He states that a lack of organization between institutions using these AI programs, as
well as "HIPAA complaints" (Tang, 2020), could pose a serious threat to the usability of these
programs. Again, the narrative seems to focus on the limitations of these programs, and in this
regard the narrative has yet to change. The other complaint concerns the lack of clinical
execution of care: the images depict only general solutions to a disease and give no further story
or narrative of the illness beyond the overarching condition. Tang cites this as a major concern
because, although the imaging reliably produces accurate findings (unlike MYCIN), it fails to
soak in the entire story. What is changing, however, is the public's acceptance of implementing
AI technology. There was no major outcry when this AI was implemented, though concerns
about its utilization were still voiced. The technology is already so interwoven with everyday life
that the public accepts its use.
[Figure: annotated chest radiographs showing AI-flagged regions of concern (Tang et al., 2018, pp. 249–258)]
This figure further encapsulates the changes AI is bringing to healthcare. The images flag
areas of concern within a patient for common diseases; however, these areas only give a general
basis for the problem and fail to close read further to make the correct diagnosis. What if a
patient's illness originated from a unique incident? How would that be factored into what the
images produce? Physicians are still needed to fill in the holes of the story, but they now have in
front of them the outline of a potential solution based on prior results. This is dangerous,
however, as it could lead to improper diagnoses if a physician relies too heavily on these images.
The guide may start to become a sacred text that doctors consult instead of their own studies.
When this is coupled with the sentiment surrounding MYCIN, when physicians were still
weighing the idea of AI, against the current narrative of relying on AI imaging as a "guide," it is
clear that AI is only becoming more and more impactful. Like entropy, the order and structure of
life tends toward disorder over time. Viewed through the lens of AI and narrative medicine, the
practice of physicians becomes less ordered and meticulous as time progresses.
This still does not change the fact that critics of AI in medicine call for limited use of
such technology. One of the greatest concerns is AI's inability to process emotions, which is a
staple of narrative medicine. Being able to closely read a patient is by far one of the most
imperative skills a physician can have. Charon defines close reading as "the narrative
competency to recognize, absorb, interpret and be moved by the stories of illness" (Charon,
2005, p. 262). The resulting argument is that AI technology currently does not possess the ability
to comprehend such emotions and is incapable of the close reading necessary to fully treat a
patient. Critics cite ELIZA's core components as a fundamental reason why AI should not be
incorporated into medicine. They state that although AI has become more refined and
knowledgeable, it still requires tangible evidence to compute results. Furthermore, much of what
emerges from close reading is precisely what cannot be perceived by the untrained eye. AI can
only take what is displayed in front of it and lacks the ability to account for untruths and holes
within a patient's story. Even worse, close reading relies on the physician's genuine investment
in the patient's life and on being "moved" by their story. If AI cannot be "moved" by the story of
illness, is the precision of its treatment better than the meticulous care of a human physician?
A greater level of complexity is layered onto this argument, as some contend that AI's
guidance within the scope of narrative medicine is not energy efficient. Humans must proofread
the AI's results regardless of whether its diagnosis is correct, so energy is still expended to
ensure proper care. As a result, some question whether AI's guidance is beneficial at all. Much
like the problem MYCIN faced, medicine and the sciences are forever changing; this means that
although processing software may become skilled in medical treatment, it will never be perfect.
However, others may argue that human care is not perfect either and that, overall, AI may have
an edge. Furthermore, if someone were to average out all the health care delivered by physicians,
would that average care be much greater than the care AI would give? The perfect physician
would be able to closely read every detail of every patient and fully treat everyone who fell
under their care. Such a physician, however, may be rare or even non-existent, while AI
technology, though unable to restore everyone to full health, may be able to treat more patients to
a healthy level. Despite all this, it depends on who oversees these decisions. Will the government
favor AI in the future because it is more cost efficient, not in terms of conserving physicians'
energy, but in terms of monetary gain?
Implementation of AI can be seen in many professions besides healthcare; therefore, the
public's opinion may not matter, because the shift may be inevitable. One example is AI phasing
out jobs in grocery stores. This is eye-opening for critics of AI, as this artificial technology
already has the power to replace humans in a multitude of professions. In an everyday example,
grocery shoppers casually run into robots that "scan" the aisles for hazards to clean up. Moving
past the aisles, technology has already phased out cashiers, as self-checkout lines now extend
further than the traditional human-to-human checkout lines. This deployment of robots has
already eliminated certain jobs held by store employees and could eliminate more of their roles,
yet humans do not seem to mind these changes.
When crossing a toll, many drivers travel with an E-ZPass or similar transponder that
bypasses human contact. Even older forms of technology that require minimal effort but no
human interaction are fading. McDaniel states in his article, "over 96% of our customers are
electronic with their transponders like E-pass…with less than 4% using the exact change basket"
(McDaniel, 2022). Outside of healthcare, humans prefer services that require the least effort and
the least human interaction, even if that results in the complete removal of humans from those
occupations. Moreover, those in charge are complying and implementing these changes, which
suggests that they are cost effective. This ties back to the earlier argument: those who are not
completely invested in patient care may attempt to cut corners and implement AI technology out
of monetary concern.
Another web is spun when we circle back to the idea that this forward motion is inevitable.
The government may have already begun regulating how much AI is added, but would AI still
steamroll forward regardless of what the government wanted? AGI, or Artificial General
Intelligence, is expected to be on par with human intelligence in all aspects within the next one
hundred years. At least 90% of AI researchers believe this notion to be true, while half of them
claim it will be accomplished in less than 50 years. The research being done in this field is not
slowing down, and the temptation to use AI will grow as it could help gain a competitive edge in
every industry (Clark 2015). This dashes any hope critics have of abstaining from AI usage, for
once one party adopts it, everyone else must follow or lose the competition entirely.
Another, more nuanced aspect of life that has been meshed with AI is chess. Chess
engines, for example, are now a vital component of chess players' lives. Chess engines are
fundamentally AI that calculates the outcomes of a given chess position and its possible
continuations. Jacob Yothment writes that chess engines help "many players learn the game"
(Yothment, 2023). The idea is that amateur players play games against various levels of chess AI
until they become more acquainted with the game. On a professional level, however, chess
engines play a much more involved role. For top grandmasters, chess engines help compute the
outcome of a game from its opening to its end. In this manner, grandmasters can devise a
position they want to reach in an actual game and then use the engine to determine the moves
they must make to win from that position, or to determine whether it will result in a loss. Chess
players today are much stronger than players of past generations, and much of that is attributed
to the effect of chess engines. Therefore, if the same effect that strengthened chess players carries
over to physicians and their use of AI, there may be a general increase in productivity when AI
helps guide physicians.
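For a sense of how this look-ahead works, the sketch below applies the same exhaustive "calculate every continuation" idea to a deliberately tiny game (take one or two stones from a pile; whoever takes the last stone wins), since chess itself has far too many positions to enumerate this way. The game and code are illustrative only; real engines layer deep search, pruning, and evaluation functions on top of this basic minimax idea.

```python
# Minimax on a toy game: from a pile of stones, each side removes 1 or 2;
# whoever takes the last stone wins. The "engine" looks ahead through every
# continuation, in the spirit of the exhaustive calculation described above.

def minimax(stones, engine_to_move):
    """Return +1 if the engine wins with perfect play from here, -1 otherwise."""
    if stones == 0:
        # The previous player took the last stone and won, so whoever is
        # to move now has already lost.
        return -1 if engine_to_move else +1
    outcomes = [minimax(stones - take, not engine_to_move)
                for take in (1, 2) if take <= stones]
    # The engine picks its best continuation; the opponent picks the engine's worst.
    return max(outcomes) if engine_to_move else min(outcomes)

def best_move(stones):
    """Pick the take (1 or 2) that guarantees the best outcome for the engine."""
    return max((take for take in (1, 2) if take <= stones),
               key=lambda take: minimax(stones - take, engine_to_move=False))

if __name__ == "__main__":
    print(minimax(4, engine_to_move=True))   # +1: the engine can force a win
    print(best_move(4))                      # 1: leave 3 stones for the opponent
```

Grandmasters use exactly this kind of guaranteed-outcome search, at vastly greater depth, to decide which positions are worth steering toward.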
This has already been implemented through diagnostic imaging, where AI helps
physicians when an abnormality is detected. Furthermore, Rubeis, in his article Strange
Bedfellows: The Unlikely Alliance Between Artificial Intelligence and Narrative Medicine,
highlights how Artificial Intelligence could be used to cover the analytical portion "of collecting
and processing data, (so that the) physicians could spend more time with their patients" (Rubeis,
2020, p. 1). Much as chess engines allow grandmasters to advance their own game, physicians
could use AI to manage the analytical portion of medical treatment, which would be analogous
to the engines computing all positions. In addition, this manner of utilizing AI stops short of
making a medical diagnosis or recommendation, which remains the responsibility of the doctor.
This would accelerate patient treatment, as physicians could spend more time close reading and
connecting to the patient's story of illness instead of expending a great supply of energy on
processing data.
This becomes the primary argument for supporters of AI in medical practice: AI could be
a tool that conserves physicians' energy. Much like the imaging aids that radiologists already
use, AI can be turned to the physician's advantage. An even greater advancement that could help
physicians lies in the realm of communication. Supporters of AI in the medical field underscore
that modern technology possesses the skills to analyze metaphors and relay them back to
patients. Furthermore, advocates believe that such skills involve creative expression, which is
imperative within narrative medicine, and that AI could therefore be implemented into patient
care soon.
Like close reading, metaphors are critically important in narrative medicine, as they play
a strong role in translating complex ideas into simpler, more relatable concepts. In addition, they
provide comfort, as clear communication is established between physician and patient. In his
article, Barnden mentions that AI is familiar with metaphor recognition and may soon master this
pathway of creative expression. He explains that copious research has been invested in the
"advances, problem identifications and emphasis shifts" (Barnden, p. 38) of metaphor work, and
that the use of metaphors in the "real world" is not far removed. This would be ground-breaking,
as AI could effectively communicate ideas to patients in a way that is not stagnant and
uninventive. Moreover, physicians who face a language barrier with their patients would not
have to expend other physicians' energy on translating; instead, AI could translate for them
while explaining ideas in terms of metaphors. Modern technology being able to creatively
express ideas to patients would be a major piece of evidence for supporters of AI usage, as a
major component of narrative medicine could be guided by AI technology, removing a great
source of stress for physicians and adding comfort for patients.
There will always be two sides to the coin when dealing with AI in medical practice,
because of the grave effect it could have on healthcare regardless of how the narrative has
shifted. If AI can listen to patients' concerns, supporters will say it could save physicians' energy
for close reading the more detailed problems while leaving the broader questions to AI, whereas
critics would claim that the fundamental questions are just as important as the more in-depth
study, since all of it is included in close reading. Even more, the Relativity of Humanity Equation
states that the best outcome for treatment is equal to narrative medicine divided by patient care,
multiplied by the ideas of concerned, centered, and clear, squared. Now the question becomes
what is needed for a physician to be in peak condition. The reason the equation is squared is that
both the physician and the patient need to be in prime condition to achieve the best outcome. Is
the extra rest afforded by AI better for physicians, or is a complete disregard of AI necessary to
stay concerned, centered, and clear on the patient's treatment? Can a physician be centered if
they are running on low energy? These are all questions with differing opinions, but once AI is
introduced it is difficult to arrive at a definitive solution. In addition, AI as a tool, rather than the
primary focus, has shown promise in other professions, but skepticism still lingers, as the human
touch conveyed through all five senses is something a robot cannot replace. However, time for
patient care is limited and every second is valuable, so is it worth giving AI some of the busy
work to save time and reach other patients?
AI shows promise in metaphor analysis, and although it may be some time before it can
be utilized, that does not mean its contributions should be neglected entirely. Perfection can
never be attained in the scientific field; however, that is what the public demands if AI is going
to be expanded into healthcare, mainly because AI cannot be held responsible for its actions if
malpractice were to occur. There are endless avenues in which AI can be utilized within the
medical field, yet it is hard to determine which will be successful and which will be harmful.
Even more, it is impossible to achieve a standardized model of AI for everyone, even within one
of these avenues. Outside factors such as media portrayal further shape people's perceptions of
AI. In the novel Fahrenheit 451, the main character calls for help for his wife, who has overdosed
on sleeping pills. The machine operators arrive immediately; however, Bradbury gives them a
negative connotation, describing them and their snake-like machine as callous and uncaring.
Furthermore, in the TV show Young Sheldon, Sheldon's family purchases a computer carrying
ELIZA's programming, but the show makes the program look extremely outdated and incapable
of simple functions, even for the period in which it is set. Overall, it may be too late to decide
how much the public wants AI to be integrated into medicine. Other professions have already
phased out human interaction completely, and given how tangled AI is in these aspects of life,
medical applications may soon follow. Life is already caught in the cobwebs spun around AI, so
it is nearly impossible to pull ourselves away from it or to fully organize and control it.
Shifts in AI capabilities over the past seven decades have made people more aware of
how much AI can be implemented into medical practice. From a simple chatbot, to a faulty
diagnosis program, to systematic and reliable imaging-aid technology, AI has
grown extensively. However, alongside AI has grown a palpable fear of change, and this fear is
the source of the hesitancy that keeps people from fully supporting AI. Moreover,
nobody knows which turn AI will take. It seems inevitable that AI will be utilized in narrative
medicine, so physicians and patients alike should attempt to harness its capabilities while
ensuring that the touch of humanity is not lost along the way. This journey into the future of
healthcare is not a solo expedition of technology but a collaborative dance, where AI and human
empathy intertwine, creating a harmonious narrative that prioritizes both innovation and
human experience.
Part 3: What I Have Learned
Originally, I believed it was far-fetched to think that Artificial Intelligence could be utilized in
the medical field, let alone in narrative medicine. However, through this research paper I have
realized that the medical field has not only been impacted by artificial intelligence but has also
adopted the technology into its practice. One specific example I remember reading about is the
introduction of AI in radiology to help radiologists line up images. Researchers state that the
problem lies not with AI's capabilities but with the ability of humans to create a unified AI system
that different institutions could use, something analogous to a database. Furthermore, in narrative
medicine, scientists mentioned points that I did not even think of when wondering about AI’s
implementation in this aspect of health care. A specific academic resource explained that AI
could be utilized to crunch analytics while the physician could spend quality time close reading
and bonding with their patient. As a result, my perception of AI has changed; however, there are
still some facets of AI that need improvement. The misinformation, and the occasional lack of
truthfulness within AI suggests that this technology is still a few decades removed from being
completely functional within a medical setting. Therefore, my perception did shift in certain
aspects but stayed constant in other regards.
To conclude, the research done for this paper further complicated the situation, as AI is not
at a definitive point of progression. The progression of AI in radiology is at a different point than
the progression in narrative medicine. Furthermore, the results and future of AI and its
capabilities are unknown. The truth is, it is impossible to determine AI's ultimate effect, yet
society needs to preserve humanity. This is a difficult topic to discuss and think about; however,
the most imperative idea to keep in mind is that society needs to control AI and harness its
capability before AI takes control.
Literature Cited
Barnden, J. A. (2008). Metaphor and artificial intelligence: Why they matter to each other. The
Cambridge handbook of metaphor and thought, 311-338.
Bradbury, R. (1981). Fahrenheit 451. Ballantine Books.
Clark, J. (2015). Musk-backed group probes risks behind artificial intelligence. Bloomberg.com.
Retrieved October 30, 2015.
Charon, R. (2005). Narrative Medicine: Attention, Representation, Affiliation. Narrative, 13(3),
261–270. http://www.jstor.org/stable/20079651
ELIZA: A Very Basic Rogerian Psychotherapist Chatbot. (n.d.).
https://web.njit.edu/~ronkowitz/eliza.html
Jeevanandam, N. (2023, September 6). Exploring MYCIN – an early backward chaining expert
system. INDIAai. https://indiaai.gov.in/article/exploring-mycin-an-early-backward-chaining-expert-system
McDaniel, D. (2022, August 17). Technology advancements leading to toll machine changes,
removal. WESH. https://www.wesh.com/article/technology-toll-machine-changes/
Rubeis, G. (2020). Strange bedfellows. The unlikely alliance between artificial intelligence and
narrative medicine. Dilemata, (32), 49-58.
Tang, X. (2020). The role of artificial intelligence in medical imaging research. BJR|Open, 2(1).
https://doi.org/10.1259/bjro.20190031
Tang, Y., Wang, X., Harrison, A. P., Lu, L., Xiao, J., & Summers, R. M. (2018). Attention-guided
curriculum learning for weakly supervised classification and localization of thoracic diseases on
chest radiographs. In Y. Shi, H.-I. Suk, & M. Liu (Eds.), Machine learning in medical imaging,
Lecture Notes in Computer Science (pp. 249–258). Springer International Publishing.
Yothment, J. (2023, September 1). How AI powers chess engines and creates grandmasters.
Pure Storage Blog. https://blog.purestorage.com/perspectives/how-ai-powers-chess-engines-and-creates-grandmasters/

