Artificial intelligence (AI) could improve the efficiency and accuracy of health care delivery, but how will AI influence the patient-clinician relationship? While many suggest that AI might improve the patient-clinician relationship, several underlying assumptions must be addressed before these potential benefits can come to fruition. Will off-loading tedious work result in less time spent on administrative burden during patient visits? If so, will clinicians use this extra time to engage relationally with their patients? Moreover, given the desire and opportunity, will clinicians have the ability to engage in effective relationship building with their patients? For the best-case scenario to become a reality, clinicians and technology developers must recognize and address these assumptions during the development of AI and its implementation in health care.

As the field of medicine shifts from a paternalistic to a more patient-centered orientation, the dynamics of shared decision making become increasingly complicated. International globalization and national socioeconomic differences have added unintended difficulties to culturally sensitive communication between physician and patient, which can contribute to the growing erosion of clinician empathy. This article offers a strategy for teaching students how to enter into conversations about shared decision making by bolstering their empathy through exposure to the many variables outside their patients' control. Patients' historical and cultural context, gender identity, sexual orientation, and common assumptions about clinicians, as well as institutional biases, can severely limit students' ability to integrate patients' value-laden preferences into shared decision making about health care.

Illness and injury often entail lasting health and social consequences beyond the acute event. During the immediate and long-term recovery period, consequences of illness