The COVID-19 pandemic placed a heavy burden on many healthcare systems, raising fears about resource scarcity and triage. Several COVID-19 guidelines included age as an explicit factor, and practices of both triage and 'anticipatory triage' likely limited access to hospital care for elderly patients, especially those in care homes. To ensure the legitimacy of triage guidelines, which affect the public, it is important to engage the public's moral intuitions. Our study aimed to explore general public views in the UK on the role of age, and related factors such as frailty and quality of life, in triage during the COVID-19 pandemic. We held online deliberative workshops with members of the general public (n=22). Participants were guided through a deliberative process designed to elicit informed and considered preferences. Participants generally accepted the need for triage but strongly rejected 'fair innings' and 'life projects' principles as justifications for age-based allocation. They were also wary of the 'maximise life-years' principle, preferring to maximise the number of lives saved rather than the number of life-years. Although they did not arrive at a unified recommendation of one principle, a concern for three core principles and values eventually emerged: equality, efficiency and vulnerability. While these remain difficult to respect fully at once, they captured a considered, multifaceted consensus: utilitarian considerations of efficiency should be tempered with a concern for equality and vulnerability. This 'triad' of ethical principles may be a useful structure to guide ethical deliberation as societies negotiate the conflicting ethical demands of triage.

Artificial intelligence (AI) systems are increasingly being used in healthcare, thanks to the high level of performance that these systems have proven to deliver. So far, clinical applications have focused on diagnosis and on prediction of outcomes. It is less clear in what way AI can or should support complex clinical decisions that crucially depend on patient preferences. In this paper, we focus on the ethical questions arising from the design, development and deployment of AI systems to support decision-making around cardiopulmonary resuscitation and the determination of a patient's Do Not Attempt to Resuscitate status (also known as code status). The COVID-19 pandemic has made us keenly aware of the difficulties physicians encounter when they have to act quickly in stressful situations without knowing what their patient would have wanted. We discuss the results of an interview study conducted with healthcare professionals in a university hospital, aimed at understanding the status quo of resuscitation decision processes while exploring a potential role for AI systems in decision-making around code status. Our data suggest that (1) current practices are fraught with challenges, such as insufficient knowledge regarding patient preferences, time pressure and personal bias guiding care considerations, and (2) there is considerable openness among clinicians to consider the use of AI-based decision support.
We suggest a model for how AI can contribute to improved decision-making around resuscitation and propose a set of ethically relevant preconditions (conceptual, methodological and procedural) that need to be considered in further development and implementation efforts.

Many healthcare agencies are producing evidence-based guidance and policy that may determine the availability of particular healthcare products and procedures, effectively rationing aspects of healthcare. They claim legitimacy for their decisions through reference to evidence-based scientific method and the implementation of just decision-making procedures, often citing the criteria of 'accountability for reasonableness': publicity, relevance, challenge and revision, and regulation. Central to most decision methods are estimates of gains in quality-adjusted life-years (QALYs), a measure that combines the length and quality of survival. However, all agree that the QALY alone is not a sufficient measure of all relevant aspects of potential healthcare benefits, and a number of value assessment frameworks have been suggested. I argue that the practical implementation of these procedures has the potential to lead to a distorted assessment of value. Undue weight may be ascribed to certain attributes, particularly those that favour commercial or political interests, while other attributes that are highly valued by society, particularly those related to care processes, may be omitted or undervalued. This may be compounded by a lack of transparency to relevant stakeholders, leaving them unable to participate in, or challenge, the decisions. This makes it likely that costly new technologies, for which inflated prices can be justified by the current value frameworks, are displacing aspects of healthcare that are highly valued by society.

With Perry Hendricks, I recently outlined a strengthened version of the impairment argument (SIA) for the immorality of abortion. Alex Gillham has argued that our use of Don Marquis' deprivation of a 'future-like ours' account entails that we were merely restating Marquis' argument for the immorality of abortion. Here, I explain why SIA is more than just a reframing of Marquis' argument.

Lack of vaccine confidence can contribute to drops in vaccination coverage and subsequent outbreaks of diseases like measles and polio. Low trust in vaccines is attributed to a combination of factors, including lack of understanding, vaccine scares, flawed policies, social media and mistrust of vaccine manufacturers, scientists and decision-makers. The COVID-19 crisis has laid bare societies' vulnerability to new pathogens and the critical role of vaccines (and their acceptability) in containing this and future pandemics. It has also put science at the forefront of the response, with several governments relying on academics to help shape policy and communicate with the public. Against this backdrop, protecting public trust in scientists and scientific output is arguably more important than ever. Yet conflicts of interest (CoI) in biomedical research remain ubiquitous and harmful, and measures to curb them have had limited success. There is also evidence of bias in industry-sponsored vaccine studies, and academics are voicing concerns about the risks of working in a CoI-prevalent research area.