AI Has the Potential to Support Hematology's Ethical Principles—but There's a Downside
Saturday, June 11, 2022



EHA (European Hematology Association) Conferences

Speakers at the 2022 European Hematology Association (EHA) Congress explored how artificial intelligence (AI) might help promote ethical medicine principles, as well as how new technologies are being utilised to harm scientific research integrity.

The event was part of the YoungEHA track at EHA2022, which is aimed at early-career scientists and clinicians and strives to go beyond scientific facts by providing a venue for debate on the developing field of haematology.

Elisabeth Bik, PhD, was the first to speak, explaining how her consulting work has shifted from microbiology to investigating suspected scientific misconduct—in other words, she is now a sleuth on the lookout for possible study flaws or fraud.

"If such papers contain inaccuracies, it would be like building a brick wall where one of the bricks is not very stable," Bik said. "The scientific wall will crumble."

Plagiarism, falsification, and fabrication are all examples of scientific fraud, according to Bik, but honest mistakes are not. Possible motives for cheating include pressure to publish, a sense of urgency to live up to high expectations after a taste of research success, or even a "power play" in which a professor threatens a postdoctoral researcher with visa revocation if an experiment fails.

Bik's area of expertise is inappropriate image duplication in research articles, which falls into one of three categories: simple duplication, repositioning, or alteration. Even when a simple duplicate is an honest mistake rather than deliberate misconduct, she believes it is still inappropriate and should be corrected.

The problem is that these occurrences are difficult to detect. Bik asked the audience to spot any duplicates in a slide featuring eight photographs. This writer felt proud of identifying one set of identical photographs, until Bik demonstrated that there were actually two instances of duplication.

Beyond simple duplication, Bik highlighted examples of hematology-related images, such as Western blots and bone marrow flow cytometry, being relocated, flipped, and manipulated to hide their similarity, implying more deliberate deceit on the authors' side.

The problem is exacerbated by the fact that when Bik has raised her concerns with journals, they frequently do not take steps to retract or correct the manuscript. She advised the audience to be skeptical of research figures, especially while serving as a peer reviewer: if data "look too beautiful," it is one's job to privately raise the issue with the editor.

"If you see something, tell something," Bik advised, "because this does happen."

Artificially created Western blot images in studies published by "paper mills" are one devious use of AI to promote scientific deception. Bik has identified over 600 articles employing these bogus images, which are created using generative adversarial networks. Unfortunately, because the images are unique rather than copied, they cannot be caught by software that detects duplicates.
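The article does not describe how duplicate-detection software works, but a common approach is perceptual hashing: reduce each image to a short bit signature and flag pairs whose signatures nearly match. The sketch below is a minimal, hypothetical illustration using an average hash on small grayscale pixel grids; real screening tools use far more robust methods, and the images and threshold here are invented for demonstration.

```python
# Hypothetical sketch of perceptual-hash duplicate detection (average hash).
# Images are represented as 2D lists of grayscale values for simplicity.

def average_hash(pixels):
    """Return a bit string: '1' where a pixel is above the image's mean."""
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if v > mean else "0" for v in flat)

def hamming(h1, h2):
    """Count the positions where two equal-length bit strings differ."""
    return sum(a != b for a, b in zip(h1, h2))

def likely_duplicates(img_a, img_b, threshold=2):
    """Flag a pair whose hashes differ in at most `threshold` bits."""
    return hamming(average_hash(img_a), average_hash(img_b)) <= threshold

# Toy 3x3 "images": blot_b is a near-identical copy of blot_a,
# while blot_c is unrelated.
blot_a = [[10, 200, 30], [40, 220, 60], [15, 210, 25]]
blot_b = [[12, 198, 31], [41, 219, 58], [14, 212, 26]]
blot_c = [[200, 10, 190], [5, 180, 20], [170, 30, 160]]

print(likely_duplicates(blot_a, blot_b))  # True
print(likely_duplicates(blot_a, blot_c))  # False
```

This also illustrates why GAN-generated fakes evade such screening: each fabricated image is new, so there is no near-identical prior image for the hash comparison to match against.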

The cost of scientific misconduct to society extends beyond the risk that readers will unwittingly base their own research on papers containing errors or fraud, according to Bik. The prevalence of fraud jeopardises scientific integrity and can be exploited by people with a political agenda to imply that all science is defective.

"We must believe in science, and we must work harder to improve science," Bik said.

The second speaker, Amin Turki, MD, PhD, of the Universitätsklinikum Essen in Germany, discussed how artificial intelligence (AI) can be both an opportunity and a threat for medical ethics, particularly in haematology.

Turki explained that the use of AI in haematology has advanced exponentially in recent years, as prediction tasks have evolved from risk prediction to bone marrow diagnostics. The latter is time consuming and difficult because of the complexity of bone marrow, with its abundance of cells and structures, but it is also where AI has the potential to transform clinical practice.

Turki and colleagues are using machine learning to predict mortality after allogeneic hematopoietic cell transplantation, as well as in patients with acute graft-versus-host disease (GVHD).
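Turki's actual models are not described in this report. As a hedged illustration of the general approach, machine-learning mortality prediction often reduces to fitting a classifier that maps patient features to a risk probability. The sketch below trains a logistic regression by plain gradient descent on a synthetic cohort; the feature names, values, and labels are entirely invented and carry no clinical meaning.

```python
import math

# Hypothetical illustration only: logistic regression mapping two synthetic
# patient features (normalized age, comorbidity score) to a mortality-risk
# probability. This is NOT Turki's model or data.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.1, epochs=2000):
    """Fit weights w and bias b by stochastic gradient descent on log loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of log loss w.r.t. the linear output
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict_risk(w, b, x):
    """Return the model's predicted probability of the outcome for x."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

# Synthetic cohort: [normalized age, comorbidity score], label 1 = death.
X = [[0.2, 0.1], [0.3, 0.2], [0.8, 0.9], [0.9, 0.7], [0.4, 0.3], [0.7, 0.8]]
y = [0, 0, 1, 1, 0, 1]

w, b = train(X, y)
print(predict_risk(w, b, [0.25, 0.15]) < 0.5)  # low-risk patient
print(predict_risk(w, b, [0.85, 0.80]) > 0.5)  # high-risk patient
```

In practice such models are trained on large registry datasets with many more features, and their validation, calibration, and fairness auditing are exactly where the ethical questions discussed next arise.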

Still, the application of AI in health raises ethical concerns, as evidenced by the exponential growth in the number of PubMed-indexed works on AI and ethics in recent years. Turki examined AI through the lens of medical ethics principles grounded in the 1948 United Nations Universal Declaration of Human Rights:

Autonomy: AI can support patient autonomy through digital agents such as wearable devices.

Beneficence: AI can improve health by overcoming human cognition's limits, such as better risk prediction or tailored treatment.

Nonmaleficence: Through AI-defined dose and therapy algorithms, toxicity can be minimised.

Justice: Researchers want to employ AI to lessen the impact of health inequities, but if done incorrectly it can exacerbate or perpetuate them (e.g., if AI interventions are only available in wealthy countries).

The developers, deployers, users, and regulators are all stakeholders in ethical AI in medicine, and each has their own set of obligations, according to Turki. He proposed numerous solutions to ethical issues, including incorporating ethics into the AI development process, prioritising human-centered AI, and assuring equitable representation.

Turki stated that the current digital transformation has the potential to reshape medical care, but it also requires us to "never forget the human situation is the foundation of our thinking."

