Concerning Questions about Artificial Intelligence Use in Healthcare

What are AI and Industry 4.0?

The fourth industrial revolution, Industry 4.0, refers to the ongoing wave of technological advancement and digitization, while Artificial Intelligence (AI) refers to machines exhibiting intelligent behavior.

AI has been increasingly integrated across the healthcare sector in recent years. High-tech computers can now provide mental health support, monitor patient safety, and even predict cardiac arrest, seizures, or sepsis.

AI and Healthcare

AI can deliver diagnostics and therapies, provide medication reminders, perform precise medical image analysis, and forecast overall health based on electronic health records (EHR) and patient history, all while easing some of the pressure on physicians.

Google Health has developed a system that can predict the likelihood of acute kidney injury up to two to three days before the damage occurs; in traditional medical practice, by contrast, the injury is often not detected until after it has happened. Such algorithms may extend care beyond the limits of current human capability.

There is, however, little doubt that we are only beginning to see how AI will affect patient care. Not surprisingly, the pace of development in the commercial sector has far surpassed that of traditional healthcare providers, largely because of the tremendous financial rewards on offer.

The idea behind AI is to make human lives easier, yet whether AI is a boon or a bane for humanity's future remains a hot topic of debate. Here are some concerning questions about AI in the healthcare sector.

  • Can a doctor be expected to act on decisions made by an AI 'black box' algorithm? In deep neural networks, the reasoning and processes underlying an AI's findings can be difficult to establish, even for qualified developers. Do doctors need to explain this to their patients?
  • If AI and the doctor disagree, who will be perceived as 'right'? The degree of relative confidence in technology versus healthcare professionals varies between individuals and across generations.
  • There are no nationally agreed quality standards. Should there be? And if so, who is going to set them? Would introducing standards inevitably undermine opportunities for innovation?
  • How can a patient or clinician distinguish 'good' AI from 'bad' AI? For example, a mental health app with a polished user interface may still be built on insufficient data.
  • If AI is 'over-sold' by developers and politicians and fails to deliver the promised benefits, is there a real risk that the public will reject the use of AI in healthcare altogether?
  • Do patients always have a choice as to whether a doctor or an algorithm makes their diagnosis?
  • Does the public understand the concept of accountability sufficiently? Would it be able to grasp the (probably nuanced) question of machine accountability?
  • Inadequate input leads to poor results: does the quality of the data need to be standardized, or does it need to be kite-marked?
  • Transparency of decisions may be vital to empowering patients and gaining trust, but would insisting on removing the 'black box' jeopardize the opportunity to realize the full potential of machine learning?
  • Alongside clinical judgment, AI-generated recommendations may change patients' views on who, or indeed what, to trust. Could this lead to new ideas about what constitutes clinical negligence?

Final Thought

AI should be used to reduce health inequalities, not increase them: geographically, economically, and socially. The most significant risk is a future in which only the wealthy can access the best AI-driven healthcare, because only providers with the deepest pockets will be able to acquire the best data and build the best AIs. It is up to leaders, politicians, regulators, physicians, and ethicists to determine how to enable and improve the broader healthcare system for future generations.