By Jeffery Daigrepont, Senior Vice President | Coker Group
The adoption of artificial intelligence (AI) in healthcare is on the rise, solving a variety of problems for patients and providers. While we do not see AI taking the place of humans, the ethics and risk frameworks governing machine involvement in patient care have yet to catch up with the technology.
Consider this scenario.
While your physician is treating you, an AI listening device in the exam room is making thousands of calculations and predictions based on the conversation it picks up between you and the physician. After the exam, the AI device may support the physician's decision making and/or trigger additional action items, such as ordering the test the physician discussed with you. The AI may also warn the physician of a drug-to-drug interaction because it already knows what you have purchased over the counter, even though you forgot to mention it to your physician. Google reports that it already has access to 70% of our credit card transactions, so the AI knows about that fish oil you purchased a week ago. Did you know fish oil can act as an anti-clotting agent and may negatively impact some treatment plans? Further, the AI already knows you just booked a trip to South America, so it triggers a reminder for travel shots. The possibilities are endless.
So, does this scenario make you feel a little uneasy? A little nervous about the data being picked up? Or does it make you feel more confident knowing there is a second set of ears, albeit electronic ones, helping with decisions?
How would the above scenario be possible today? Most everyone has used GPS apps such as Google Maps or Waze. Have you ever seen these apps make real-time suggestions, such as offering new routes, based on a set of circumstances like construction or a traffic jam? Do you ever notice how Facebook and other social media apps seem to know your hobbies, travel preferences, and even political affiliations, and will make recommendations based on these preferences? Have you seen an ad pop up in your newsfeed after visiting a new place or restaurant even though you did not disclose your location? These are all examples of AI based on crowdsourced data and/or access to existing databases. But are our devices listening to our conversations?
How about this? Have you ever had a random conversation, and soon after, information related to that same discussion appeared in a newsfeed or as a pop-up ad? The tech companies deny eavesdropping through your phone’s microphone, but are they? As noted, Google claims it already has access to 70 percent of credit and debit card transactions in the United States. Facebook and others monitor much of what we’re doing across the web. By using hidden tracking technologies, these companies can see many of the pages you and others are visiting, allowing them to tailor their ads accordingly. Devices like Amazon Echo (with Alexa) and Google Home are increasingly popular. They DO listen, with the consent of the device owner. However, what if devices like these are in exam rooms? Or at your place of work? It is one thing for a consumer to knowingly invite an “always-listening” artificial intelligence device into their home, but should we have a right to know when we are being recorded? Wiretap laws generally prohibit recording a conversation without consent, and some states require the consent of every party, but many companies get around this by having people waive their rights, or they place a “notice” of advisement. As an example, have you ever heard this statement? “This call may be recorded or monitored for educational purposes.” There is your notification.
So, will AI find its way into exam rooms? Today, most healthcare provider organizations have EHRs. However, these tools are static databases with algorithms that complement and support humans in completing simple tasks. They do not think for their users; they just store data and serve as repositories of information. Now, with the advancement of AI, that is changing rapidly. AI today can augment human activity with the ability to sense, comprehend, and learn. AI in healthcare represents a collection of technologies enabling machines to comprehend and learn so they can perform administrative and clinical healthcare functions.
The primary aim of health-related AI applications should be to analyze relationships between prevention or treatment techniques and patient outcomes. Privacy policies must catch up to AI to ensure there is no overstepping of boundaries. With that said, the most obvious application of AI in healthcare is data management and its compatibility with our existing EHRs. As with all innovation driven by data, collecting, storing, and normalizing data, and tracing its lineage, is the first step in developing an AI strategy. Today, AI programs have already been developed and applied to aid in diagnostic processes, treatment protocol development, drug development, personalized medicine, and patient monitoring and care. We now expect AI to eliminate repetitive jobs and allow for predictive automation.
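To make the data-management step concrete, here is a minimal sketch of what normalizing a record and tracing its lineage can look like in practice. The field names, the pounds-to-kilograms conversion, and the fish oil entry are hypothetical examples, not taken from any particular EHR system.

```python
# Hypothetical sketch: normalize a raw EHR-style record and keep a
# lineage trail recording each transformation applied to the data.

def normalize_record(raw):
    """Return a normalized copy of `raw` with a lineage (provenance) list."""
    record = dict(raw)
    lineage = []

    # Normalize weight to kilograms if it was captured in pounds.
    if record.get("weight_unit") == "lb":
        record["weight"] = round(record["weight"] * 0.453592, 2)
        record["weight_unit"] = "kg"
        lineage.append("converted weight lb -> kg")

    # Standardize free-text medication names (trim whitespace, lowercase).
    if "medication" in record:
        record["medication"] = record["medication"].strip().lower()
        lineage.append("normalized medication name")

    record["lineage"] = lineage
    return record

raw = {
    "patient_id": "p-001",
    "weight": 180,
    "weight_unit": "lb",
    "medication": "  Fish Oil ",
}
clean = normalize_record(raw)
```

Because every transformation is logged alongside the record, a downstream AI application (or an auditor) can see exactly how the data it consumes was derived, which is the kind of accountability the privacy discussion above calls for.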
As AI advances, we will need to consider the creation of ethical standards that apply anytime patient data is used, with a specific emphasis on patient privacy and accountability for data usage. Now would be a good time to review your patient privacy policies to see how they may need to be updated for changes in your technology, including patient messaging, patient portals, text messaging, and other channels. For a complimentary consultation on how to modernize your policies for the future, please contact us today.