Engineering Clinical Intuition: Programming A Physician's Mind



Currently, every major data and analytics institution and corporation is chasing the same inevitable goal: how to analyze medical information and interpret the results in order to develop standard-of-care, evidence-based care plans, and possibly even predict outcomes, from individual patients to entire population groups. However, one key element in all of this must not be overlooked: the physician-patient relationship.


Data analytics has become far simpler than it used to be. We have so much data at our fingertips that we can predict a flight's arrival time or reorder products before the shelves run out of stock. 'Big Data', as everyone has taken to calling it, has arrived. From behemoths like Google and Apple to startups and consulting companies, everyone seeks this holy grail of health-care decision-making, and every venture has its own approach. Larger companies take a shotgun approach, believing that the more data you have, the more you can interpret; smaller teams use clustering techniques to tease out data relationships and pathways of care; still other, more specialized teams are redefining the analyses themselves.


We have advanced computing horsepower from all the big players, such as IBM and Intel, including advanced quantum computing chips, along with evolving experimental biological processing through 'organic computing'. The horizon is flooded with these and other breakthrough technologies on a scale that is revolutionizing not only how computers are built but the very concept of the computer itself, leaving today's machines looking like an archaic form of software implementation. Wearables are the current fashion, but implantable bionics and biogenetic engineering are not far from their own tipping points. The future is exciting, to say the least.


What is our limit, then? I mean, we can compute faster, with more data, in novel multi-tangential ways that must exceed the memory of most physicians. So why is medical decision-making so difficult? Simply put, it is not just about rote memory and indirect relationships, or even associations and risk factors. It is something so much more intangible that it will likely keep real live human doctors thinking and working for some time to come. That something is clinical intuition and instinct. This goes beyond smarts and acumen. It is an understanding of the human body, health and disease that spans from an implicit understanding of care through to treatment, whose code has yet to be cracked; a Sherlock or Columbus of medicine, as it were. Admittedly, not all physicians may realize they exhibit this unique trait, but I am sure many practitioners recall suspected cases of disease where they 'followed their nose' instead of just the data, and their due diligence paid off with a correct diagnosis. This made me wonder: what part of a physician's brain guides them? It also made me aware of the dynamic, freeform nature of our brain's processing, which clusters and connects links in such a manner as to home in on a diagnosis by instinct rather than by merely interpreting lab tests. It is this very gut instinct that drives clinical intuition. It is a feeling, a suspicion, a subconscious awareness of 'something wrong with this picture' that guides an astute clinician.


The fact remains that clinical intuition cannot simply be inferred through traditional analytics that rely on yes/no flow algorithms and binary processing. Medicine lives in the grey zone of diagnosis and care, with fluid differentials that overlap and mimic one another. The solution may sound simple, but the math is extremely complex. It will require grey-band processing that only quantum computing, perhaps even organic processing, can hope to approximate. Maybe, through advanced clustering and stochastic reasoning with chaotic underpinnings, clinical intuition will eventually become programmable. But how do you program instinct, that gut feeling I mentioned earlier that drives intuition? One could argue that instinct and intuition are one and the same, but I beg to differ. Instinct is a tangibly intangible feeling; intuition is a tangibly tangible analysis. Without instinct there is no driver to lead the intuitive analysis of events and data streams. A ship without its captain.
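
To make the binary-versus-grey-band contrast concrete, here is a minimal, purely illustrative Python sketch. The findings, weights and numbers are invented for illustration only and carry no clinical meaning.

```python
# Purely illustrative: a yes/no flowchart rule versus a graded ("grey-band")
# suspicion score for the same hypothetical presentation. All weights invented.

def binary_rule(fever: bool, stiff_neck: bool) -> str:
    # Classic binary branching: all-or-nothing, no room for partial suspicion.
    if fever and stiff_neck:
        return "work up"
    return "no action"

def graded_suspicion(findings: dict) -> float:
    # Each finding contributes a weighted amount of suspicion (inputs 0..1),
    # so an atypical, incomplete picture can still raise a flag.
    weights = {"fever": 0.3, "stiff_neck": 0.4, "photophobia": 0.2, "gut_feeling": 0.1}
    return sum(weights[k] * findings.get(k, 0.0) for k in weights)

if __name__ == "__main__":
    print(binary_rule(fever=True, stiff_neck=False))       # "no action"
    print(graded_suspicion({"fever": 1.0, "photophobia": 0.7,
                            "gut_feeling": 0.8}))           # 0.52
```

The point is not the numbers but the shape of the logic: the first function can only say yes or no, while the second preserves degrees of doubt.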


In addition, there is the physician-patient relationship, whose integral place within the spectrum of 'calculations' is yet another grey variable of computation. The physician-patient relationship is a HIPAA-protected sanctuary of history and physical examination, data review and interpretation. Sure, one can take the medical chart, dump it into a fancy quasar and output a diagnosis, but what is that output worth if the diagnosis is neither reliable nor reasonable? One could also send IBM Watson to medical school and residency for a few years and develop instantaneous recall of every disease and every evidence-based flowchart of care. However, absent some human connection, a trust relationship may never be established between the care provider and the care recipient. One could argue that deciphering information and applying weighted reasoning to various symptoms may help differentiate suspected diseases. But how do you program the relative importance of possibly disconnected symptoms, let alone emotions? As a simple example, does everyone with a headache need a brain MRI to rule out a tumour? That is the complex science of medicine. A patient most often needs to establish some faith in the practitioner before trusting their opinion and plan. An abused woman may present with symptoms of a urinary tract infection but may actually be seeking emotional help because of her abusive partner; how can you read that from her urinalysis? Human factors must be engineered into the process of processing processor power.
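
The headache question above can be framed, loosely, as pre-test versus post-test probability. The sketch below uses entirely made-up numbers (a 0.1% baseline risk and a likelihood ratio of 20 for some hypothetical red-flag finding), only to show the mechanics of weighting a symptom, not to give clinical guidance.

```python
# Hedged sketch of pre-test / post-test reasoning. All numbers are placeholders.

def post_test_probability(pre_test_prob: float, likelihood_ratio: float) -> float:
    """Convert a pre-test probability into a post-test probability via odds."""
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# Assumed 0.1% baseline chance of tumour in an otherwise well patient with
# headache, and a hypothetical red-flag finding with a likelihood ratio of 20.
print(round(post_test_probability(0.001, 20), 4))   # ~0.0196, still under 2%
```

Even a twenty-fold bump leaves the probability low, which is exactly the kind of weighting a clinician does implicitly; what the arithmetic cannot capture is the patient's fear, or the story behind the symptom.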


How much faith can we place in our computers alone, without human checks and balances? The Navy has already elected to bring back fundamental star-mapping education for cadets, as navigation by GPS has proven unreliable at times. My point is that big data is valuable up to a point, but more data does not necessarily mean better or higher-quality data, particularly when the system or the cloud goes down. The concept of 'dirty data' is a useful example here. There is a belief that the more data you have, the more 'dirty' data it will contain, and that it is precisely this inclusion of scattered, outlying, random phenomena that makes the overall analysis complete. However, dirt in is dirt out. If the majority of the data is dirty, it loses value. If very little is dirty, we must wonder how generalizable it can be. A randomized controlled clinical trial of North American women with osteoporosis may provide evidence for diagnosis and treatment in Iowa, but how applicable is that same data in Japan, China or India? There are so many genetic, environmental, gender, cultural and socioeconomic differences amongst people that studies of population subsets in one part of the world may have no applicability to other parts of the world. In the same manner, can you imagine cataloguing how many genetic, environmental, gender, cultural and socioeconomic variations of human thought there are? A diagnostic and treatment plan in one country may very well differ from another, not just in regional variation but also in approach and resources. Would you exclude data from physicians practising across different spectrums of lifestyle and socio-geo-political circumstance? Who is to say that treatment standards for certain diseases are all that different across borders, even taking into account cost variations and available resources? A flu shot is a simple example, but a revision joint replacement or a heart transplant may be a very different matter. In any case, how would you make that determination? Evidence-based medicine has its own regional and spatial limitations and constraints. Might you recall a saying about the goose and the gander?


Of course, a discussion of care can never conclude without a critique of outcome metrics. Do we actually know the quality of the metrics being measured before we interpret them? My belief is that we are far from that scenario, particularly with the growing trend of including outcomes from the patient's and the caregiver's perspectives. Researchers have made numerous attempts to define clinical outcomes through clinician-reported and patient-reported outcome instruments (CROs and PROs), and have gone further to calculate the minimum important difference (MID), the smallest change that becomes clinically meaningful enough to have an impact. The beauty of a well-designed PRO is that it can be developed specifically for a disease and target population of interest and can truly master the art of outcome interpretation and analysis. Yet our outcome tools are not validated for every disorder, and the same PROs are sometimes used across different disease states; for example, an instrument developed for measuring total knee outcomes may instead be used to evaluate biologic intra-articular injections. Comprehensive psychometric analysis and validation of instruments must be completed before they are used, and yet, sadly, that is often lacking. The point here is that even if you have all this data from comprehensive registries, what does it mean if the measures themselves are neither meaningful nor comparable? Hence, you must gauge value by relevant evidence and interpretation of the body of literature: clinical interpretation as well as intuition. Would you measure the sphericity of a ceramic head implant with a ruler in 1 mm increments? I think, therefore I cannot.
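
For readers unfamiliar with the MID, one common distribution-based shortcut is to take half the standard deviation of baseline scores. The sketch below uses hypothetical numbers and should not be mistaken for a validated, anchor-based MID.

```python
# Illustrative distribution-based MID estimate (half the baseline standard
# deviation). Scores are hypothetical; real MIDs need anchor-based validation.
import statistics

baseline_scores = [62, 55, 70, 48, 66, 59, 73, 51]      # made-up PRO scores
mid_estimate = 0.5 * statistics.stdev(baseline_scores)

observed_change = 4.0                                    # made-up mean change
print(f"MID estimate: {mid_estimate:.1f} points")
print("clinically meaningful change?", observed_change >= mid_estimate)
```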

Even if we assume that all of this data is perfect, how do we gather everything together into a nice cohesive lump? With the advent of electronic medical record (EMR) systems, the acquisition of large datasets is both enviable and inevitable. However, very few EMR systems communicate with each other, and even fewer have standardized data entry and outputs that can be shared on an equal footing. It is not just the language of the data and the database structure in which it is stored, but its accessibility from other systems. We already know that data from one EMR cannot simply be transferred to another database (that would be too ideal), as most use proprietary software to ensure customer service and support. In addition, charting style differs amongst end-users, not just within the same EMR but across different EMR systems. What I mean is that a malar rash and a cough as written in one medical chart by one physician are unlikely to be documented in the same manner by another physician, even at the same center or one trained at the same institution. Now multiply that variance by millions of physicians with their own nuances of recording data and interpreting results, crossed with the time of day and fatigue of each provider, and multiplied again by the number of different staff and data interpretations within each system, across the country and around the globe. We already know that different labs doing the same blood test show so much variability that one cannot presume a result at one lab is equivalent to another, statistically or clinically, even if both tests were run on the same day, at the same time, in the same city. That is a ton of variable data.
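
Here is a toy example of that interoperability problem: two charts describing the same findings in different words, run through a naive mapping to a shared vocabulary. The terms and codes below are invented; real systems lean on standards such as SNOMED or FHIR and still face exactly this kind of loss.

```python
# Invented mini-vocabulary showing how local phrasing survives (or vanishes)
# when free-text chart terms are normalized into a shared terminology.

SYNONYMS = {
    "malar rash": "rash_malar",
    "butterfly rash": "rash_malar",
    "cough": "cough",
}

def normalize(chart_terms):
    # Anything the mapping does not recognize simply drops out of the record.
    return {SYNONYMS[t.lower()] for t in chart_terms if t.lower() in SYNONYMS}

chart_a = ["Butterfly rash", "cough"]            # physician A's wording
chart_b = ["malar rash", "dry hacking cough"]    # physician B's wording
print(normalize(chart_a))   # {'rash_malar', 'cough'}
print(normalize(chart_b))   # {'rash_malar'}  <- the cough is silently lost
```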


Now imagine a scenario in which you have trillions of data points, each with its own errors and variances of reliability, sensitivity, specificity, thresholds, reporting and interpretation. Now imagine all the missing data. Yes, what is not said or documented is often just as vital, or more important, than what is recorded; absence does not equate to a negative result, however. Imputation will not necessarily stabilize that formula. When information is absent from the chart, what is often left is the physician's overall assessment of the patient and the subsequent treatment plan based on that gestalt, sometimes more complex than a simple SOAP note would suggest. It may all come down to that practitioner's charting behavior, which may be as valuable (or more so) to understand as the analysis of what is charted. Dirt may be included, but if you exclude any data, then you have no idea whether it was dirt or not, which begs the question of whether you have mud or troubled waters.
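
To see why imputation will not rescue that formula, consider this small sketch: filling a missing lab value with the mean makes the dataset look complete, but it erases the fact that the test was never done, which may itself be the meaningful signal. The values are hypothetical.

```python
# Hypothetical lab values; None marks results that were never obtained.
values = [7.2, None, 6.8, None, 7.5]

observed = [v for v in values if v is not None]
mean_val = sum(observed) / len(observed)

imputed = [v if v is not None else mean_val for v in values]   # gaps "filled"
was_missing = [v is None for v in values]                      # the real signal

print(imputed)       # looks complete, but two entries are fabricated
print(was_missing)   # [False, True, False, True, False]
```

Keeping the missingness indicator alongside the imputed value is one small way of preserving what the chart did not say.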


The essence of all of this is simple. Programming clinical intuition by replicating instinct will not be easy. I am not excluding the possibility, however, as one must never say never. But any software or artificial intelligence (AI) platform developed for this purpose must be carefully constructed with the guidance of experienced clinicians, helping to build curious, flexible, data-driven algorithms of converging divergence that inherently learn, adapt and adjust as patient perspectives, caregiver input, ancillary feedback and, most importantly, combined decision-making are brought to bear, all on a backbone of emotional intelligence.


You need to think like a physician before you can analyze like a doc. Intuition can be built, but not simply; it must be inculcated from the origin of computing bytes and evolving virtual behavior models. Physician instinct, on the other hand, is a much tougher nut to crack. That may require emotion, empathy and the rare polynomial conjugate of compassion to fully engage itself in a chip.


Programming the physician’s mind will require sophisticated and intuitive engineering that can only come from humane instinct.


DocMirza

www.orthosynthesis.com


P.S. I wrote this from a physician's perspective, but every single comment applies equally to the instinct and intuition of every type of health-care practitioner: from the nurse and the therapist to the pharmacist, the midwife, the chiropractor and the alternative healing arts.

