Healthcare 5.0 deploys the best smart manufacturing tech | Smart Manufacturing









“If you’re operating on an infant’s heart that’s the size of a walnut, and you’re planning to send a surgical instrument inside a blood vessel measuring three millimeters across, accuracy becomes critical.” Those are the words of Dr. Jonathan Morris, a neuroradiologist at Mayo Clinic in Rochester, Minn., with whom SME last spoke in 2017 for its Humans of Manufacturing series.

A great deal has changed since then, although Morris and his team are still saving lives, one 3D-printed surgical guide, anatomical model or patient-matched prosthetic at a time. His new job title is executive medical director of Immersive and Experiential Learning. He and his administrative partner, Bob Morreale, are in charge of a department that uses simulation and immersive experiences in augmented and virtual reality to “give students authentic hands-on experiences that represent the challenges of surgical procedures and patient care.”

SME caught up with Morris recently and asked him to tell us about his new position, what he and Mayo Clinic have been working on, and the state of healthcare technology in the United States and beyond. As you’ll see, the medical industry—just like manufacturing—is undergoing a dramatic technological shift.

Smart Manufacturing (SM): When we last spoke, you mentioned the need to “get what’s in the radiologist’s head” and make it accessible to the surgeon, and that additive manufacturing (AM) was one of the tools you used to accomplish this. What’s changed?

Jonathan Morris (JM): Nothing has changed except that additive is more widely used than ever. In fact, our 3D printing lab at that time was somewhere around 3,000 square feet, and it has since tripled in size. But with that, we’ve begun to utilize a variety of other tools to create patient-specific digital twins for visualization, including augmented, virtual and mixed reality, as well as artificial intelligence.








Using augmented reality (AR) in the operating room with Medivis, said to be “a new standard of surgery.” (Provided by Mayo Clinic) 


SM: We’ll come back to AI. To close off the 3D printing discussion, are you making any actual implants yet, or is it still primarily surgical guides and models?

JM: We continue to produce large numbers of anatomical models and sterilizable Class II guides that screw into tissue and bone, but to make implants that remain in the body for extended periods, you need either a compassionate use exemption or 510(k) clearance from the FDA in the U.S. The first is more of an emergency approval for patients with a life-threatening disease and custom devices that may be one of a kind, while the latter calls for extensive validation and process development. Both require significant investment in local infrastructure and staff, and in what’s known in the industry as a medical device production system. Nonetheless, we’re working in that direction.

SM: Tell us about the move to AR/VR/XR technology and its use in medical training and surgical procedures. Is it effective?

JM: Very much so. Mayo Clinic has somewhere around 5,000 healthcare learners at any given time—70% of whom will become our staff. Traditional training methods have steep learning curves and limited capacity due to the required colocation of the expert, the knowledge and the capital equipment, making them time-consuming, expensive, and inefficient. We have been exploring ways to create digital twins of these three key areas and build immersive experiences in virtual reality to simulate the procedures, environment, patient, and staff. For example, when we wanted to teach people radiation safety in X-ray procedure rooms, we were able to digitally twin the environment, nurses, technologists, capital equipment and expert information. By making an interactive virtual world, we could asynchronously teach our own staff without taking down the room for educational purposes, while also allowing people from anywhere on the globe to learn from us.








A VR Endoscopic Skull Base course participant practicing on a virtual patient. (Provided by Mayo Clinic)


SM: Are these virtual worlds as good as the real thing?

JM: In some ways it’s better than reality. For instance, we can show radiation in virtual worlds and teach students something they otherwise can’t see, feel or touch, but is dangerous nonetheless. Since we, like other healthcare institutions, are hurting for staff right now and the cost of training and retaining them is astronomical, we are looking toward these technologies to help us democratize our knowledge. Similarly, we’ve developed simulations for ECMO, where we have to connect a patient’s blood vessels up to a machine that will oxygenate their blood when their lungs don’t work. This has to work in VR but also on tablets and smartphones, so trainees can access it anywhere with an Internet connection. In one instance, we had people learning ECMO procedures from our trained anesthesia and vascular surgery staff while inside an airplane hangar in New Zealand. Ultimately, this kind of immersive training leads them to become better clinicians, better equipment operators and, most importantly, better caregivers.








A post-traumatic face deformity, 3D-printed for surgical planning. (Provided by Mayo Clinic)


SM: The medical community has a new term: Healthcare 5.0. What does it mean to you?

JM: Several things. One is the use of global population data and large language models to make more informed and accurate individual health decisions. For instance, Mayo Clinic has developed a platform, created and managed by Dr. John Halamka and Maneesh Goyal, containing the largest collection of de-identified medical records across multiple countries, to be used to guide future healthcare decisions based on millions of previous interactions. So let’s say a patient comes to you with a health problem and has a certain family history, a comorbidity, regional risk factors, workplace risks or genetic makeup: you could make predictive decisions based on AI models that are trained on non-biased data from across the world. This gives us the opportunity to mine data about that condition across a broad spectrum of cultures and ethnicities, whether it’s that of a Singapore native or a Minnesotan whose grandparents immigrated here from Norway.

SM: Why is that important?

JM: There are regional, genetic and cultural differences that can have a dramatic effect on someone’s risk factors. Multiple sclerosis, for example, occurs much less frequently the closer you get to the equator, so if someone in Brazil or Ecuador presents with these neurological symptoms, their doctor is less likely to recognize it. Similarly, India has a fairly high rate of tuberculosis, just as malaria is common in some African nations. The U.S., however, doesn’t see too many cases of these diseases, and a physician here might not spot them as readily as one from those countries … by democratizing the available information through a universal database and then developing “medical chatbots” that leverage AI to make sense of the massive amounts of information, we have a much better chance of identifying and treating conditions that might otherwise go undetected.

SM: The stakes are high. Are you worried about AI hallucinations and life-threatening mistakes?

JM: AI is a tool like any other, and properly used, it will make healthcare more efficient and cost-effective. One example of this is the training simulations I described earlier, which rely heavily on AI. And in my field, radiology, it might take a skilled technician several hours to segment a spine, whereas cloud-based AI segmentation can do it in under a minute. When you’re doing 1.2 million CT scans a year just at this location, that has some serious financial ramifications. But it also opens the door to other capabilities: if you can quickly and cost-effectively auto-segment organs, tumors and the like, virtual flythroughs with a patient become possible. Further, we’ll soon be able to leverage AI for help with patient-specific medical device design and the planning of certain surgical approaches, and one day use it for expert assistance in the operating room. Together with advanced photogrammetry, cloud-based ecosystems, HIPAA-compliant data transfer, 3D texture mapping and other AI-enabled technologies, the industry is evolving in a very positive manner. That’s what Healthcare 5.0 is all about.
