From Pokemon to saving lives: using augmented reality in the operating room could usher in a new age of surgery.
The surgeon’s vision — elements of the patient’s foot were digitized and then fed into a 3D model. Image credits: Philip Pratt et al. Eur Radiol Exp, 2018 / Microsoft HoloLens (c) Microsoft.
Augmented Reality, the technique popularized last year by Pokemon Go, overlays real-life elements with “augmented” bits — most often, computer-generated information. It’s a world where real life as we know it interacts with holograms.
Now, for the first time, doctors have used augmented reality as an aid for surgery. Specifically, they’ve used Microsoft HoloLens headsets to overlay CT scans, indicating the position of bones and key blood vessels, over each of the patient’s legs. Basically, they were able to ‘see’ through the patient’s skin.
The technology helped with a very delicate procedure: the reconnecting of blood vessels, an essential part of reconstructive surgery.
“We are one of the first groups in the world to use the HoloLens successfully in the operating theatre,” said Dr. Philip Pratt, a Research Fellow in the Department of Surgery & Cancer and lead author of the study, published in European Radiology Experimental.
“Through this initial series of patient cases we have shown that the technology is practical, and that it can provide a benefit to the surgical team. With the HoloLens, you look at the leg and essentially see inside of it. You see the bones, the course of the blood vessels, and can identify exactly where the targets are located.”
So far, the technology has only been used in reconstructive limb surgery, but there’s no reason why it couldn’t be adapted to other types of surgery. Image credits: Philip Pratt et al. Eur Radiol Exp, 2018 / Microsoft HoloLens (c) Microsoft.
Doctors carried out five surgeries using the technology. Prior to the surgery, CT scans mapped the structure of the limb. The elements revealed by the CT scan were then split into bone, muscle, fatty tissue and blood vessels by Dr. Dimitri Amiras, a consultant radiologist at Imperial College Healthcare NHS Trust. Amiras used the data to develop a 3D model of the patients’ legs. The models were fed into the HoloLens, allowing surgeons to see them as they were carrying out the procedure. The surgeons also fine-tuned the model — with a simple hand gesture, they made sure that the model lined up with real life.
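The segmentation step described above — splitting a CT volume into bone, muscle, fat and vessels — can be roughed out with simple intensity thresholds, since CT voxels are measured in Hounsfield units (HU) and each tissue type occupies a characteristic HU range. The sketch below is illustrative only: the array is synthetic, the thresholds are textbook approximations, and it is not the method Dr. Amiras used.

```python
import numpy as np

# Hypothetical CT volume in Hounsfield units (HU); a real scan would be
# loaded from DICOM files, e.g. with pydicom.
rng = np.random.default_rng(0)
ct_volume = rng.integers(-200, 1500, size=(64, 64, 64))

# Approximate HU ranges for the tissue classes mentioned in the study.
fat_mask    = (ct_volume >= -190) & (ct_volume <= -30)
muscle_mask = (ct_volume >= 10)   & (ct_volume <= 40)
bone_mask   = ct_volume >= 300

# Each mask could then be converted to a surface mesh (e.g. via
# marching cubes) and exported to the HoloLens as a 3D model.
print(bone_mask.sum(), "voxels classified as bone")
```

In practice, clinical segmentation combines thresholding with manual correction, which is exactly the labor-intensive part the article says algorithms could one day automate.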
The procedure is time-consuming, but in the future algorithms could greatly simplify the modeling and reduce the workload.
“The application of AR technology in the operating theatre has some really exciting possibilities,” said Jon Simmons, a plastic and reconstructive surgeon who led the team. “It could help to simplify and improve the accuracy of some elements of reconstructive procedures.
“While the technology can’t replace the skill and experience of the clinical team, it could potentially help to reduce the time a patient spends under anaesthetic and reduce the margin for error. We hope that it will allow us to provide more tailored surgical solutions for individual patients.”
Right now, the technique has only been used for lower limb reconstructive surgery, but the proof of concept is there. This study shows that the technology is practical, accurate, and safe to use. There’s no reason why a similar approach couldn’t be used in different types of surgery.
Augmented reality does nothing to replace the skill and experience of the operating team, but it does complement and amplify it, significantly reducing the margin for error.
Virtual and augmented reality seem to be on everybody’s lips nowadays, both promising to revamp the tech scene and change the way consumers interact in the digital space. Despite the hype and media attention, the two often get confused as some people use the terms interchangeably. While there are many similarities between virtual reality (VR) and augmented reality (AR), the two are definitely distinguishable. Let’s dive into these differences.
What’s Virtual Reality?
Credit: Silicon Beat
Virtual reality is a computer-simulated reality in which a user can interact with replicated real or imaginary environments. The experience is totally immersive by means of visual, auditory and haptic (touch) stimulation, so the constructed reality is almost indistinguishable from the real deal. You’re completely inside it.
Marked by clunky beginnings, the idea of an alternate simulated reality took off in the late ’80s and early ’90s, a time when personal computer development exploded and a lot of people became excited about what technology had to offer. These attempts, like the disastrous Nintendo Virtual Boy, which was discontinued after only one year, ended in failure after failure, and everyone seemed to lose faith in VR.
Then came Palmer Luckey, arguably the father of contemporary VR thanks to his Oculus Rift. Luckey built his first prototype in 2011, when he was barely 18, and quickly raised $2 million on Kickstarter. In 2014, Facebook bought Oculus for $2 billion. Other popular VR headsets include the Samsung Gear VR and Google Cardboard.
What’s Augmented Reality?
Credit: Syrus Gold
While VR completely immerses the user in a simulated reality, AR blends the virtual and real. Like VR, an AR experience typically involves some sort of goggles through which you can view a physical reality whose elements are augmented (or supplemented) by computer-generated sensory input such as sound, video, graphics or GPS data. In augmented reality, the real and the non-real or virtual can be easily told apart.
Wearing Google Glass — the biggest effort a company has ever made to bring AR to mass consumers — you could walk through a conference hall and see things ‘pop to life’ around the booths, such as an animated 3D graphic of an architectural model, wherever the technology is supported. Goggles aren’t even necessary: mobile apps can use a smartphone’s or tablet’s camera to scan the environment while the augmented elements show on the display. There are other creative means, as well.
Unfortunately, Google Glass didn’t take off and the company discontinued the product in 2015. Instead, AR apps on smartphones are much more popular, possibly because they’re less creepy than a pair of glasses with cameras.
Pokemon Go in action. Credit: New Yorker
Perhaps the most revealing example of AR is Pokemon Go, a viral phenomenon that amassed more than 100 million downloads in a few weeks. In Pokemon Go, you use your smartphone to find Pokemon lurking in your vicinity with the help of a map built from your real-life GPS position. To catch a Pokemon, you throw a Pokeball at it by swiping on your phone’s screen, and when you toggle AR on, you see the Pokemon against the real world in the background.
Despite the hype, Pokemon GO has a minimal and basic AR interface. Some more revealing examples include:
Sky Map — a mobile app that lets you point your phone towards the sky and ‘see’ all the constellations you’re facing in relation to your position.
Word Lens — a Google app that allows you to point your phone at a sign and have it translated into your target language, instantly.
Project Tango – another Google project which aims to create a sensor-laden smartphone that can map the real world and project an accurate 3D picture of it.
“I’m excited about Augmented Reality because unlike Virtual Reality which closes the world out, AR allows individuals to be present in the world but hopefully allows an improvement on what’s happening presently… That has resonance.”
Tim Cook, CEO, Apple
Virtual reality vs augmented reality
Credit: David Amerland
The two technologies are similar in that both:
enrich the experience of a user by offering deeper layers of interaction;
have the potential to transform how people engage with technology. Entertainment, engineering and medicine are just a few of the sectors where the two technologies might have a lasting impact.
However, the two stand apart because:
virtual reality creates an entirely new, computer-generated environment. Augmented reality, on the other hand, enhances experience through digital means that add a new layer of interaction with reality, but does not seek to replace it.
AR offers a limited field of view, while VR is totally immersive.
Another way to look at it: once you strap on those VR goggles, you’re essentially disconnected from the outside world. Unlike a VR user, an AR user is constantly aware of the physical surroundings while actively engaging with simulated ones.
virtual reality typically requires a head-mounted display such as the Oculus Rift goggles, while augmented reality is far less demanding — you just need a smartphone or tablet.
What’s certain is that we’re just barely scratching the surface of what AR and VR can do. In a report earlier this year, BCC Research estimated that the global market for virtual and augmented reality will reach more than $105 billion by 2020, up from a mere $8 billion last year.
If you’re still confused, you can always use a cinematic analogy. For instance, the world of The Matrix corresponds to virtual reality, while augmented reality is akin to The Terminator. Another way to look at this is to think about scuba diving versus going to the aquarium: in virtual reality, you can swim with sharks, and with augmented reality you can have a shark pop out of your business card through the lens of a smartphone. Each has its own pros and cons, so you be the judge of which of the two is better.
Bonus: What’s Mixed Reality?
Credit: Frontiers of Science
We just made things pretty clear on what VR and AR are and where the boundary between the two lies. Following innovation in both fields, though, a third distinct medium has surfaced: mixed reality (MR).
What MR does is mix the best of augmented and virtual reality to create a … hybrid reality. I confess it gets confusing, partly because the technology is very new and it might innovate itself into something different, but the best explanation I can offer is that mixed reality overlays synthetic content on the real world. If that sounds familiar, it’s because MR is very similar to AR. The key difference is that in MR the virtual content and the real-world content are able to react to one another in real time. The interaction is facilitated by tools you’d normally see in VR, like special goggles and motion sensors.
For the sake of clarity, perhaps the best way to explain MR is to see it in action — enter Microsoft’s HoloLens as demoed with Minecraft.
Instinctively, we know a lot about the surrounding world by touching it, poking it and getting a feel for it. But that kind of information is extremely difficult to convey to computers, which is why augmented reality is still in its infancy. But a team of researchers from MIT believe they have a solution for that, and this means – among many other applications – that everyone’s favorite game could get a beautiful revamp.
It’s hard to believe that Pokemon Go came out only a few weeks ago; it seems that everyone has been chasing Pokemon for ages. Of course, nostalgia and our love for the franchise were decisive factors in the game’s success, but players were also delighted by the augmented reality. Augmented reality (AR) is a live direct or indirect view of a physical environment that includes computer-generated elements, such as Pokemon.
Image via NPR.
However, the Pokemon don’t interact with the environment around them — and this is where the new technology kicks in. Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have recently addressed just that, developing an imaging technique called Interactive Dynamic Video (IDV) that lets you reach in and “touch” objects in videos.
“This technique lets us capture the physical behavior of objects, which gives us a way to play with them in virtual space,” says CSAIL PhD student Abe Davis, who will be publishing the work this month for his final dissertation. “By making videos interactive, we can predict how objects will respond to unknown forces and explore new ways to engage with videos.”
Of course, IDV has many more, and far less trivial, practical applications. For instance, it could produce visual cues to help architects and engineers assess the structural stability and condition of a building, and it might even help with practice for invasive surgeries.
“The ability to put real-world objects into virtual models is valuable for not just the obvious entertainment applications, but also for being able to test the stress in a safe virtual environment, in a way that doesn’t harm the real-world counterpart,” says Davis.
Or of course, we could use it for games.
How it works
Typically, if you want to model something from the real world you must first build a 3D model of it. That’s time- and resource-consuming, and can be borderline impossible for many objects. With Davis’ work, even five seconds of video can contain enough information to create realistic simulations, at least in simple environments.
In order to create simulations, he analyzed video clips to find “vibration modes” at different frequencies. These vibration modes represent the way in which an object can move, and by understanding these modes, researchers can predict how the object will move in a new environment.
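The idea of finding vibration modes at different frequencies can be illustrated on a single pixel: track its intensity over time and the Fourier transform reveals the frequencies at which the object sways. This toy sketch (synthetic signal, not Davis’ actual pipeline, which analyzes motion across the whole frame) shows the core of that modal analysis.

```python
import numpy as np

# A toy stand-in for one pixel's intensity over time: an object
# vibrating at 3 Hz, sampled at 30 frames per second for 5 seconds.
rng = np.random.default_rng(1)
fps, seconds = 30, 5
t = np.arange(fps * seconds) / fps
signal = np.sin(2 * np.pi * 3.0 * t) + 0.1 * rng.standard_normal(t.size)

# The Fourier spectrum peaks at the dominant vibration frequency --
# a one-pixel version of the mode extraction IDV performs per frame.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1 / fps)
dominant = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC component
print(f"dominant vibration mode: {dominant:.1f} Hz")
```

Knowing the modes and their shapes is what lets the technique predict how the object would respond to a new, never-observed force.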
“Computer graphics allows us to use 3-D models to build interactive simulations, but the techniques can be complicated,” says Doug James, a professor of computer science at Stanford University who was not involved in the research. “Davis and his colleagues have provided a simple and clever way to extract a useful dynamics model from very tiny vibrations in video, and shown how to use it to animate an image.”
In order to test the technology, Davis used IDV on things such as a bridge, a jungle gym, and a ukulele. With just a few mouse clicks, he was able to make the images bend and stretch in different directions, and even make his own hand appear to telekinetically control the leaves of a bush.
“If you want to model how an object behaves and responds to different forces, we show that you can observe the object respond to existing forces and assume that it will respond in a consistent way to new ones,” says Davis, who also found that the technique even works on some existing videos on YouTube.
Before this technology starts affecting a Pokemon near us, we can just feast on these videos. I, for one, am looking forward to seeing the technology live.
What augmented reality tech does is combine the real and not-real in a way that immerses the user into a new world. Microsoft’s HoloLens is a prime example of what you can do with augmented reality, from turning walls into displays, to X-ray vision that allows you to see through things.
The problem with augmented reality is that you need special hardware that’s often annoying or cumbersome, like VR goggles. Some ingenious fellows, however, posted a tutorial on Instructables explaining step by step how to make your own augmented reality book. The whole setup involves a projector, a Kinect 360, and video mapping and tracking software — and absolutely no goggles, special hardware or soldering. It does involve some serious coding, though.
“We always thought augmented reality to be a great technology however it is always required to experience it while looking through a device. We wanted to try to use it in combination with projection-mapping to create a seamless and magical experience,” the authors wrote on Instructables.
Show of hands, who here doesn’t sometimes long for the good old days when you would play in the sandbox or at the beach, building mighty castles, sculpting awesome cities and raising mounds that would make the Misty Mountains look like mellow hills?
Powered by a GeForce GTX 750 Ti and OpenGL, the Augmented Reality Sandbox brings back that supreme childhood fantasy, only better – because it has technology.
The ARS lets users sculpt mountains, canyons and rivers, then fill them with water or even create erupting volcanoes.
The device was built by Glen Glesener and others at the Modeling and Educational Demonstrations Laboratory at UCLA’s Department of Earth, Planetary, and Space Sciences, using off-the-shelf parts and regular playground sand.
Shapes made in the sand are detected by an Xbox Kinect sensor, processed through open-source software, and projected back onto the table as a color-coded map.
Valleys fill with water, and the liquid flows realistically over hillsides and mountain flanks, coming to rest with glorious, computer-calculated inertia.
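The core of the sandbox loop — turning the Kinect’s height readings into a color-coded elevation map, with anything below the water line rendered as water — can be sketched in a few lines. The height values and color bands below are made up for illustration; the real sandbox software streams depth frames continuously and runs a proper fluid simulation.

```python
import numpy as np

# Hypothetical height map from the Kinect depth sensor, in centimeters
# above the table surface (a real sandbox updates this many times per second).
heights = np.array([[ 2.0,  5.0, 12.0],
                    [-1.0,  3.0,  9.0],
                    [-3.0, -1.5,  6.0]])

# Color-code elevations the way the projector does: blue for water
# (below the chosen water line), green for lowlands, brown for peaks.
def colorize(h):
    if h < 0:
        return "blue"    # flood anything below the water line
    elif h < 8:
        return "green"   # lowlands
    else:
        return "brown"   # peaks

color_map = np.vectorize(colorize)(heights)
print(color_map)
```

Projecting `color_map` back onto the sand is what closes the loop and makes a freshly dug valley instantly fill with blue.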
The Augmented Reality Sandbox is mobile and can be set up in any classroom, allowing students and teachers to see their creations come to life with color and motion, to study geographical structures and weather phenomena in real time.
In the middle of the hype for Apple’s WWDC 2013, the biggest names in technology gathered in the Santa Clara Convention Center last week for the fourth Augmented World Expo (formerly the Augmented Reality Expo). CNET calls it “the next big thing in tech”, but what exactly is augmented reality?
The World, Enhanced
The dictionary defines augment as “to make (something already developed or well under way) greater, as in size, extent, or quantity.” From that, we can take the concept of augmented reality, or AR, to refer to the enhanced view of the physical environment thanks to computer-generated input. In contrast to virtual reality, AR doesn’t transport you to a simulated world; it only builds upon the world that you see with naked eyes. If it sounds too mind-boggling, just think of Iron Man’s helmet, which supplies him with all kinds of real-time information, from stats to maps.
Perhaps the most recognized product of AR technology is Google Glass, a wearable computer with a head-mounted display. This souped-up eyewear functions similarly to a smartphone, allowing you to capture images and videos, receive messages and notifications, and look up directions, among other things. However, Steve Mann, recognized as the pioneer of wearable computing, says that Glass is just the first stage of wearable AR devices. According to Mann, “Google Glass is a third eye off to the side, but what’s coming is a generation of glasses whose information becomes immersed in your reality.”
Meet the Players
Meta, a Y Combinator-funded startup, is one of the companies attempting to take AR technology one step further. They recently launched a Kickstarter campaign for Meta 1, a 3D augmented reality headset that actually lets users interact with the virtual world using their hands. Their video demonstrates the possible applications of this hardware/software kit, from allowing shoppers to try on digital versions of clothes before buying to providing architects the tools to manipulate 3D models of their designs. Meron Gribetz, the founder of Meta, claims that they are “architecting the future of interaction…creating the keyboard and mouse of the future”. It’s a lofty goal, but judging by the fact that they’ve already surpassed their funding target, people are excited to see this technology in the market.
Another startup that’s making waves in the AR scene is Atheer. Like Meta, they want to create that digital layer—what could be called the fourth dimension—that people can interact with like they do with their smartphones and tablets, only on a bigger, three-dimensional scale. In the case of Atheer, they’re developing a mobile 3D platform that will run on Android and eventually, other operating systems. Aside from this, they plan to work with other developers on more applications of their platform. CEO Sulieman Itani says that their goal is “to create a portable device you can put in your pocket and the interface is as big as possible.”
The Future of Reality
In his article for CNET, Dan Farber discusses how the technology of wearable computing will become more and more integrated with the human body over time, evolving from glasses to high-tech contact lenses to bionic eyes. There’s already been a lot of controversy regarding Google Glass, with businesses and lawmakers trying to establish rules for its use. Some have gone to the extent of lobbying to get it banned — and it isn’t even on the market yet. Is the world ready for more technology, especially one that brings us farther from the reality that we know? It’s amazing and exciting to think of, but it’ll have big consequences for the way we live and how we interact with each other.
What do you think? Are you ready to throw out your keyboard for a virtual one?