Tag Archives: virtual reality

These stretchable gloves could let you touch stuff in VR

Credit: Cornell University.

Virtual Reality and its cousin Augmented Reality have come a long way in the last decade. But despite phenomenal progress in motion tracking and 3-D graphics, the immersive experience is lacking in the non-visual department. When you put on a VR headset, it may look and sound like you’re in another world but your body is still firmly rooted in reality. But VR may be in for an upgrade to the next level of sensory experience if these nifty gloves are any indication.

Designed at Cornell University, these gloves are fitted with stretchable fiber-optic sensors that can replicate the complex touch of your fingers in a VR environment — the implications could be pretty wild.

“Let’s say you want to have an augmented reality simulation that teaches you how to fix your car or change a tire. If you had a glove or something that could measure pressure, as well as motion, that augmented reality visualization could say, ‘Turn and then stop, so you don’t over-tighten your lug nuts.’ There’s nothing out there that does that right now, but this is an avenue to do it,” Rob Shepherd, an engineering professor at Cornell and lead author of the new study, said in a statement.

Shepherd and colleagues have been working with stretchable sensors since 2016. These sensors measure changes in the intensity of light shined through an optical waveguide in order to determine a material’s level of deformation. The team has, so far, developed all sorts of sensory materials such as optical lace and foams, as well as a stretchable lightguide for multimodal sensing (SLIMS). This latter experimental material is the focus of the researchers’ new study published in Science.

SLIMS consists of a long tube fitted with a pair of polyurethane elastomeric cores. One of the cores is transparent; the other is filled with absorbing dyes and connected to an LED. Each core is coupled with a red-green-blue sensor chip to register geometric changes in the optical path of light.

Credit: Cornell University.

Thanks to this dual-core design, the sensors can detect fine hand gestures, registering pressure, bending, and elongation.

The researchers fitted SLIMS sensors onto each finger of a 3D-printed glove, which they powered with a regular lithium battery. Sensor data was relayed via Bluetooth to a computer that reconstructs the glove’s movements and deformation in real-time.
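As a rough illustration of the sensing principle, here is a toy Python sketch of how a decoder might separate deformation modes from the dyed core’s color-channel readings. The thresholds, channel behavior, and calibration values are invented for illustration; they are not the Cornell team’s actual decoding algorithm.

```python
# Toy model of a dual-core SLIMS-style readout: uniform dimming across
# all color channels hints at stretch/pressure, while uneven dimming
# hints at bending over a dyed region. Purely illustrative assumptions.

def decode_slims(baseline, reading):
    """Estimate deformation mode from RGB intensities (0.0-1.0).

    baseline, reading: dicts with "red", "green", "blue" keys from the
    dyed core's RGB sensor chip.
    """
    # Ratio of each channel relative to its unstrained baseline.
    ratios = {ch: reading[ch] / baseline[ch] for ch in ("red", "green", "blue")}

    # Uniform dimming suggests elongation or pressure (more light lost
    # along the whole waveguide).
    overall = sum(ratios.values()) / 3.0

    # Uneven dimming suggests localized bending, since the dyes absorb
    # some wavelengths more than others.
    spread = max(ratios.values()) - min(ratios.values())

    mode = "bend" if spread > 0.1 else ("press/stretch" if overall < 0.9 else "rest")
    return overall, spread, mode
```

In this sketch, a reading where one channel dims far more than the others would be classified as a bend, while all three channels dimming together would read as pressure or stretch.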

“Right now, sensing is done mostly by vision,” Shepherd said. “We hardly ever measure touch in real life. This skin is a way to allow ourselves and machines to measure tactile interactions in a way that we now currently use the cameras in our phones. It’s using vision to measure touch. This is the most convenient and practical way to do it in a scalable way.”

For now, the researchers are working to patent the technology with immediate applications in physical therapy and sports medicine. Both fields are already benefiting from motion-tracking technology but have lacked the ability to leverage force interactions until now.


How Virtual Reality is poised to change the aviation industry

Credit: Pixnio.


Although technologists, media outlets, and fiction have been teasing it for decades, it’s only in the past couple of years that technology has caught up with consumers’ ambition for virtual reality (VR). VR is particularly exciting for gaming and entertainment, but it also has the potential to radically transform many other industries and aspects of our lives. For instance, VR is now helping surgeons with complicated operations by offering cyber training, or treating patients with schizophrenia by providing a visual space where they can meet the voices that torment them. Another huge area that’s set to be impacted by VR is aviation, where it has the power to revamp the industry. Here’s how.

Enhancing the flight experience

Some flights can take as much as eight hours, which can be excruciatingly boring. People usually pass the time by reading books, watching a movie, or listening to music. By its very nature, however, VR is a far more immersive form of entertainment, which might help make that flight from London to New York just a little more bearable.

In-flight VR could also help people who are afraid of flying. Instead of going through a traumatizing experience for hours, passengers can immerse themselves in a calm environment of their choosing, whether it is somewhere in nature or a stadium watching football. And for those on the opposite end of the spectrum, you could even enjoy a view of the airplane’s surroundings as if you were a bird high above the clouds.

Training the next generation of pilots

Credit: Bohemia Interactive Simulations (BISim).


VR is now offering a new way to train pilots beyond the capabilities of traditional flight simulators. As in a simulator, VR flight simulators such as Bohemia Interactive’s BISimulator offer cadets access to flight controls that are analogous to those in a real cockpit. However, the immersive experience means that would-be pilots go through a more realistic training scenario. Another added benefit is that training wouldn’t have to be limited by cumbersome equipment and space. Simulators emulate different kinds of cockpits for different kinds of aircraft training, whereas VR training is a lot more versatile and portable. This alone could save billions across the industry.

The French military has been using VR to train its pilots, according to a 2007 study.

Cabin crew training

The advantages of VR training also extend to the cabin crew, who need to be prepared for all kinds of special situations like emergency landings, passengers in need of medical assistance, and even terrorist hijackings. For instance, a company called Future Visual designed software that allows trainees equipped with a VR headset to inspect airplane models. Everything is exactly as in a real airplane, allowing trainees to learn firsthand how emergency doors and other important features of each aircraft work, without having to keep an aircraft grounded for training or use a life-size model. Again, there’s a lot of potential for saving costs.

Aircraft engineering

Modeling in 3D has been a ubiquitous tool in many engineering disciplines for decades. Pratt & Whitney, an engineering company, has designed virtual reality tools that allow aviation mechanics and engineers to peer inside a jet engine, for instance. There’s even an “exploded view” feature that allows engineers to examine the jet engine’s individual parts.

Over the next three to five years, as the graphics cards needed to run VR become cheaper, higher-end cards will be able to drive very large models with millions of polygons and complex lighting and shading. This is when VR engineering will truly become exciting.

There are also many other unexplored areas of aviation where VR might make an impact. This kind of technology is still in its infancy, so one can only guess what kind of developments and exciting new features will be enabled when VR and aviation fully cross paths. So far, the two are only just getting to know each other.

Virtual reality simulation makes people more compassionate towards the homeless

People who were immersed in a virtual reality experience that simulated what it would be like to lose their jobs and homes developed long-lasting empathy toward the homeless. The new study published by researchers at Stanford University found that following the VR experience, participants were more likely to sign a petition supporting affordable housing than those who interacted with a 2-D version of the scenario on a computer screen or read a narrative.

A Stanford researcher supervises a student as he navigates a VR experience that simulates what it feels like to become homeless. The scene shown here is that of an eviction notice. Credit: L.A. Cicero.

“Experiences are what define us as humans, so it’s not surprising that an intense experience in VR is more impactful than imagining something,” said Jeremy Bailenson, a professor of communication and a co-author of the paper.

Unless you’re a psychopath, chances are that your brain is wired to empathize — the ability to step into the shoes of others and understand their feelings and perspectives. Some people have more empathy than others, but that doesn’t mean that they’ve reached their full potential. A Buddhist-inspired technique for fostering empathy, for instance, involves spending a whole day being mindful of every person connected to your routine. But, nowadays, modern technology offers promising new tools to augment empathy — and virtual reality is a prime candidate.

In the United States alone, more than 10 million VR headsets have been sold in the past two years. It’s quite possible that the technology could become as ubiquitous in households as video game consoles.

But unlike traditional media, virtual reality presents exciting new opportunities owing to its immersive nature. Entertainment is obviously an important industry where VR shines, but there are also untapped opportunities in research or therapy. For instance, British researchers showed in a 2016 study how virtual reality can be used to ease depression.

Now, Stanford researchers have shown that VR can be a great tool for enhancing empathy and promoting social causes.

Unlike previous similar studies that used a small sample size and didn’t examine long-term effects, the new study involved 560 participants, aged 15 to 88, whose behavior was examined over a two-month-long period.

The research involved two studies. During one of the studies, some of the participants were immersed in a seven-minute experience called “Becoming Homeless”. The program, which was developed by Stanford’s Virtual Human Interaction Lab, puts the participant in the shoes of a person who’s forced to live on the streets. In one scene, the participant had to scour the apartment for things to sell in order to pay the rent. In another scene, the participant had to protect his belongings from being stolen while seeking shelter on a public bus.

At the end of the studies, the researchers found that the participants who took part in the “Becoming Homeless” condition were far more likely to have enduring positive attitudes toward the homeless than people who performed other tasks (reading a narrative or interacting with a 2-D version of the scenario on a computer).

For instance, participants in the “Becoming Homeless” condition were more likely to agree with statements like “Our society does not do enough to help homeless people.” Also, 85% of participants in the VR experience signed a petition supporting affordable housing, versus 63% who read the narrative and 66% who went through the 2-D version on a computer.

“We tend to think of empathy as something you either have or don’t have,” said Jamil Zaki, an assistant professor of psychology and a co-author of the paper. “But lots of studies have demonstrated that empathy isn’t just a trait. It’s something you can work on and turn up or down in different situations.”

The positive attitude toward the homeless carried on over a long period of time, suggesting that the VR experience had a long-lasting impression on the participants.

In the future, the research team plans on performing new studies to tease out the nuances of VR that influence people to behave in certain ways.

The findings appeared in the journal PLOS ONE.

Three Old Scientific Concepts Getting a Modern Look

If you have a good look at some of the underlying concepts of modern science, you might notice that some of our current notions are rooted in old scientific thinking, some of which originated in ancient times. Some of today’s scientists have even reconsidered or revamped old scientific concepts. We’ve explored some of them below.

4 Elements of the Ancient Greeks vs 4 Phases of Matter

The ancient Greek philosopher and scholar Empedocles (495-430 BC) came up with the cosmogonic belief that all matter was made up of four principal elements: earth, water, air, and fire. He further speculated that these various elements or substances could be separated and recombined. According to Empedocles, these actions were the result of two forces: love, which worked to combine the elements, and strife (or hate), which brought about their breaking apart.

What scientists refer to as elements today have few similarities with the elements examined by the Greeks thousands of years ago. However, Empedocles’ proposed quadruplet of substances bears a resemblance to what we call the four phases of matter: solid, liquid, gas, and plasma. The phases are the different forms or properties material substances can take.

Water in two states: liquid (including the clouds), and solid (ice). Image via Wikipedia.

Compare Empedocles’ substances to the modern phases of matter. “Earth” would be solid. The dirt on the ground is in a solid phase of matter. Next comes water which is a liquid; water is the most common liquid on Earth. Air, something which surrounds us constantly in our atmosphere, is a gaseous form of matter.

And lastly, we come to fire. Fire has fascinated human beings since before recorded history. Fire is similar to plasma in that both generate electromagnetic radiation such as light. Most flames you see in everyday life are not hot enough to be considered plasma; they are typically considered gaseous. A prime example of naturally occurring plasma is the Sun. The ancient four elements have an intriguing correspondent in modern science.

Ancient Concept of Dome Sky vs. Simulation Hypothesis

Millennia ago, people held the notion that the world was flat. Picture a horizontal cooking sheet with a transparent glass bowl set on top of it. Ancient peoples thought of the Earth in much the same way. They considered the land itself as flat and the sky as a dome. However, early Greek philosophers such as Pythagoras (c. 570-495 BC) — who is also known for formulating the Pythagorean theorem — understood that Earth was actually spherical.

Fast forward to the 21st century. Now scientists are considering the scientific concept of the dome once again but in a much more complex manner.

Regardless of what conspiracy lovers would have you believe, the human race has ventured into outer space, leaving the face of the Earth to travel to the stars. In the face of all our achievements, some scientists actually question if reality is real, a mindboggling and apparently laughable idea.

But some scientists have wondered if we could be existing in a computer simulation. The gap between science and science fiction starts to become very fine when considering this.

This idea calls to mind classic sci-fi plots such as those frequently played out in The Twilight Zone in which everything the characters take as real turns out to be something entirely unexpected. You might also remember the sequence in Men in Black in which the audience sees that the entire universe is inside an alien marble. Bill Nye even uses the dome as an example in discussing hypothetical virtual reality. This gives one the feeling of living in a snow globe.

Medieval Alchemy vs. Modern Chemistry

The alchemists of the Middle Ages attempted to prove that matter could be transformed from one substance into an entirely new one. One of their fondest goals was the creation of gold from a less valuable substance. They were dreaming big, but such dreams have not yet come to fruition. Could it actually be possible to alter one type of matter into another?

Well, modern chemists may be well on their way to achieving this feat some day. They are pursuing the idea of converting light into matter, as expressed in Albert Einstein’s famous equation, E = mc². Since 2014, scientists have been claiming that such an operation would be quite feasible, especially with existing technology.

Einstein’s famous equation.

Light is made up of photons, and a contraption capable of performing the conversion has been dubbed “photon-photon collider.” Though we might not be able to transform matter into other matter in the near future, it looks like the light-to-matter transformation has a bright outlook.


Virtual reality out-of-body experience makes people less fearful of death

Scientists used virtual reality to convince a group of volunteers that they were having a so-called out-of-body experience. The illusion significantly lowered the participants’ fear of death, akin to anecdotal reports of people who previously had similar mystical experiences.


Credit: Pixabay.

Man vs reality is a strong and common theme in postmodern art. Virtual reality, however, seems poised to make the fine line between reality and illusion even blurrier. Right now, virtual reality (or VR) is an extremely hot topic. Millions are flowing into various VR tech as venture capitalists sniff out the next fat unicorn to milk. Magic Leap, for instance, has received over $1.4 billion — billion — in funding from Google, Alibaba, and others. The focus is on entertainment apps, first and foremost, then education and, likely, porn. But the thing about VR is that its immersive nature can be extremely powerful — so powerful that it can drastically change people’s perspective about the world and our life here on Earth. Yes, so can a good book but VR … it’s something unprecedented.

Previously, researchers used VR to demonstrate how racial bias can be reduced when white people were put inside a black virtual body. Similar results were found after adults stepped into the shoes of virtual children, changing their perception in the process.

These sorts of studies inspired Spanish researchers from the University of Barcelona to test how people identify with a virtual body and how this might change their attitudes about death. Sixteen women were instructed how to use the Oculus Rift VR head-mounted display to create the illusion that the virtual body in the simulation was, in fact, their own biological body. To help the participants identify with the virtual body, some clever tricks were used. For instance, when a virtual ball was dropped onto the foot of the virtual body, a vibration was triggered on the person’s real foot. This is a technique called the “rubber hand illusion”, used to much effect by psychologists for years. Previously, Swedish neuroscientists used the rubber hand illusion to make people “feel” the space immediately around them, which participants described as a ‘force field’. It can also give people the illusion of being invisible.

VR

Credit: YouTube.

Once the illusion was well rooted, the researchers switched the participants’ perspective from first-person to third-person, much as gamers sometimes switch perspectives in video games. Meanwhile, 16 other women immersed themselves in the same VR simulation but never left the first-person view.

When tested with a standard questionnaire, the participants who had experienced the third-person perspective reported being less fearful of death than the control group, as reported in the journal PLOS ONE.

“Fear of death in the experimental group was found to be lower than in the control group,” researchers conclude in the study abstract. “This is in line with previous reports that naturally occurring [out-of-body experiences] are often associated with enhanced belief in life after death.”

The researchers believe the VR out-of-body experience might have given the participants the idea that body and consciousness are separate entities. If the two can be separated, then surviving death seems conceivable, which might explain the lower anxiety. People who are terminally ill, or who feel petrified by the prospect of death to the point that life becomes unbearable, might alleviate their anxiety by using a similar VR app.


The Difference between Virtual and Augmented Reality

Virtual and augmented reality seem to be on everybody’s lips nowadays, both promising to revamp the tech scene and change the way consumers interact in the digital space. Despite the hype and media attention, the two often get confused as some people use the terms interchangeably. While there are many similarities between virtual reality (VR) and augmented reality (AR), the two are definitely distinguishable. Let’s dive into these differences.

What’s Virtual Reality?


Credit: Silicon Beat

Virtual reality is a computer-simulated reality in which a user can interact with replicated real or imaginary environments. The experience is totally immersive by means of visual, auditory, and haptic (touch) stimulation, so the constructed reality is almost indistinguishable from the real deal. You’re completely inside it.

Marked by clunky beginnings, the idea of an alternate simulated reality took off in the late ’80s and early ’90s, a time when personal computer development exploded and a lot of people became excited about what technology had to offer. These early attempts, like the disastrous Nintendo Virtual Boy, which was discontinued after only one year, ended in failure after failure, and everyone seemed to lose faith in VR.

Then came Palmer Luckey, who is undoubtedly the father of contemporary VR thanks to his Oculus Rift. Luckey built his first prototype in 2011, when he was barely 18, and quickly raised $2 million on Kickstarter. In 2014, Facebook bought Oculus for $2 billion. Other popular VR headsets include the Samsung Gear VR and Google Cardboard.

What’s Augmented Reality?


Credit: Syrus Gold

While VR completely immerses the user in a simulated reality, AR blends the virtual and real. Like VR, an AR experience typically involves some sort of goggles through which you can view a physical reality whose elements are augmented (or supplemented) by computer-generated sensory input such as sound, video, graphics or GPS data. In augmented reality, the real and the non-real or virtual can be easily told apart.

Wearing Google Glass — the biggest effort a company ever made to bring AR to mass consumers — you can walk through a conference hall and see things ‘pop to life’ around the booths, such as animated 3D graphics of an architecture model if the technology is supported. The goggles aren’t even necessary since you can do this via mobile apps which use a smartphone’s or tablet’s camera to scan the environment while augmented elements will show on the display. There are other creative means, as well.

Unfortunately, Google Glass didn’t take off and the company discontinued the product in 2015. Instead, AR apps on smartphones are much more popular, possibly because they’re less creepy than a pair of glasses with cameras.

Pokemon Go in action. Credit: New Yorker


Perhaps the most revealing example of AR is Pokemon Go, a viral phenomenon which amassed more than 100 million downloads in a few weeks. In Pokemon Go, you use your smartphone to find pokemon lurking in your vicinity with the help of a map that’s built from your real-life GPS signal. To catch a pokemon you have to throw a pokeball at it by swiping on your mobile’s screen, and when you toggle AR on, you can see the pokemon with the real world in the background.
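At its core, the GPS mechanic behind a location-based AR game like this is a great-circle distance check between the player and a nearby virtual creature. Here is a minimal Python sketch; the 40-meter interaction radius is an invented placeholder, not the game’s real value.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in meters."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    # Haversine formula: accurate for the short distances a game cares about.
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def in_range(player, creature, radius_m=40):
    """True if the creature is close enough to interact with (illustrative radius)."""
    return haversine_m(*player, *creature) <= radius_m
```

A fix a few ten-thousandths of a degree away (roughly 30 meters at the equator) would count as in range, while one a full thousandth of a degree away (over 100 meters) would not.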

Despite the hype, Pokemon GO has a minimal and basic AR interface. Some more revealing examples include:

  • Sky Map — a mobile app that lets you point your phone toward the sky and ‘see’ all the constellations you’re facing in relation to your position.
  • Word Lens — a Google app that allows you to point your phone at a sign and have it translated into your target language, instantly.
  • Project Tango — another Google project which aims to create a sensor-laden smartphone that can map the real world and project an accurate 3D picture of it.

“I’m excited about Augmented Reality because unlike Virtual Reality which closes the world out, AR allows individuals to be present in the world but hopefully allows an improvement on what’s happening presently… That has resonance.”
Tim Cook, CEO, Apple

Virtual reality vs augmented reality

Credit: David Amerland


Both technologies

  • enrich the experience of a user by offering deeper layers of interactions;
  • have the potential to transform how people engage with technology. Entertainment, engineering, and medicine are just a few of the sectors where the two technologies might have a lasting impact;

However, the two stand apart because:

  • Virtual reality creates a completely new environment which is entirely computer-generated. Augmented reality, on the other hand, enhances experiences through digital means that offer a new layer of interaction with reality, but does not seek to replace it.
  • AR offers a limited field of view, while VR is totally immersive.
  • Another way to look at it is that once you strap on those VR goggles, you’re essentially disconnected from the outside world. Unlike VR, an AR user is constantly aware of the physical surroundings while actively engaged with simulated ones.
  • Virtual reality typically requires a head-mounted display such as the Oculus Rift goggles, while augmented reality is far less demanding — you just need a smartphone or tablet.

What’s sure is we’re just barely scratching the surface of what AR and VR can do. In a report earlier this year, BCC Research estimated the global market for both virtual reality and AR will reach more than $105 billion by 2020, up from a mere $8 billion last year.

If you’re still confused, you can always use a cinematic analogy. For instance, the world of The Matrix corresponds to virtual reality, while augmented reality is akin to The Terminator. Another way to look at this is to think about scuba diving versus going to the aquarium. In virtual reality, you can swim with sharks, and with augmented reality you can have a shark pop out of your business card through the lens of a smartphone. Each has its own pros and cons, so you be the judge of which of the two is better.

Bonus: What’s Mixed Reality?

mixed reality

Credit: Frontiers of Science

We just made things pretty clear on what VR and AR are and where the boundary between the two lies. Following innovation in the two fields, though, a third distinct medium has surfaced: mixed reality (MR).

What MR does is mix the best of augmented and virtual reality to create a … hybrid reality. I confess it gets confusing, partly because the technology is very new and it might innovate itself into something different, but the best explanation I can offer is that mixed reality overlays synthetic content over the real world. If that sounds familiar, it’s because MR is very similar to AR. The key difference is that in MR the virtual content and the real-world content are able to react to one another in real time. The interaction is facilitated by tools you’d normally see in VR, like special goggles and motion sensors.

For the sake of clarity, perhaps the best way to explain MR is to see it in action — enter Microsoft’s Hololens as demoed with Minecraft.

Parents might soon watch their unborn babies grow up in 3D

Modern technology is impressive, extremely useful, and sometimes a bit disturbing. Thanks to a new development, parents might soon be able to see their unborn babies developing in realistic 3-D immersive visualizations.

3D virtual model MRI view of fetus at 26 weeks.
Credit: Image courtesy of Radiological Society of North America

Even if you’re not a parent, you have to admit this is pretty cool. The new technology transforms MRI and ultrasound data into a 3-D virtual reality model of a fetus. Initially, sequential MRI slices are used as a scaffold for the model, and then the entire fetus is reconstructed in 3D. The accurate model includes the womb, umbilical cord, placenta, and fetus, and researchers argue that the technology could even be used for educational purposes – not just for eager parents.

“The 3-D fetal models combined with virtual reality immersive technologies may improve our understanding of fetal anatomical characteristics and can be used for educational purposes and as a method for parents to visualize their unborn baby,” said study co-author Heron Werner Jr., M.D., Ph.D., from the Clínica de Diagnóstico por Imagem, in Rio de Janeiro, Brazil.

The key is the virtual reality, which makes the visualization so spectacular. Dr. Werner and colleagues used the latest-generation Oculus Rift headset, a virtual reality headset developed and manufactured by Oculus and released earlier this year, one of the best virtual reality technologies available at the moment.

“The experience with the Oculus Rift has been wonderful,” Dr. Werner said. “It provides fetal images that are sharper and clearer than ultrasound and MR images viewed on a traditional display.”

But this is more than just a cool visualization technique; it could actually save lives. A big advantage is that it offers an assessment tool for fetal airway patency. Airway patency refers to whether the unborn baby’s airways are open and unobstructed, and this technology could highlight dangerous abnormalities. Researchers already report that the technology proved useful in one case, where a baby suffered from an abnormality that required postnatal surgery. They hope to use this approach more broadly over the next year.

“The physicians can have access to an immersive experience on the clinical case that they are working on, having the whole internal structure of the fetus in 3-D in order to better visualize and share the morphological information,” Dr. Werner said. “We believe that these images will help facilitate a multidisciplinary discussion about some pathologies in addition to bringing a new experience for parents when following the development of their unborn child.”

Exoskeleton glove lets you touch, grasp and feel objects in virtual reality

Credit: YouTube.

Virtual reality (VR) allows users to immerse themselves in a completely new visual environment, but the experience isn’t complete. Emulating reality would also require stimulating the other senses, like smell, taste, and touch. While smell and taste aren’t that important in this context, stimulating touch in VR could make the experience a whole lot more appealing. With this in mind, several companies are already working to bring haptic feedback for VR to the market, but Dexmo’s exoskeleton glove seems to be one of the most promising.

Users wearing the glove can touch, grasp and feel the texture of various objects. Squeezing a rubber duckling in a VR environment, for instance, feels almost like the real deal, the company claims.

“The maximum level of feedback current VR controllers give is a gentle rumble using vibration motors,” said Aler Gu, the company’s co-founder, for MIT Technology Review. “But vibration alone isn’t enough to fool the brain. The moment you detect anomalies in how objects feel, your sense of immersion is broken.”

The small team of engineers made their glove by assembling five custom-built force-feedback units, one for each finger. These motors alter the torque to simulate the stiffness of various objects: little resistance for soft objects like sponges, and more resistance for stiff objects like wooden beams, all while providing 11 degrees of freedom for the hand’s motion. In fact, in some cases, the glove’s resistance can be so great that it’s physically impossible to push through objects in the VR environment.
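One way to picture this control scheme is a simple spring model: resistance grows with how far a fingertip has “pressed into” the virtual surface, capped at the motor’s limit. The Python sketch below is purely illustrative; the stiffness values and torque cap are made up, and Dexmo’s actual control loop is not public.

```python
# Illustrative per-finger force-feedback mapping. The stiffness table and
# torque cap are invented numbers, not Dexmo's real parameters.

STIFFNESS = {"sponge": 20.0, "rubber_duck": 80.0, "wooden_beam": 400.0}  # N/m-ish
MAX_TORQUE = 1.0  # normalized actuator limit

def feedback_torque(material, penetration_m):
    """Resisting torque for how far a fingertip has pressed into an object."""
    if penetration_m <= 0:
        return 0.0  # finger is not touching the virtual surface
    # Hooke's-law style: stiffer materials resist harder per meter pressed.
    raw = STIFFNESS[material] * penetration_m
    # Clamp at the actuator limit; for stiff objects this saturates almost
    # immediately, which is what makes them feel impenetrable.
    return min(raw, MAX_TORQUE)
```

Under this toy model, pressing one centimeter into a sponge yields only a fifth of the maximum resistance, while the same press into a wooden beam saturates the motor, which matches the article’s description of objects that can’t be pushed through.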

I know a lot of gamers are excited by the prospect of using Dexmo’s glove, but the potential applications are more versatile. Engineers could make good use of it in computer-aided design (CAD) and medical personnel might profit by using the glove in a VR setting for training.

The glove runs on battery and can be operated wirelessly. There’s no word on the pricing yet, but Gu said it will be something “eventually everybody should be able to afford.”

Virtual reality could help in depression treatment

The future is now – a new study found that virtual reality can help alleviate depressive symptoms.

Photo by Nan Palmero.


The therapy was previously tested on healthy people and showed positive results. Now, the British team tested their idea on 15 depression patients aged 23-61. Nine reported reduced depressive symptoms a month after the therapy, and four of them reported clinical improvements in the severity of their symptoms.

Basically, the idea is to make people less harsh on themselves.

“People who struggle with anxiety and depression can be excessively self-critical when things go wrong in their lives,” explains study lead Professor Chris Brewin (UCL Clinical, Educational & Health Psychology).

In the virtual reality setting, patients see a life-size ‘avatar’ or virtual body. The idea is to make the experience as realistic as possible, so that the patient identifies with the avatar. Then, the avatar has to comfort a visibly distressed virtual child. It’s a very simple eight-minute scenario, repeated once a week for three weeks.

“In this study, by comforting the child and then hearing their own words back, patients are indirectly giving themselves compassion. The aim was to teach patients to be more compassionate towards themselves and less self-critical, and we saw promising results. A month after the study, several patients described how their experience had changed their response to real-life situations in which they would previously have been self-critical.”

The results were surprisingly good, but the sample size was too small to draw definite conclusions. The study does do a great job as a proof of concept, and now the results have to be replicated at a larger scale – because the potential seems to be there.

“We now hope to develop the technique further to conduct a larger controlled trial, so that we can confidently determine any clinical benefit,” says co-author Professor Mel Slater (ICREA-University of Barcelona and UCL Computer Science). “If a substantial benefit is seen, then this therapy could have huge potential. The recent marketing of low-cost home virtual reality systems means that methods such as this could potentially be part of every home and be used on a widespread basis.”

 


The ‘Next Big Things’ in Science Ten Years from Now

ZME Science reports the latest trends and advances in science on a daily basis. We believe this kind of reporting helps people keep up with an ever-changing world, while also fueling inspiration to do better.

But it can also get frustrating when you read about 44% efficiency solar panels and you, as a consumer, can’t have them. Of course, there is a natural time lag as the wave of innovation travels from early adopters to mainstream consumers. The first fully functional digital computer, the ENIAC, was unveiled in 1946, but it wasn’t until 1975 that Ed Roberts introduced the first personal computer, the Altair 8800. Think touch screen tech is a new thing? The first touch screen was invented by E.A. Johnson at the Royal Radar Establishment, Malvern, UK, between 1965 and 1967. In the 80s and 90s, companies like Hewlett-Packard and Microsoft introduced several touch screen products with modest commercial success. It wasn’t until 2007, when Apple released the first iPhone, that touch screens really became popular and accessible. And the list goes on.


The point I’m trying to make is that all the exciting stuff we’re seeing coming out of cutting-edge labs around the world will take time to mature and become truly integrated into society. It’s in the bubble stage, and for some the bubble will pop and the tech won’t survive. Other inventions and research might resurface many decades from now.

So, what’s the future going to look like ten years from now? What’s the next big thing? It’s my personal opinion that, given the current pace of technological advancement, these sorts of estimates are very difficult, if not impossible, to make. As such, here are just a few of my guesses as to what technologies — some new, others improved versions of what’s already mainstream today — will become an integral part of society in the future.

The next five years

Wearable devices

A hot trend right now is integrating technology into wearable devices. Glasses with cameras (such as Google Glass) or watches that answer your phone calls (like the Apple Watch) are just a few products that are very popular right now. Industry experts believe we’re just scratching the surface, though.

Thanks to flexible electronics, clothing will soon house computers, sensors, or wireless receivers. But most of these need to connect to a smartphone to work. The real explosion of wearable tech might happen once these are able to break free and work independently.

“Smart devices, until they become untethered or do something interesting on their own, will be too complicated and not really fulfill the promise of what smart devices can do,” Mike Bell, head of Intel’s mobile business, said. “These devices have to be standalone and do something great on their own to get mass adoption. Then if they can do something else once you pair it, that’s fine.”

Internet of Things

In line with wearable devices is the Internet of Things — machines talking to one another, with computer-connected humans observing, analyzing, and acting upon the resulting ‘big data’ explosion. Refrigerators, toasters, and even trash cans could be computerized and, most importantly, networked. One of the better-known examples is Google’s Nest thermostat.

This Wi-Fi-connected thermostat allows you to remotely adjust the temperature of your home via your mobile device and also learns your behavioral patterns to create a temperature-setting schedule. Nest was acquired by Google for $3.2 billion in 2014. Another company, SmartThings, which Samsung acquired in August, offers various sensors and smart-home kits that can monitor things like who is coming in and out of your house and can alert you to potential water leaks to give homeowners peace of mind. Fed by sensors soon to number in the trillions, working with intelligent systems in the billions, and involving millions of applications, the Internet of Things will drive new consumer and business behavior the likes of which we’ve yet to see.
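The "learns your behavioral patterns" part can be pictured with a toy sketch: log the setpoints a user dials in at each hour over several days, then replay the average as a schedule. This is only an illustration of the idea, with invented numbers, not Nest's actual algorithm:

```python
from collections import defaultdict

# Toy schedule learner: average the temperatures a user manually set at
# each hour of the day, and use that average as the learned schedule.
def learn_schedule(adjustments):
    """adjustments: list of (hour, set_temperature_celsius) events."""
    by_hour = defaultdict(list)
    for hour, temp in adjustments:
        by_hour[hour].append(temp)
    return {hour: sum(temps) / len(temps) for hour, temps in by_hour.items()}

# Two days of manual tweaks: warmer in the morning, cooler at night
events = [(7, 21.0), (7, 22.0), (23, 17.0), (23, 18.0)]
schedule = learn_schedule(events)
print(schedule[7], schedule[23])  # 21.5 17.5
```

A real thermostat would also fold in occupancy sensing and weather data, but the core loop is the same: observe, aggregate, replay.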

Big Data and Machine Learning

Big data is a hyped buzzword nowadays that’s used to describe massive sets of (both structured and unstructured) data which are hard to process using conventional techniques. Big data analytics can reveal insights previously hidden by data too costly to process. One example is peer influence among customers revealed by analyzing shoppers’ transaction, social, and geographical data.

With more and more information being stored online, especially as the Internet of Things and wearable tech gain in popularity, the world will soon reach an overload threshold. Sifting through this massive volume is thus imperative, and this is where machine learning comes in. Machine learning doesn’t refer to household robots, though. Instead, it’s a concept much closer to home. For instance, your email has a spam folder where emails that fit a certain pattern are filtered out by an algorithm that has learned to distinguish between “spam” and “not spam”. Similarly, your Facebook feed is filled with posts from your closest friends because an algorithm has learned what your preferences are based on your interactions — likes, comments, shares, and clickthroughs.
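To make the spam example concrete, here is a toy word-counting classifier. It only illustrates the pattern-learning idea; real filters use far more sophisticated statistics (naive Bayes over huge corpora, for instance), and the training data below is invented:

```python
from collections import Counter

# Toy spam filter: learn word frequencies from labeled examples,
# then score new messages word by word.
def train(messages):
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in messages:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    score = 0
    for word in text.lower().split():
        # +1 smoothing so unseen words don't dominate
        spam = counts["spam"][word] + 1
        ham = counts["ham"][word] + 1
        score += 1 if spam > ham else -1 if ham > spam else 0
    return "spam" if score > 0 else "not spam"

data = [
    ("win a free prize now", "spam"),
    ("claim your free money", "spam"),
    ("meeting notes attached", "ham"),
    ("lunch tomorrow with the team", "ham"),
]
model = train(data)
print(classify(model, "free prize money"))        # spam
print(classify(model, "notes from the meeting"))  # not spam
```

The Facebook-feed example works the same way, only the "words" are your likes, comments, shares, and clickthroughs.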

Where big data and machine learning meet, an informational revolution awaits, and there’s no field where the transformative potential is greater than medicine. Doctors will be aided by smart algorithms that mine their patients’ datasets, complete with previous diagnoses and genetic information. The algorithm would go through the vast records and correlate them with known medical information. For instance, a cancer patient might come in for treatment; the doctor would then be informed that, since the patient carries a certain gene or set of genes, a customized treatment would apply. Amazing!

Cryptocurrency

You might have heard of Bitcoin, but it’s not the only form of cryptocurrency; today, there are thousands. Unlike government-backed currencies, which are usually regulated and issued by a central bank, cryptocurrencies are generated by computers solving complex mathematical puzzles and rely on decentralized, peer-to-peer networks. While these seemed like a fad a few years ago, things are a lot more serious now. Shortly after Bitcoin’s creation, one user spent 10,000 Bitcoin on two pizzas. That same amount of Bitcoin would be worth about $8 million a few short years later. Today, it’s worth around $63 million.
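The "computers solving puzzles" part is proof of work: miners hunt for a number (a nonce) that makes the hash of the block data fall below a target. A toy version, with a much easier target than real Bitcoin (which uses double SHA-256 and an adjustable difficulty):

```python
import hashlib

# Toy proof of work: find a nonce so that sha256(data + nonce)
# starts with a given number of zero hex digits.
def mine(block_data: str, difficulty: int = 4) -> int:
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

nonce = mine("alice pays bob 1 coin", difficulty=4)
digest = hashlib.sha256(f"alice pays bob 1 coin{nonce}".encode()).hexdigest()
print(nonce, digest[:8])  # the digest begins with four zeros
```

Finding the nonce is slow, but anyone can verify it with a single hash, which is what lets a decentralized network agree on who did the work.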

There’s much debate surrounding cryptocurrency. For instance, because it’s decentralized and anonymous, Bitcoin has been, and still is, used to fund illegal activities. There’s also always the risk of a computer crash erasing your wallet or a hacker ransacking your virtual vault. Most of these concerns aren’t all that different from those surrounding traditional money, though, and with time, cryptocurrencies could become very secure.

Driverless cars

In 2012, California was the first state to formally legalize driverless cars. The UK is set to follow this year.

Some 1.2 million people worldwide die in car accidents every year. Tests so far have shown that driverless cars are very safe and should greatly reduce motor accidents. In fact, if all the cars on a motorway were driverless and networked, then theoretically no accident should ever occur. Moreover, algorithms would ensure the best possible traffic flow, calculating what velocity each car should travel relative to the others so that the whole column moves forward at maximum speed. Of course, this would mean that most people would have to give up driving, which isn’t an option for those who enjoy it. Even so, you could get to work alone in the car without a driver’s license. “Almost every car company is working on automated vehicles,” says Sven Beiker, the executive director of the Center for Automotive Research at Stanford.
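The "velocity relative to the others" idea can be sketched with a constant time-headway rule, a common simplification in platooning research: each networked car matches the leader's speed while keeping a gap proportional to that speed. The parameters here are invented for illustration:

```python
# Constant time-headway platooning sketch: a follower converges on the
# leader's speed while nudging its gap toward (speed * headway).
def follower_speed(leader_speed_ms, gap_m, headway_s=1.0, gain=0.5):
    """Speed (m/s) a follower should adopt given the leader's speed and gap."""
    desired_gap = leader_speed_ms * headway_s
    # Close (or open) the gap gradually while tracking the leader
    return leader_speed_ms + gain * (gap_m - desired_gap)

# Leader cruises at 30 m/s: a car 40 m behind speeds up to close in,
# a car only 20 m behind eases off to fall back.
print(follower_speed(30.0, 40.0))  # 35.0
print(follower_speed(30.0, 20.0))  # 25.0
```

Run down a whole column of cars, this kind of rule is what would keep a networked motorway flowing at maximum speed without collisions.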

3D printing

A 3D printer reads every slice (or 2D image) of your virtual object and proceeds to build it up layer by layer, blending the layers together with no visible seams into a single 3D object. It’s not exactly new: companies, especially in R&D or the automotive business, have been using 3D printers to make molds and prototypes for more than two decades. What’s new is how this technology has reached the common folk. Nowadays, you can buy a decent 3D printer for less than $600. With it, you can print spare parts for your broken machines, make art, or whatever else suits your fancy.

You don’t even have to know how to design. Digital libraries for 3D parts are growing rapidly and soon enough you should be able to print whatever you need. The technology itself is also advancing. We’ve seen 3D printed homes, cars, or ears, and this is just the beginning. Scientists believe they can eventually 3D print functioning organs that are custom made for each patient, saving millions of lives each year.

Virtual reality

The roots of virtual reality can be traced to the late 1950s, at a time when computers were Goliaths the size of a house. A young electrical engineer and former naval radar technician named Douglas Engelbart saw the computer’s potential as a digital display and laid the foundation for virtual reality. Fast forward to today and not that much has become of VR — at least not the way we’ve seen in movies.

But if we were to try on the proverbial VR goggles what insight into the future might they grant? Well, you’d see a place for VR that goes far beyond video games, like the kind Oculus Rift strives towards. Multi-player VR provides the foundation by which a class of students can go on a virtual tour of the Egyptian pyramids, let a group of friends watch the latest episode of “Game of Thrones” together, or let the elderly experience what it is like to share a visit with their grandkids who may be halfway around the world. Where VR might be most useful is not in fabricating fantasies, but enriching reality by connecting people like never before. It’s terribly exciting.

Genomics

It’s been 10 years since the human genome was first sequenced. In that time, the cost of sequencing per person has fallen from $2.7bn to just $5,000! Raymond McAuley, a leading genomics researcher, predicted in a lecture at Singularity University’s Exponential Finance 2014 conference that we will be sequencing DNA for pennies by 2020.  When sequencing is applied to a mass population, we will have mass data, and who knows what that data will reveal?

The next ten years

Nanotechnology

There is increasing optimism that nanotechnology applied to medicine and dentistry will bring significant advances in the diagnosis, treatment, and prevention of disease. Many researchers believe scientific devices that are dwarfed by dust mites may one day be capable of grand biomedical miracles.

Donald Eigler is renowned for his breakthrough work in the precise manipulation of matter at the atomic level. In 1989, he spelled the letters IBM using 35 carefully manipulated individual xenon atoms. He imagines one day “hijacking the brilliant mechanisms of biology” to create functional non-biological nanosystems. “In my dreams I can imagine some environmentally safe virus, which, by design, manufactures and spits out a 64-bit adder. We then just flow the virus’s effluent over our chips and have the adders attach in just the right places. That’s pretty far-fetched stuff, but I think it less far-fetched than Feynman in ’59.”

Angela Belcher is widely known for her work on evolving new materials for energy, electronics, and the environment. The W. M. Keck Professor of Energy, Materials Science & Engineering and Biological Engineering at the Massachusetts Institute of Technology, Belcher believes the big impact of nanotechnology and nanoscience will be in manufacturing -– specifically clean manufacturing of materials with new routes to the synthesis of materials, less waste, and self-assembling materials.

“It’s happening right now, if you look at the manufacturing of certain materials for, say, batteries for vehicles, which is based on nanostructuring of materials and getting the right combination of materials together at the nanoscale. Imagine what a big impact that could have in the environment in terms of reducing fossil fuels. So clean manufacturing is one area where I think we will definitely see advances in the next 10 years or so.”

David Awschalom is a professor of physics and electrical and computer engineering at the University of California, Santa Barbara. A pioneer in the field of semiconductor spintronics, Awschalom would like to see the emergence of genuine quantum technology in the next decade or two. “I’m thinking about possible multifunctional systems that combine logic, storage, communication as powerful quantum objects based on single particles in nature. And whether this is rooted in a biological system, or a chemical system, or a solid state system may not matter and may lead to revolutionary applications in technology, medicine, energy, or other areas.”

Graphene

ZME Science has never backed down from praising graphene, the one-atom-thick carbon allotrope arranged in a hexagonal lattice — and for good reason, too. Here are just a few highlights we’ve reported: it can repair itself; it’s the thinnest compound known to us; the lightest material (1 square meter weighs around 0.77 milligrams); the strongest compound discovered (100-300 times stronger than steel, with a tensile stiffness of 150,000,000 psi); the best conductor of heat at room temperature; and the best conductor of electricity (studies have shown electron mobility values of more than 15,000 cm2·V−1·s−1). It can be used to make almost anything, from aircraft to bulletproof vests ten times more protective than steel to fuel cells. It can even be turned into an anti-cancer agent. Most of all, however, its transformative potential is greatest in electronics, where it could replace poor old silicon, which is hard pressed to keep up with Moore’s law.

Reading all this, it’s easy to hail graphene as the wonder material of the new age of technology that is to come. So, what’s next? Manufacturing, of course. The biggest hurdle scientists are currently facing is producing bulk graphene that is pure enough for industrial applications at a reasonable price. Once this is settled, who knows what will happen.

Mars Colony

After Neil Armstrong’s historic moonwalk, the world was drunk on dreams of conquering space. You’ve probably seen or heard the ‘prophecies’ made in those days about how the world might look in the year 2000. But no, we don’t have moon bases, flying cars, or a cure for cancer — yet.

In time, interest in manned space exploration dwindled, something that has unfortunately been reflected in NASA’s present budget. Progress has still been made, albeit not at the pace some might have liked. The International Space Station is a fantastic collaborative effort now nearing two decades of continuous manned operation. Only two years ago, NASA landed the Curiosity rover, which is currently roaming the Red Planet and relaying startling facts about our neighboring planet. By all signs, humans will walk on Mars, and when this happens, as with Armstrong before, a rejuvenated wave of enthusiasm for space exploration will ripple through society. Ultimately, this will be consolidated with a manned outpost on Mars. I know what you must be thinking, but if we’re to lend our ears to NASA officials, this target isn’t that far off. By all accounts, it will most likely happen during your lifetime.

Beginning in 2018, NASA’s powerful Space Launch System rocket is slated to become operational, testing new abilities for space exploration, like a planned manned landing on an asteroid in 2025. Human missions to Mars will rely on Orion and an evolved version of SLS that will be the most powerful launch vehicle ever flown. Hopefully, NASA will fly astronauts to Mars (marstronauts?) sometime during the 2030s. Don’t get your hopes up too much for Mars One, however.

Wireless electricity

We’ve known about the possibility for more than a century, demonstrated most famously by the great Tesla during his lectures. The scientist would hang a light bulb in the air and it would light up — all without any wires! The audience was dazzled every time. But this wasn’t a parlor trick — just a matter of current by induction.

Basically, Tesla relied on sets of huge coils which generated a magnetic field that induced a current in the light bulb. Voila! In the future, wireless electricity will be accessible to anyone — as easy as WiFi is today. Smartphones will charge in your pocket as you wander around, televisions will flicker with no wires attached, and electric cars will refuel while sitting in the driveway. In fact, the technology is already in place; what is required is a huge infrastructure leap. Essentially, wirelessly charged devices need to be compatible with the charging stations, and this requires a lot of effort from both the charging suppliers and the device manufacturers. We’re getting there, though.

Nuclear Fusion

Nuclear fusion is essentially the opposite of nuclear fission. In fission, a heavy nucleus is split into smaller nuclei. With fusion, lighter nuclei are fused into a heavier nucleus.

The fusion process is the reaction that powers the sun. There, in a series of nuclear reactions, four hydrogen-1 nuclei are fused into one helium-4 nucleus, releasing a tremendous amount of energy. The goal of scientists for the last 50 years has been the controlled release of energy from a fusion reaction. If the energy from a fusion reaction can be released slowly, it can be used to produce electricity in virtually unlimited quantities. Furthermore, there are no long-lived waste materials to deal with or contaminants to harm the atmosphere. To achieve the nuclear fusion dream, scientists need to overcome three main constraints:

  • temperature (you need to put in a lot of energy to kick off fusion; the hydrogen nuclei need to be heated to around 40,000,000 kelvin — hotter than the core of the sun!)
  • time (charged nuclei must be held together close enough and long enough for the fusion reaction to start)
  • containment (at that temperature everything is a gas, so containment is a major challenge).
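The payoff for overcoming those constraints can be checked with a back-of-the-envelope calculation: the energy released per fusion event follows from the mass defect via E = mc², using standard atomic masses in unified atomic mass units:

```python
# Energy yield of the sun's net fusion reaction (4 H-1 -> He-4),
# via the mass defect and E = mc^2.
# Atomic masses in unified atomic mass units (u); 1 u = 931.494 MeV/c^2.
M_H1 = 1.007825   # hydrogen-1
M_HE4 = 4.002602  # helium-4
U_TO_MEV = 931.494

mass_defect = 4 * M_H1 - M_HE4       # mass that "disappears", in u
energy_mev = mass_defect * U_TO_MEV  # energy released per helium-4 nucleus
print(f"{energy_mev:.1f} MeV")       # roughly 26.7 MeV
```

About 0.7% of the fuel's mass is converted to energy, millions of times more per kilogram than any chemical reaction, which is why the payoff is worth the engineering nightmare.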

Though other projects exist elsewhere, nuclear fusion today is championed by the International Thermonuclear Experimental Reactor (ITER) project, founded in 1985, when the Soviet Union proposed to the U.S. that the countries work together to explore the peaceful applications of nuclear fusion. Since then, ITER has ballooned into a 35-country project with an estimated $50 billion price tag.

Key structures are still being built at ITER, and when ready the reactor will stand 100 feet tall, weigh 23,000 tons, and its core will be hotter than the sun. Once turned on (hopefully successfully), the ITER could solve the world’s energy problems for the foreseeable future, and help save the planet from environmental catastrophe.

VR bicycle

Cycling while playing virtual reality games: will this convince people to exercise?


It’s no secret that exercising, along with a healthy diet, is the best way to stay lean, healthy, and strong. A lot of people, including yours truly, however, lack the discipline to seriously commit to regular exercise. Part of the problem lies in framing it as a drag: something unpleasant that involves making sacrifices. A San Francisco startup is trying a novel approach to entice people to exercise more: combining people’s love for video games with a virtual-reality-assisted bicycle.


The company, called Virzoom, uses a special bicycle designed to sync with the Sony PlayStation VR, HTC Vive, and Oculus Rift. At the ends of the handlebars are gaming controllers that allow the user to perform various in-game functions and control the environment. The bike also carries sensors and a vitals monitor to keep track of the user’s gestures and movements. So far, the company has released five games.


One is a Wild West-themed game in which you ride a horse and have to lasso bandits. The faster you pedal, the faster the horse gallops. To catch a bandit you just have to stare at him until a colour gauge appears, then fire the lasso when it hits green. Another game sees you riding a pegasus through a forest filled with fruit, which the winged horse needs in order to keep flapping. Again, speed is synced with pedaling, and leaning left or right on the bike steers the pegasus left or right.
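The control mapping described above boils down to two numbers: pedaling cadence sets speed, lean angle sets heading. A minimal sketch of how such a mapping might look (all values invented; this is not Virzoom's actual code):

```python
# Hypothetical cadence-and-lean mapping for a VR exercise bike.
def game_velocity(cadence_rpm, lean_deg, max_speed=12.0):
    # Speed scales with cadence, capped at a "full gallop" of 90 rpm
    speed = min(cadence_rpm / 90.0, 1.0) * max_speed
    # Lean maps to a steering value in [-1, 1], saturating at 30 degrees
    heading = max(-1.0, min(1.0, lean_deg / 30.0))
    return speed, heading

print(game_velocity(45, -15))  # half speed, turning left: (6.0, -0.5)
```

Clamping both inputs keeps an over-enthusiastic rider from breaking the game's physics.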

You can also play in multiplayer so the competition gets your heart pumping even more.

The whole concept sounds interesting, but will it fool anyone? People who exercise often won’t be enticed to buy this product simply because it distracts them from the real workout. Wearing a VR headset while exercising can be very uncomfortable: the unit is bulky, and your head can feel like a sauna once you really start to burn some fat. On the other hand, people who hate exercising but love video games (the target audience) might get bored with the interface fast. Hayden Dingman from PCWorld had a hands-on test with the device and says he isn’t sold. He thought it was boring, and the VR environment made him feel sick because it doesn’t emulate real-life physics properly (a huge problem for all VR developers).

I like the idea, but I wouldn’t use it. I don’t enjoy exercising (few people do), but having to distract yourself to the point of total VR immersion seems like a stretch for now. Maybe I’m just not ready. What about you? Would you buy it?

Virzoom’s stationary bike will be released in 2016. You can pre-order now for $200 ($50 off from $250).

Kickstarter project plans to put you virtually on the ISS

There is no denying that the view from the International Space Station is spectacular. But precious few of us will ever get to visit it and yell enthusiastically “I can see my house from here” to the sound of sighing astronauts, and asking to do so without taking a long and probably tiresome trip is just silly.

“Only 536 people have ever been to space; at SpaceVR we ask, what about the other 7 billion?”

Well, what about them, go on!

SpaceVR started a Kickstarter campaign today with the goal of sending a 3D, 360-degree camera to the ISS. This camera will collect footage that you can then view in virtual reality goggles. Space-views from the comfort of your home? Yes please!

The Overview One, the camera that SpaceVR plans to send to the ISS. Image via Kickstarter


“We have all dreamed of the stars. Imagine being able to float through the space station, experience a space walk, or even explore the Moon and Mars. Our goal at SpaceVR is to bring space exploration within reach of everyone. We are sending a 360-degree camera to the International Space Station (ISS) to collect footage that anyone can experience using virtual reality headsets. With SpaceVR, now anyone can be an astronaut!” their Kickstarter page reads.

The name of the project comes from what astronauts call the Overview Effect: the feeling, which one gets from seeing Earth from outer space, that our planet is a tiny, fragile ball of life, “hanging in the void”, shielded and nourished by a paper-thin atmosphere.

“The idea of national boundaries vanishes, the conflicts that divide people become irrelevant, and the need to come together as a civilization to protect this “pale blue dot” becomes both obvious and imperative” SpaceVR’s pitch says.

The company doesn’t want to limit our journey to the ISS alone, either. Other goals they set for their future include:

  • Live-streaming content from space to your VR headset. You can see what’s happening in orbit in real-time.
  • Sending a VR camera to the moon in 2017.
  • Landing a VR camera on an asteroid in 2022.
  • Launching a remote controllable cube-sat VR camera system into orbit. You can not only control where the satellite goes, but also see exactly what it’s seeing from your headset!
  • Going to Mars as soon as 2026.

At the time I’m writing this, the campaign has 142 backers, having gathered $9,916 of its $500,000 goal.

 

Photo of Sidekick tested in zero-g. Image: NASA

Astronauts on the ISS will soon work with holographic goggles: the HoloLens

This weekend, SpaceX is scheduled to deliver cargo and other much-needed supplies to the International Space Station via its Dragon capsule. Among the supplies is a surprise for the astronauts on board: the latest high-tech gadget from Microsoft, the HoloLens. If you missed ZME Science’s feature on the HoloLens, you’re in for a treat. Basically, the tech uses holographic computing, which enables you to mix virtual reality with... actual reality. Holograms in your kitchen, weather reports on your coffee cup. Really, anything is possible with the HoloLens, let alone in the final frontier: space.


Microsoft’s HoloLens has been modified per NASA’s specifications under a project called “Sidekick.” As the name suggests, the virtual reality goggles and systems will help the astronauts perform better on their missions and, most importantly, improve their training.

“HoloLens and other virtual and mixed reality devices are cutting edge technologies that could help drive future exploration and provide new capabilities to the men and women conducting critical science on the International Space Station,” NASA’s Sam Scimemi said in a statement. “This new technology could also empower future explorers requiring greater autonomy on the journey to Mars.”

Sidekick works in two modes: Remote Expert Mode and Procedure Mode. In the first, an expert on Earth is connected with the astronaut via Skype. The specialist on the ground can guide the astronaut through procedures, draw pictures that show up in the astronaut’s interface, play animations, and so on. In Procedure Mode, astronauts can overlay holographic animations on their real-time view to help them get up to speed with their tasks. Scenario: an astronaut is tasked with repairing a satellite or telescope. The astronaut activates the Sidekick and is shown animations or static graphics of the arrangement of the defective parts, how they work, or what needs to be replaced – all in real time, superimposed over the real deal. This is possible with HoloLens, although not quite yet.

For now, the set of Sidekicks delivered to the ISS over the weekend will only work in Procedure Mode, with limited capabilities, just to get the astronauts familiar with the gear. At an unspecified future date, a second set of Sidekicks will arrive on the station to test the network functionality of Remote Expert Mode.

That’s not all. Microsoft is working with NASA on another project, called “OnSight,” which will use similar holographic computing technology to virtually visit Mars. Astronauts can step into the shoes, so to speak, of a Martian rover and effectively borrow its eyes and ears. Yes, astronauts can technically visit the Red Planet without ever setting foot on it. Amazing!

 
virtual reality porn

Teledildonics is here: sex toys linked to virtual reality



Don’t make that face. It’s not like you didn’t see it coming; with each technological step forward, porn has always shared the ride. Among the oldest surviving erotic depictions are Paleolithic cave paintings and carvings. Prints became very popular in Europe from the middle of the fifteenth century and, because of their compact nature, were very suitable for erotic depictions that did not need to be permanently on display. An earthier eroticism is seen in a printing plate of 1475-1500 for an Allegory of Copulation: a young couple is having sex at one end of a bench, the woman’s legs high in the air, while at the other end a huge penis, with legs and wings and a bell tied around the bottom of the glans, is climbing onto the bench. The oldest surviving permanent photograph was created by Joseph Nicéphore Niépce in 1826; porn likely soon followed thereafter. Imagine what happened once film came along. Never mind the internet. Though the Oculus Rift is still in beta and only a handful of developers own one, virtual reality is certain to change how people enjoy porn.

We’re already seeing the first steps: a company called VirtualRealPorn has partnered with a high-tech sex toy developer called Lovense to sync sex gadgets with virtual reality. They designed and coded VR porn videos and synced them with remote-controlled, Bluetooth-enabled dildos and rubber vaginas, all so you can feel what you’re seeing. For instance, the Nora sex toy for women, which looks like a creepy rabbit glove, vibrates to match the speed of the male performer in the VR video, while a smaller arm vibrates when the two performers’ bodies collide. Likewise, the Max vagina for men uses air pumps to contract the walls of the toy to match the speed of the female performer.

The Nora sex toy. Image: Lovense


“This was an unexpected partnership, but we believe it is a positive development for Lovense,” said founder Dan Liu in a statement. “Our focus is to use sex tech products to solve problems for consumers, but when they approached us, we immediately saw the potential. They are the top virtual porn content producers, and we are a leading teledildonics [computer-controlled sex toys] company, so the partnership felt natural. We are both pioneers in our respective industries.”

That’s quite a lot of trouble just for the sake of masturbation: you need to strap on a bulky, fairly heavy set of VR goggles, then wear an artificial mating device on top of that. Some might feel it’s worth the effort.

“Viewing virtual porn is more work and requires more components compared to viewing regular porn,” Liu told Wired. “But viewing regular porn is sometimes too much work, too. It’s not unusual for people to spend a lot of their time finding the right video and a fraction of the time actually getting off.”

Liu started his company after he had a long-distance relationship with a girlfriend in China while he was in the United Kingdom. It was tough to communicate sexually, so he began to explore options for long-distance sex. That’s how he eventually came upon the idea for Lovense’s sex toys.

The Max. Image: Lovense


To use VR porn, you’ll need to install VirtualRealPlayer, which is currently only compatible with Windows computers and the Oculus DK1 and DK2 headsets, though developers are working on apps for iOS and Android. So far, VirtualRealPorn has made only 32 synced videos, each with a 180-degree field of vision, but it plans on releasing at least one VR-coded video per month. Output will likely scale with demand, especially once the company finds a way to automate the process with algorithms.

“[We] code all the motions for the toy in each video and integrate them with our video player,” says VirtualRealPorn co-founder Leonor Laplaza. “We’re looking for ways to automate the integration, but for now it’s all done manually.”
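The manual motion-coding Laplaza describes can be pictured as a simple cue track: playback timestamps paired with intensity values that the player looks up as the video runs. Here is a minimal sketch in Python; all names, value ranges and the lookup scheme are illustrative assumptions, not anything from Lovense’s or VirtualRealPorn’s actual software:

```python
import bisect

def build_cue_track(cues):
    """Sort (timestamp_sec, intensity) cues so they can be searched quickly."""
    return sorted(cues)

def intensity_at(track, t):
    """Return the intensity of the most recent cue at playback time t."""
    times = [ts for ts, _ in track]
    i = bisect.bisect_right(times, t) - 1
    return track[i][1] if i >= 0 else 0  # silent before the first cue

# A hand-coded track: at 12.5 s into the video the intensity jumps to 8, etc.
track = build_cue_track([(0.0, 2), (12.5, 8), (30.0, 15), (45.0, 5)])
print(intensity_at(track, 20.0))  # 20 s falls between the 12.5 s and 30 s cues, prints 8
```

The player would poll `intensity_at` a few times per second and forward the value to the toy over Bluetooth, which is presumably what automating the integration would replace with motion detected from the video itself.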

The Nora and Max are already pretty high-tech as they are. The original offerings are synced to work with each other via Skype or a mobile app, geared towards long-distance partners. In fact, there’s even a mod that syncs their throbs and pulses to music.

“The goal was to create an experience that felt completely immersive,” says VirtualRealPorn’s Laplaza. “We dream that teledildonics will enable users to not have to interact at all—or that any of their interaction is reflected in the content.”

But what’s the ultimate in technology-driven porn? One can imagine an electroencephalogram that reads brain activity, then directs pulses of electricity to key areas of the brain so they fire in a specific pattern. This would trigger a dream-like state, effectively immersing you in a fantasy land with a partner, or just by yourself if you want to fly solo… all in your brain. But I’ll leave that to the field of teledildonics to elaborate.


How Oculus Rift could revolutionise Social Psychology

Upon acquiring virtual reality company Oculus, Facebook CEO Mark Zuckerberg predicted that virtual reality technology would one day permeate areas of life far beyond gaming, and that we would ‘someday [use virtual reality] to enjoy a courtside seat at a basketball game, study in a classroom, consult with a doctor face-to-face or shop in a virtual store’. It’s true: the creation of immersive, virtual environments has enormous potential for fields which beforehand seemed incongruous with such technology. Social psychology, the study of human experience and behaviour, is one of them.


Image: Oculus Rift

For decades, the world of psychology has struggled to maintain the balance between a fundamental trade-off in research: realism vs control. Realistic, immersive experiments are fantastic for engaging participants, allowing them to act as naturally as possible and therefore yielding results that accurately represent real life. However, to achieve this vivid realism, social psychologists often have to incur massive costs and, more crucially, sacrifice the amount of control they have over the experiment. In more realistic experiments, such as those in the field, a wealth of uncontrolled factors can enter the mix and affect the behaviour, casting uncertainty over any conclusions drawn.

Historically, in the face of this dilemma, researchers have turned to laboratory experiments and questionnaires. Yet these methodologies are arguably less compelling. Participants are aware that their behaviours are being monitored, which can distort the behaviour they would usually exhibit and reduces how far psychologists can generalise their results. How can psychologists draw accurate conclusions about naturally-occurring behaviour when their data is being corrupted by participant self-consciousness?

Enter: Oculus Rift. With reports suggesting that the virtual experience of Oculus technology provides a fully real sensation for the user, researchers could potentially get the best of both worlds: a sense of realism combined with full control. But how does this work? Well, putting on an Oculus Rift headset replaces your field of vision with a digital image. When you turn your head to the left, the technology recognizes this movement and shift in orientation, and replicates them in the virtual environment. It’s quite literally like you are entering a virtual world, and reportedly, the experience is every bit as vivid as real life. Take this account:

“It looks like and feels like I’m getting a ride to the top of the wall, though I haven’t moved. No matter which way I turn my head, I see snow-capped mountains. Oh, my God. Wow. I am looking out over a frozen field. This is truly an incredible experience. When I reach the top of the wall, the ground below feels as if it’s moving like a conveyor belt taking me to the edge of the wall. I see soldiers coming up in the snow holding torches. Oh my God. And fiery arrows – ah – I’ve been hit by a fiery arrow. Oh, my God. Whoa. Fortunately, it didn’t hurt. But it sure looked real.”
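The head-tracking loop described above, where turning your head turns the view with it, boils down to mapping the headset’s reported orientation onto the virtual camera. Below is a minimal sketch assuming only a single yaw angle; this is an illustration, not the Oculus API, and real SDKs report full three-axis orientation (typically as a quaternion):

```python
import math

def view_direction(yaw_degrees):
    """Turn a yaw angle (head rotation about the vertical axis) into a
    unit view vector in the horizontal plane; 0 degrees looks down +z."""
    yaw = math.radians(yaw_degrees)
    return (math.sin(yaw), math.cos(yaw))  # (x, z) components

dx, dz = view_direction(90.0)  # head turned 90 degrees to the right
# the camera now looks down +x: dx is 1.0 and dz is (numerically) 0.0
```

Every frame, the renderer would read the latest yaw from the tracker and redraw the scene along this vector, which is why the illusion holds no matter which way you look.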

Although this sounds like something out of Inception, the potential of crafting intricate, controlled environments without sacrificing this vivid sense of realism is vast; the realism vs control trade-off could be completely wiped out. Researchers could build virtual environments designed specifically to study the behaviour they want to research. Hospitals, train stations, work offices: the environments they could create are theoretically infinite. All while maintaining the level of control they need to ensure valid, reliable results. If participants engage with the virtual environment in the same manner as they would in real life, researchers can use these constructed environments to draw conclusions about behaviour that are far more accurate than any behaviour studied in a university laboratory.

Virtual reality also opens up new doors for experimental replication. If virtual reality technology progresses the way it has done, a psychologist could in the future send the exact experimental set-up to another psychologist on the other side of the world, who could then download the experiment and perform a perfect replication in their own laboratory. Using virtual reality technology, we could replicate and extend experiments without having to worry about issues such as missing information or small environmental differences between laboratories; it would all be there, ready to use at the touch of a button.

Virtual reality technology has the potential to overhaul social psychology and the way we carry out experiments. It could, if utilised creatively and correctly, propel our knowledge of even basic human behaviour far further than any of us can currently comprehend. What’s more, as virtual reality technology advances and evolves, it is likely that its uses in science and the study of human behaviour will, too. The world of psychology has often been slow to embrace the benefits of correctly implemented technology, but this time round, the illumination that virtual reality can bring to our understanding of fundamental human behaviour simply cannot be ignored.

References

Associated Press. (2014, March 25). Facebook buys virtual reality co. Oculus for $2B. National Public Radio.

Sydell, L. (2014, March 16). Goggles bring virtual reality closer to your living room. National Public Radio. Retrieved from http://www.npr.org


Possessing humans using virtual reality


Using an Oculus Rift, a Microsoft Kinect 3D sensor and electrodes strapped onto key muscles, Yifei Chai, a student at Imperial College London, devised a system that can be used to control surrogate humans. Basically, the person wearing the Rift virtual reality headset sends signals that cause another person, wearing a head-mounted twin-angle camera and electrical stimulators, to respond in the same way, essentially controlling them. This is only a prototype and the device is far from perfect, but it is definitely an interesting first step towards a new kind of technology. Applications include empathy simulations (putting yourself in the shoes of an elder, for instance), muscle stimulation for those recovering from an accident, or even better surgical robots. And mind-control devices, of course.

It’s worth noting that Chai’s system only stimulates 34 arm and shoulder muscles, and that there’s quite a bit of delay in response. Even so, this is a powerful demonstration. Check out the video below for a glimpse.
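The control loop Chai’s set-up implies, tracking the controller’s pose and stimulating the surrogate’s muscles until their pose matches, can be sketched as a simple proportional controller. Everything here, from the names to the gain and the safety cap, is an illustrative assumption, not Chai’s actual design:

```python
def stimulation_level(target_angle, current_angle, gain=0.5, max_level=10.0):
    """Proportional mapping: the bigger the gap between the controller's
    joint angle and the surrogate's, the stronger the stimulation,
    clamped to a safe maximum."""
    error = abs(target_angle - current_angle)
    return min(gain * error, max_level)

# Controller's elbow is at 90 degrees, surrogate's at 60: 30 degrees of
# error times the 0.5 gain would give 15, which the safety cap trims to 10.
print(stimulation_level(90.0, 60.0))  # prints 10.0
```

Running one such loop per tracked muscle, 34 times over for the arms and shoulders, also hints at where the reported lag comes from: every cycle has to wait on the depth sensor, the mapping, and the muscle’s own response.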


Virtual reality for rats shows how different brain functions cooperate during navigation

Some people are better navigators than others (studies often claim, for instance, that men outperform women on average). But whether you can make your way effortlessly through the woods to reach a safe house or get seemingly lost walking home from a different bus stop, it doesn’t make that much of a difference at a sensory level. Navigation is often taken for granted, but the truth is it’s one of the most complex neurological functions of the brain, one which requires a great deal of energy and coordination. This fundamental skill is paramount to avoiding threats and locating food (the reward), and through the mechanisms of evolution, which promote survival traits, it has steadily improved generation after generation.


Rat in virtual reality. (c) UCLA

The connection between spatial reasoning and reward (anticipating and actually locating food) has been very difficult to measure, mainly because current technology doesn’t permit studying both simultaneously while an animal is moving. A team of researchers at UCLA has, however, devised an ingenious multisensory virtual world for rats in order to understand how the brain processes the environmental cues available to it, and whether various regions of the brain cooperate in this task.

The rats are placed inside a sort of cube, with displays on each side, and are trained to navigate, by running on a trackball, an environment that changes each time, in order to reach their reward (sugar water). Since the animal moves on the trackball, it actually remains stationary, but is given the illusion of movement, aided by visual and auditory cues.
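The trackball trick amounts to a simple mapping from ball rotation to virtual displacement: the rat stays put while its position in the rendered world advances. A minimal sketch, with illustrative names and numbers:

```python
def update_position(pos, ball_dx, ball_dy, gain=1.0):
    """Map one trackball reading (rotation deltas) to a step through
    the virtual world; the displays then re-render around the new pos."""
    x, y = pos
    return (x + gain * ball_dx, y + gain * ball_dy)

pos = (0.0, 0.0)
for dx, dy in [(0.2, 0.0), (0.1, 0.3)]:  # two successive ball readings
    pos = update_position(pos, dx, dy)
# pos has drifted to roughly (0.3, 0.3): the rat "walked" without ever moving
```

Because the animal never physically moves, the experimenters can record from its brain and track its licking behaviour continuously, which is exactly what the realism-versus-measurement trade-off normally forbids.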

Previously, the same team of UCLA researchers, led by neurophysicist Mayank Mehta, discovered how individual brain cells compute the distance the subjects have traveled. All animals, including humans, need to know where they’re located at any given moment in order to compare their position against a reference frame and navigate: which way is left, right, up, down and so on. How reward anticipation and reward seeking, or navigation, are connected has escaped scientists for some time.

“Look at any animal’s behavior,” Mehta said, “and at a fundamental level, they learn to both anticipate and seek out certain rewards like food and water. But until now, these two worlds — of reward anticipation and navigation — have remained separate because scientists couldn’t measure both at the same time when subjects are walking.”

Navigation requires the animal to form a spatial map of its environment so it can walk from point to point. Anticipating a reward requires the animal to learn to predict when it is going to get the reward and how to consume it. Mehta and colleagues, using their rat virtual environment, have now found a way to correlate the two.

The rat MATRIX

While the rats were navigating their environment in search of the reward, visual and auditory cues were played. When both sound and visuals were present, the rats used their legs and tongue in harmony and easily located the feed tube. Yum! This confirmed a long-held expectation that different behaviors are synchronized. When the visual cues were shut off and only sound remained, the rats’ legs seemed “lost” as the rodents walked about randomly, but their tongues showed a clear map of space, as if the tongue knew where the food was.

“They demonstrated this by licking more in the vicinity of the reward. But their legs showed no sign of where the reward was, as the rats kept walking randomly without stopping near the reward,” he said. “So for the first time, we showed how multisensory stimuli, such as lights and sounds, influence multimodal behavior, such as generating a mental map of space to navigate, and reward anticipation, in different ways. These are some of the most basic behaviors all animals engage in, but they had never been measured together.”

Previously, Mehta said, it was thought that all stimuli would influence all behaviors more or less similarly.

“But to our great surprise, the legs sometimes do not seem to know what the tongue is doing,” he said. “We see this as a fundamental and fascinating new insight about basic behaviors, walking and eating, and lends further insight toward understanding the brain mechanisms of learning and memory, and reward consumption.”

The study results were reported in the journal PLOS ONE.

SpaceX’s Elon Musk presents Iron Man-like engineering lab


A co-founder of PayPal and Tesla Motors and the founder of SpaceX, Elon Musk has gained the reputation of a brilliant entrepreneur and engineer. Many view him as a real-life Tony Stark, a comic book and, most recently, Hollywood blockbuster character better known by his Iron Man persona.

Musk, in my humble opinion at least, is in many respects actually more able than Stark. From an ethical point of view, Stark is quite despicable: while Stark chose to continue his family legacy, acquiring a vast fortune from weapons manufacturing, Musk raised his billions through high-risk businesses on a mission to better humanity, a path few people choose to follow. Then again, Musk is real, whereas Stark is pure fiction. Clearly, Musk tops Iron Man, but that doesn’t mean he can’t take inspiration from the comic-strip superhero.

Recently, Musk showcased how he hopes high-end engineering at SpaceX will take place in the near future. Combining existing technology like the Leap Motion, the Oculus Rift and precision metal 3D printing, Musk’s team has set up a virtual lab that might change the way engineers design anything from spacecraft to buildings to proteins. If you’ve seen any of the Iron Man movies, it will look familiar.

[READ] SpaceX founder envisions 80,000 people colony on Mars

Musk reasons that we currently interact with technology in a 2D frame to create 3D objects. In typical engineering projects, like designing spacecraft components, engineers first need to figure out how to manipulate their software environment before they can render what is already in their heads.

Musk presents what he calls a more natural approach to design. Using the Leap Motion, on which I have written extensively before, and the Oculus Rift, a virtual reality headset, SpaceX has developed a highly interesting environment, which can be used with a typical PC display, a projector, or even a transparent glass surface, exactly like in Iron Man. In his demo, Musk showcased the tech by manipulating SpaceX’s Merlin engine and, to top things off, used a metal 3D printer to create one of the engine’s parts, effectively pulling it from the virtual environment into reality.

Check out the presentation below.


So, what do you guys think?


Star Trek holodeck-like imaging offers a whole new perspective on virtual reality


Computer scientists at the University of Illinois at Chicago have created what can only be described as a real-life Star Trek holodeck. Granted, it’s not nearly as impressive as its sci-fi counterpart; futurist Tim Huckaby predicts it will take some ten years before a full-blown version might be created. Still, virtual reality is about to break whole new ground, thanks to innovations just like this.

Called CAVE2, it features screens eight feet high wrapped around 320 degrees. Once you step inside, you’re in a whole new world, be it aboard the Starship Enterprise, on a voyage to Mars, or on a stroll between the blood vessels of the brain. The latter is what makes it a truly useful addition to science, not just a geeky high-tech cave.

The thing is, technology and science are so advanced nowadays that data is available on a myriad of subjects. However, it’s still up to scientists, who are only human after all, to put all the pieces together. In a virtual environment where you can explore and interactively see what happens, like folding a protein or adding drugs and watching how they react, this whole process runs a lot more smoothly. Hopefully, we’ll see more CAVE2-like labs in the future.

The video below explains how CAVE2 works and presents the concept in greater detail. Enjoy!