What VR could, should, and almost certainly will be within two years [pdf] (steampowered.com)
152 points by modeless on Jan 18, 2014 | hide | past | favorite | 74 comments


In case anyone missed this amazing optical illusion from the article, it's worth looking at. I've never been so visually/mentally conflicted.

The center pieces (grey on the left, and yellow on the right) are actually the exact same color.

   http://i.imgur.com/UYRtqA5.png
My brain simply cannot accept it, even after confirming it w/ photoshop.


Very nice. If you use the eraser tool to shave off a circle around the center pieces, you can watch them get desaturated. Works particularly well on the "yellow" one. Here's the result, but I recommend not opening it and doing it yourself to see the desaturation in progress: http://i.imgur.com/WqFwLVX.png


You rule. Because I'm lazy.


It's a very good example of the fact that our brain processes information in a _top-down_ manner: we take into account much more information than the signal itself. It's the reason ventriloquism works so well. We see and hear what we expect to see and hear.


Just a nitpick, but it's not exactly the same color: the two are a few shades apart. But yeah, both of them are yellow-ish according to my color picker, even though the left one looks gray... Interesting...


Here's one I saw today that blew my mind:

http://i.imgur.com/RiIqWFN.jpg


I find it hard to believe. I know it's true. I just don't believe it.


It does make sense if you think about it. In the "grey" version, you perceive it as grey because it looks like there's a transparent yellow layer over it, and based on the surroundings, the element itself appears to be grey rather than yellow (even though there is a yellow tint on the object due to the layer).

But, with a blue transparent layer, there is no yellow to add on from the layer, so the object itself must be yellow in order to show through as such. So our brain makes the appropriate adjustment. Very smart. We see what the actual object probably looks like, rather than the exact colors.

The classic optical illusion of a Rubik's cube with one side in shadow is basically the same effect.


I'll take this opportunity to share another amazing optical illusion:

  http://9gag.com/gag/a9d1jxj
I wonder if dynamic optical illusions such as this one will prove relevant to VR somehow.


Has anyone thought about using this new VR tech outside games? Maybe games will perfect the tech, but there's so much potential beyond them.

For example, the OS desktop metaphors could finally be taken to the next level. Files, recycle bins, folders: these were concepts mapped in the 80s to try to transfer the office onto a 2D computer screen.

But now we can simulate a true desktop. Imagine you enter a room with a nice wooden desk in front of you. On the desk are documents that could be web pages, videos, Word docs, anything. You touch one and a floating screen appears to display its contents. To your right is an infinite-capacity filing cabinet. A trash bin sits under the desk. It's like the real world but without the limits of the real world.

Another interesting effect is memory. I remember reading somewhere that in ancient times people would memorize lots of data by imagining walking through a house. As you walk, you relate objects, like a closet, to the items you want to remember. In this way you memorize the data and its order. This is where the phrase "in the first place... and in the second place..." came from.

And it's something I've noticed in real life. My desk looks like a mess to everyone except me. There's an order to it that I understand, and I can find things. A similar thing happens with the mess of icons on my desktop. I can often find something because I know I put it in some folder in some place on the desktop. But on both my real desk and my Windows desktop I sometimes need help finding things and use the search tool.

The VR desktop can be the best of both worlds. It allows that natural chaotic organization I do with my real world desktop. But a floating search box can appear anytime to help me.


Initial driving lessons, pilot training, Movies where you're sat inside the scenery. Virtual conferences, meetings, lectures, school lessons. Virtual strip clubs. Virtual weddings. The list is almost endless. If something can be done in VR, somebody will program it.

It's gonna be so massive and so addictive, I predict within ten years there'll be news stories about people dying in VR from malnutrition or heart failure, and clinics specialising in VR addiction.


Plus a host of other side effects: cybersickness, dissociation (a serious mental disease), a lower sense of presence in objective reality, and unstable gait.

http://www.quora.com/Oculus-Rift/Virtual-Reality-Oculus-Rift...


Just like in Inception where it's mentioned how some people have become addicted to the dream world. I can definitely see this happening.


Some of the most addictive games you can play today are some of the least realistic. I don't expect VR games to be any more addictive than non-VR games.

Here's a news story from more than 6 years ago talking about South Korea:

> Last year alone, at least seven people died from deep vein thrombosis, heart failure, or exhaustion while playing online games....

> The Information & Communication Ministry began carrying out an annual survey of game addiction in 2002, and has set up counseling centers in eight cities to help addicts. Game companies such as Seoul-based NCsoft Corp. also spend hundreds of thousands of dollars a year each to help finance some 40 private counseling centers.

http://www.businessweek.com/stories/2006-09-10/online-gaming...


Oculus Rift driving lessons with support for those wheels would be amazing.


Every time I consider the VR / 3D desktop, the big problem I run into is: what productivity, efficiency or ease-of-use gain is there? The only benefit I see is just a gee-whiz factor.

Why would I want to have to reach over and open a virtual filing cabinet? I can click a 2D folder more easily and quickly, and view the contents faster in a line list. Reproducing archaic physical storage and work systems in a 3D space is going backwards, and that includes a fake wooden desk with a wastebin under it.

It's not so much the headgear presentation I view as an issue, but rather the theory of the "3D" desktop. My prediction is that over the next 20 years, companies will attempt over and over again to push consumers into it, and it'll fail every time because there is no real value gain. What will happen instead is that 2D desktop interfaces will multiply, arranged in a faux-3D space that's easy for the average person to use (by strictly limiting the 3D spatial skill and motion required). Something much more akin to flipping through the home pages on Android.


I think that since adopting the 2D desktop metaphor, we've all started working with way too much data for it to be reasonably represented as a 3D world. The trend in OS design recently seems to be away from representing physical objects and more towards

  * dashboards (Windows 8 start screen and Mac OS dashboard) 
  * lists (browser tabs, Windows taskbar jump lists)
  * search
I think we're evolving towards a common visual language for representing information, and the concept of a physical piece of paper or a table on which to store it is going to become a distant memory.


>I remember reading somewhere that in ancient times they would memorize lots of data by imagining walking through a house.

Sherlock makes use of this as well :)

http://en.wikipedia.org/wiki/Method_of_loci

It actually seems quite legit. I'm tempted to attempt it. I wonder if I could imagine my code/projects this way: visualizing each file, each function, each block as physical objects in a mind palace.


I hope it's legit, I was taught this method in school, at around 13 years old (now 31).

I don't use it very often, but when I do, I picture the same house I lived in at the time. I wouldn't necessarily say I can remember lots of data with it, but memorizing on the order of 20-50 items doesn't take too long.


If you are interested, 'Moonwalking with Einstein' is all about memorization contests and it talks about the method of loci.


Thanks!


> It's like the real world but without the limits of the real world.

Except that you have lovingly recreated those limits by replicating an archaic paper-based office. The desktop metaphor was a horrible, crippling kludge to avoid explaining to people how their computer manages data. We can do so much better.


There are some projects worth keeping an eye on --

ibex VR desktop - https://developer.oculusvr.com/forums/viewtopic.php?f=29&t=4...

VR Launchpad - https://developer.oculusvr.com/forums/viewtopic.php?f=29&t=4...

The thing to consider is the field of view, and just how huge everything you're looking at is while wearing the Oculus. The best example is the Minecraft block. Imagine each one as a giant box you would lift up using both arms.

The second thing we miss everywhere else is depth perception. Certain tasks, like building objects, are greatly helped by depth perception.

In those terms, your perception in the Oculus mirrors the real world more so than a flat, 2D screen or book.

I think we will see people who put this thing on when they wake up, and take it off when they go to sleep. At some point perhaps not even then.


I'd be happy to have one screen hooked up to the computer for display purposes and my real display to be an oculus-type VR device if they can get the resolution high enough. Just being able to develop software without being limited to the number of monitors or their dimensions would be a godsend. I'd get one for work and for home. Even better if they can eventually make it wireless so that I can use the one at home for work when I need it, and to watch movies when I don't. Being a sports fan, the ability to watch games without having parts of the screen obscured by scores and ticker stats would also be a plus.


I'm looking forward to being able to work on a tropical beach while still being in the office.


I get a bit giddy every time I read about the upcoming generation of VR. There are enough big players talking about it now, with really impressive and affordable hardware to back it up.

The metaverse concept is also probably the one I'm most interested in - what I really want from high-res VR is the ability to get rid of monitors altogether.


>what I really want from high-res VR is the ability to get rid of monitors altogether.

I don't want to get rid of them, I want to emulate an arbitrary number of them within VR.


How would you redesign the terminal window if it existed in VR space?

* No need to worry about lines wrapping in small windows.

* A spiral of text.

* An xmonad-like hemisphere of tiled terminals.

* Exploit depth as importance (shrink things you want to keep an eye on but don't need to read carefully).

What about input?

* Finally kill off the mouse and use eye tracking instead.

* Put cameras on the HMD to decode hand movements and get rid of physical keyboards/mice/touchscreens (virtual chording keyboards, all surfaces become touch screens).

* All input devices become wireless and powerless because they collapse into haptic props. Want a control panel for a flight sim? Glue a bunch of fake buttons to a wooden stand and let the cameras register the input. Same thing with joysticks.

>what I really want from high-res VR is the ability to get rid of monitors altogether.

What I really want from high-res VR is to collapse my desktop/laptop into my HMD.


Everyone is trying to "get rid of the keyboard", but it's the most reliable way I've ever seen to get large amounts of text into a machine.

Having well-made key switches and good placement is probably the best thing next to just reading what I mean from my mind.


People speak at ~100-180 WPM (words per minute), with American Sign Language ~200 WPM[1]. Professional typists type at 50 to 80 WPM [2]. Here is a video of someone signing at 120 WPM, he looks like he is moving in slow motion. [3]

Everyone is trying to get rid of the keyboard because:

1. they take up a lot of space,

2. they typically require a desk / aren't mobile-friendly,

3. they only support fast entry of a limited number of characters,

4. are far slower than the theoretical best (for instance people can talk much faster than they can type),

5. many of the non-alpha keys require the user to look away from the monitor.

> Having well made key-switches and a good placement is probably the best thing next to just reading what I mean from my mind.

In theory there are input devices that allow far more characters, at higher speed, with fewer errors. Keyboards have stuck around both because they are a great technology and because of path dependency. They aren't going to stick around forever.

[1]: http://www.dbcusa.org/index.php/About-Us/

[2]: http://en.wikipedia.org/wiki/Words_per_minute

[3]: http://www.youtube.com/watch?v=Cj7OpQEu5-w


> People speak at ~100-180 WPM (words per minute), with American Sign Language ~200 WPM[1]. Professional typists type at 50 to 80 WPM [2].

That gap closes very quickly if you have to say anything other than words (versus typing it). Saying numbers or symbols is far slower than typing them, as is spelling out something a computer doesn't know how to spell. I'd bet that, unless you had an optimized "dialect" for coding or a language designed to be efficiently spoken, a keyboard will beat a human speaking code every time.

Even being very generous with the computer's ability to interpret open/close parentheses and brackets intuitively, having different terms for initial capitals versus all-caps, and assuming automatic indentation and newlines:

"include s t d i o dot h, blank line, int main paren int argc comma char pointer argv bracket bracket paren brace printf paren doublequote capital Hello world exclamation mark backslash n doublequote paren semicolon return zero semicolon brace"


Input isn't always the bottleneck. I think far slower than I type. Actually, a keyboard with editing commands (cut, paste, word select... the basic vim/emacs commands) and special keys, along with a mouse and voice input, could be optimal.

Programming is a special case, but I think programmers can cope. The languages might have to change. IIRC, C still uses ";" instead of newlines because that was optimal for punch cards, but more modern languages often don't.

Most people just want to churn out emails and tweets, though.

The big issue is that people use much more complex grammar when they speak than when they write. It's fine if you're listening (the human brain is really good at parsing spoken words), but reading what the average person will produce on a speech recognition system will be brutal.

Example: It's amazing how people speak - they use split phrases, and they have long run-on sentences; often with embedded subclauses; which go on and on and are really quite complex but then they trail off when they forget what they were talking about. Oh yeah, I think it was how speech recognition is might have unexpected consequences or something.


As soon as we can talk to computers to code, we can actually start making a language for it.

Think "include I-O, method int main take int argument count, character pointer values. Body print Hello world, Exclamation mark. Edit line 6, capital w for world. Move line 8. Return zero. Compile and run."

Start adding in gesture and stuff and you can get rid of the 'moves' and 'edits'.

And definitely stop thinking in the old-fashioned C way of truncating every variable to some bizarrely short name, and start speaking properly. Most other languages have already given up on this readability disaster.


> I'd bet that, unless you had an optimized "dialect" for coding or a language designed to be efficiently spoken, a keyboard will beat a human speaking code every time.

More or less what this guy did: http://youtu.be/8SkdfdXWYaI

It seems to me that languages with a more regular syntax are going to have a dramatic advantage here.


> That gap closes very quickly if you have to say anything other than words (versus typing it). Saying numbers or symbols is far slower than typing them

Might be fixable in a language with a limited variety of symbols, such as a concatenative language or a somewhat reimagined Smalltalk-ish language. Then the few common symbols get assigned to, e.g., click consonants.


I'm learning sign language at the moment (my son is deaf, and we live in Switzerland, so we're learning SDGS [Schweizer Deutsch Gebaerdesprache]), and I believe you could be right on this, although I don't know what those particular symbols might be at the moment (subject to further research).

Having also used VR in the mid-90's (Virtuality systems etc), I am pretty excited about the movement in VR (oculus) and AR (meta) at the moment. Interesting times!


> People speak at ~100-180 WPM (words per minute), with American Sign Language ~200 WPM[1]. Professional typists type at 50 to 80 WPM [2]

I'd wager that I can enter computer code at a higher information rate via keyboard than speaking. "Open-brace" takes a lot longer to say than to type.


Of course, but nothing is forcing you to say "open-brace" for this character. There was a video of someone here not that long ago who customized speech recognition software to use easily pronounced sounds for this.

While it would still take longer to say, for example, "koi" or "roi" for ( or ), it reduces the time significantly, making it a viable option for programming input.


Actually, don't forget about stenographers. Court stenographers must type at least 225 WPM. It seems it's not that keyboards in general are slow, just the particular kind we have today.


If I were always perfect on the first draft I'd agree with you. But keyboards are far better suited for editing than voice is.


Lately I've been thinking about whether you could simulate enough of the physicality of a keyboard by electrically twitching the muscles in your fingers to provide some back resistance.


On a related note, the best practices release for the Oculus:

http://static.oculusvr.com/sdk-downloads/documents/OculusBes...


> Content available for use on the Headset produces an immersive virtual reality experience, and users may have reactions to that experience, including simulation sickness (similar to motion sickness), perceptual after-effects, disorientation, decreased postural stability, eye strain, dizziness and/or nausea, and feelings of depersonalization (feelings of watching yourself act) or derealization (feelings that the external world seems unreal).

O_O


I'm really interested in its usefulness as a learning tool. An immersive environment designed to teach a new language could be incredible and fun.

Also... sex stuff.


I always thought ubiquitous VR was a race between porn and games, looks like games will win.



Is that where SteamOS wins? The only platform to have full Oculus support + a major game studio behind it.


Unity can already publish Oculus-compatible games for Windows, Mac and Linux. I think Unreal Engine can too. Big game studios use those engines.

SteamOS might be able to optimize for the Oculus Rift in ways that Windows and Mac OS can't. I guess we'll see.


This could really hurt consoles. But this also requires a really strong machine, which costs a lot, which means it's harder to charge Apple-style margins on top of it.

Also, a new platform means reducing the value of the software available on Windows/Mac, since everything needs rewriting.


> This could really hurt consoles.

Absolutely. I'm really disappointed by the lack of 4K in the Xbox One and PS4[1]. When Blu-ray was new, it was the PS3 that really drove adoption. Now the newer consoles will drastically lag behind the state of the art (compared to PCs), which will be exacerbated by the relatively long lifecycles of consoles.

I feel that once consumers experience a VR demo in the next 2-3 years and start seeing 4K games on PC (both perhaps as in-store demos), going back to their consoles is going to be very disappointing.

[1] http://reviews.cnet.com/8301-9020_7-57592390-222/why-next-ge...


Thinking about this, maybe MS has a solution: combining VR with Project Spark, its game-creation platform (with really good demos). I'm not sure about the business model and hardware, but the ability of users to travel and play in virtual worlds created with and by friends, and the huge amount of unique content such a platform promises, seem really exciting.


The state of the art in virtual audio is keeping pace with these developments. I've had some experience with research headphones that combine HRTF models with head tracking, and the result is uncanny, except that the environment presented to your ears has to match the strong priors from your eyeballs. So it will be awesome to experience virtual audio and video at the same time.


The biggest long-term impact that VR will have on humanity won't be entertainment, it will be on research. VR and brain-machine interfacing enable navigating data and leveraging patterns in a way that current interface technologies (e.g. mouse, keyboard, GUI) only hint at.


That's really exciting. Looks like someone is actually going to nail VR this time around.


Valve + Oculus + Carmack. I agree, they're going to nail it. This is the first time that VR doesn't seem like a gimmick, but rather the beginning of something great.


I'm curious - has anyone tried one of these head-mounted displays while on a psychedelic? Do they still get that primal sense of "being somewhere else" as described in the PowerPoint?

For example, it is impossible for me to get immersed in movies while on psychedelic drugs; but when I'm sober I definitely can get "sucked into it". I wonder if this phenomenon applies to VR as well.


There's a whole subreddit dedicated to this sort of thing, /r/trees3d


This seems to just be centered around pot, not psychs :(


Would it be a good idea to have a Y Combinator for the games that need to be made? I imagine that if there existed a really good VR SDK, with distribution on Steam, the cost of making such games could drop. I know the indie game industry exists now, but the VR industry is going to need a breakout game, and a Y Combinator-style funding and distribution model might work.


It's really easy to start with the Oculus SDK, as it's already integrated into all major (and minor) game engines, such as Unreal/CryEngine/Unity3D. So there are lots of people, both indie and professional, already developing VR games (or porting existing ones). See http://www.riftenabled.com/admin/apps/

Also, there isn't a market for VR games yet, as consumer VR isn't yet available. Most developers are just experimenting and waiting for the final hardware.



Waitaminute! Is this the same Valve who FIRED Jeri Ellsworth last year because Gabe didn't see any value in the VR/AR work she was doing, to the point that he just let her have the IP rights to her prototypes as she left?


It's not that they didn't see value in it. Augmented reality and virtual reality are two different things. Valve believes that AR technology will be harder to develop than VR, and consequently that VR will become feasible and popular before AR. They chose to go after VR first, postponing AR for later. Jeri Ellsworth didn't want to wait, so Valve let her leave and take all the IP with her to try AR on her own.


Also, there is probably no way to say this without sounding crass, but after listening to her tell her side of the story, it seems like she wasn't a great fit for Valve's obviously very particular culture.

I don't know if that is a good thing or a bad thing, but sometimes it is better if two parties split so they can go do their best work separately.


Valve had two teams -- Abrash's VR and Ellsworth's AR. They decided to bet on VR and Jeri was the unfortunate result of them focusing their resources.


Abrash says in the slides they're not actually working on a device internally, except for research. I could be wrong but I remember reading that Ellsworth's team was developing a product.


They're not working on a production device, but they have an internal prototype which the few third parties who've tested it consider mind-blowing, even compared to Oculus's Crystal Cove. It might be the result of Ellsworth's work, though.

https://twitter.com/DaveOshry/statuses/423961443889717248 (Palmer is Palmer Luckey, founder of Oculus VR)

https://twitter.com/TheDavidHensley/statuses/423591891171426... https://twitter.com/TheDavidHensley/statuses/423592847531466... (Tripwire, Killing Floor & Red Orchestra)

http://garry.tv/2014/01/16/steam-dev-days-day-1/ (Garry's Mod author)

http://www.reddit.com/r/oculus/comments/1vc9gz/im_a_dev_that... (random schmuck who got to try it)


Jeri's CastAR glasses were Kickstarted: http://www.kickstarter.com/projects/technicalillusions/casta...

A really cool innovation in itself, using retroreflectors and micro-projectors.


According to the presentation, Valve sees tremendous value in VR, but doesn't necessarily want to be the one to develop the hardware themselves. What you describe sounds completely consistent with that.


That they're working with Oculus VR and sharing their knowledge is super awesome!

> By showing them a prototype with low persistence, we convinced Oculus of its importance, and the lack of blur in Crystal Cove is a direct result of that.


Who are the major players making awesome strides right now in VR?


Valve has done some foundational research and they've already released VR support in Steam, but don't plan to sell hardware. Oculus is designing hardware and working with Valve along with many other game devs. They will likely release an awesome consumer product in about a year.

Oculus + Steam OS will be the best VR platform for the foreseeable future. Everyone else is at least a year behind; probably much longer than that.


Here's a dark horse which might become interesting soon: http://highfidelity.io/


Of the programmers who made Quake, John Cash went on to lead WoW, probably the most inhabited shared reality yet; John Carmack went on to work on Oculus; and Michael Abrash ... is doing whatever he's doing here - I'm still reading the paper.



