MetaHammer is about tools and resources with impact. A metahammer is a script, a tool, a piece of software, an idea, or... anything... that has impact on what we do and on how we think.

3D HMD Watch: Oculus Rift and Beyond


  • Samsung working with Oculus on an HMD that uses a Galaxy Note 4? (11 Aug 2014)
  • Samsung GearVR – Headset holder for a smartphone. (8 July 2014)
  • Control VR – Wearable arm/hand/finger “power glove” controller (Kickstarter – fully funded). To use with Oculus, etc. (2 July 2014)

The Oculus Rift 3D HMD (head-mounted display) has generated tremendous excitement since its Kickstarter launch on 1 August 2012. The campaign accumulated nearly 10 times its initial funding goal, and the company had raised $91 million by the time Facebook bought it on 25 March 2014 for a reported $2 billion. At that point Oculus had sold around 60K original Developer Kits (DK1) and had taken some 20K orders for the yet-to-be-released DK2.

The excitement has been tempered a bit by the continuing mystery about when a consumer model (with its many promised improvements) will be released. It has always seemed imminent, but with the announcement of DK2, it would appear they are still not satisfied, and a consumer release seems unlikely for at least another year. It’s especially unnerving given Facebook’s unclear intentions in purchasing the company. We need only look to Yahoo!’s purchase of Cloud Party to see that the public’s interest in an existing product is not necessarily the same as that of a big-player investor. (The Cloud Party virtual world was closed, with Yahoo! apparently interested primarily in the company’s patents and creative personnel.)

I own a DK1 and have used it primarily for immersive experiences in Second Life (SL). There are a couple of viewers (i.e., the software used to view virtual worlds) for SL that support the Oculus Rift. The first viable one was the CtrlAltStudio viewer. It works very well, but it is an alpha viewer with no pretensions about being anything more. (It also supports Kinect control, which I have not tried.) More recently, Linden Lab has released a “project viewer” (i.e., a viewer with specific experimental features not yet incorporated into the official viewer) with support for the Oculus Rift. (The Oculus Project Viewer is not currently shown on the viewer download page, but is available at this link. I think the intent is that you request an invitation to their beta testing team. See here.) What sets this viewer apart is that it has controls within the view screen for doing most things you need to do in SL, using a sort of floating mouse cursor.

The truth is, I don’t use the Oculus very often because, as is often reported by others, it makes me a bit nauseous. I expect this is due in part to latency and resolution issues. But the experience itself is really quite engaging. The feeling of being physically present in the virtual space is intense.


Consumers’ eagerness to experience the 3D immersion afforded by the Oculus Rift has spawned an astounding array of alternatives, from established companies like Sony to do-it-yourself videos on YouTube. Among the more intriguing alternatives is the use of smartphones as HMDs. The current-generation iPhone Retina display is significantly higher resolution than the Oculus DK2’s anticipated 1080-pixel display, and most smartphones have gyroscopic motion detection. So the low-tech solutions simply use various ways to strap a phone to your head in the appropriate position.

But there are other problems, as users of the Oculus Rift have learned. Among the primary issues are:

  • Its visual isolation means you can’t see your keyboard or mouse. This makes it difficult to communicate or operate controls in virtual spaces.
  • It is tethered to the computer via a cable, restricting the user’s motion.
  • Its view is not altered according to the user’s position, only the user’s head rotation. So standing or walking in real space has no effect.
  • It is basically useless for augmented reality (AR) applications because it has no forward-looking camera.
  • There is no audio component. The user needs either speakers or separate headphones to hear spatial audio.

So I have been following the alternatives to Oculus for some time and have assembled a list of links to many of the more interesting ones.



Originally posted 29/6/2014

Touch me… there…

Haptic technology, or haptics, is tactile feedback technology that recreates the sense of touch by applying forces, vibrations, or motions to the user. An object can resist motion, or even push back as force is applied. The idea is to make an object, often a controller of some kind, feel like it would in the mechanical real world. This is a pretty interesting and potentially very deep area of exploration.

Stanford University has created an online course in haptics that uses a basic device called the Hapkit. They provide complete instructions and a parts list for making your own, though it does require the use of a 3D printer and/or laser cutter for some of the parts. Also, some essential parts, such as the core motor, may not be easy to obtain at a reasonable price. However, for the electronic VR hobbyist, this could be a worthwhile exploration into an important part of immersive technology.

The online intro course in Haptics is being offered beginning October 1. See the website for full details.

Stanford Hapkit

Photo © Stanford Univ.

Why Hi Fi?

Second Life creator Philip Rosedale and partners are working on a new VR platform called High Fidelity. It’s still in a very early alpha state, but they occasionally release small samples of their progress. What I had seen until now was less than impressive, but I was pretty amazed by this latest demo. The characters look very cartoonish, but what you need to understand is that the gestures and facial expressions are all generated in real time from the actual expressions of the users. Eye movement, raised eyebrows, the guitar strums… A top priority of the project is near-zero latency, which would open the door to the Internet being more widely used for real-time music collaboration, among other benefits.

I have questions about the point of all this, e.g., at what point is it no different from Skype? The advantage of Second Life is that you can present yourself as you are not in real life. When I’m inworld, I’m often glancing at my two side monitors or doing a quick Google search related to a conversation I’m having. With HiFi, the people I’m with would see that I’m distracted. This may be a more “authentic” experience, but I don’t use SL to recreate a real-life experience. I use text chat for the same reason: I’m much more articulate and thoughtful when I can edit what I’m saying on the fly. The “inauthenticity” of SL is precisely what makes it engaging and empowering. Given infinite possibilities, why be what you already are?

If a picture is worth a thousand words….

When addressing technical problems, it’s often a paradigm shift that makes for the most compelling breakthroughs. PiCam is an experimental new micro-camera array from Pelican Imaging intended for use in mobile devices. It produces high-resolution images by using an array of tiny lenses instead of one large one. Think of radio telescope arrays and how they gather celestial data much more efficiently than a single large instrument. With PiCam, you have an array of 12 lenses, each gathering information on an isolated sensor. The red, green, and blue spectra are gathered separately, with 4 sensors each. With such small lenses, the focal length is reduced and depth of field becomes nearly infinite.

Then the magic happens as the software combines the images. What makes this compelling is parallax. You would think multiple lenses would cause image degradation, since each lens views the subject from a slightly different angle. But the engineers use that parallax to create a depth map, almost like a stereo camera. So not only does the software correct for the angles, it can do so selectively according to the distances of the various surfaces in the scene. Among the side benefits of this process are 3D views and user-variable depth of field. Watch the video for a full explanation:
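As a rough illustration of the depth-from-parallax principle (this is standard stereo-camera geometry, not necessarily Pelican’s actual algorithm): for two lenses separated by a baseline B, with focal length f, a surface point whose images are offset by a disparity d between the two views lies at depth

```latex
% Illustrative pinhole stereo relation (not Pelican's proprietary method):
% z = depth, f = focal length, B = lens baseline, d = measured disparity
z = \frac{f \, B}{d}
```

Nearer surfaces produce larger disparities, which is what lets software recover a per-pixel depth map from the lens array.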

See also: PiCam: An Ultra-Thin High Performance Monolithic Camera Array

Oculus Rift Not A Game Changer (probably)

OK, I’ve tried the Rift developer kit. The cool factor is very high. Possibly too high. I got the thing to use in Second Life, which does not yet officially support it, but plans to in the very near future. In the meantime there is an independent developer viewer that minimally supports the device (CtrlAltStudio Viewer). It’s a bit clunky, but definitely worth a try, and eons better than trying to compile a test viewer yourself.

In the SL Oculus Rift Users group, the feeling is generally very positive, but there have been numerous reports of motion sickness, with which I can personally concur. There does seem to be some inevitable tweaking of video settings required to render maximum resolution and reduce latency, both of which contribute to the dizzy-making motion. When I first managed to get it working, it was actually better than any attempt I have made since. This is in part due to my video card, a top-of-the-line ASUS that I normally use with 3 monitors. It has 2 DVI connectors as well as HDMI and DisplayPort. I disconnected the monitor on the DisplayPort connector and ran an adapter cable to the Rift’s DVI connector, which should have worked, but did not. And I have since been unable to get my 3rd monitor back online. No idea what the issue was, but it could be a problem in the monitor itself or the DP connection.

In any case, I did get the Rift working on the second DVI connector. I have to say, trying to use the Rift in a normal screen view, either as a normal monitor or in SL, is nearly impossible. I had to start the viewer on the primary screen to log on and so on, and then drag it over to the secondary, using the keyboard shortcut to toggle between “normal” (i.e., nearly unusable) and 3D Rift view. Thank goodness Win7 lets me drag a window to the top of the screen to maximize it automatically. I also wear glasses. I’m farsighted, so the standard-issue lenses in the Rift work fine for me. (It comes with 2 sets of corrective lenses for nearsighted users.) But I was having to take off the viewer in order to see the screen in normal mode, which meant switching back to my glasses.

A later test actually used the DisplayPort connector to the Rift’s DVI, but I was still unable to run 3 monitors as before. I had to configure the Rift as the secondary display and disable the one I had been using. Grrr… (No blaming Oculus for this, mind you. Just saying. Users without multi-display graphics cards are going to have real issues with this.) For whatever reason, the DP connection was noticeably lower resolution and laggier than the DVI. Again, I presume it’s an issue on my end, but anyone experiencing the Rift for the first time with performance like that is not going to be happy.

Assuming the best of all possible worlds… The experience itself is both awesome and weird. The Rift assumes a 3D world in which you physically turn your head to look left and right and up and down, somewhat like SL’s mouselook. But in SL you are normally in 3rd person view and you always move in the direction you’re facing. In other words, the world basically moves so it’s facing whatever direction you are. With the Rift (and this is most obvious in the included demo environment), no matter what direction you’re facing, the UP directional key moves you north, RIGHT moves you east, and so forth. If you want to face the direction you’re moving in, you have to physically turn your head (at least) in that direction. If you are walking on the street, this is natural, but if you are seated at a computer (especially tethered to it with a wired keyboard), turning your body to turn your character’s orientation is not only inconvenient but contributes to the vertigo. If you have any noticeable lag at all, it’s just not fun.

But… despite the problems, there is a truly astounding sense of being physically in the space. When you walk up to someone, their head is at eye level with yours and you can look at them as though they were physically next to you. There is no way to explain the feeling without experiencing it. Will it change completely and forever the way we work and play in 3D virtual worlds? I don’t think so. At least not for a while. There needs to be a visual control interface (especially for people who are not touch typists). It needs to be easier to hook up and use.

The consumer model is expected to have significantly better resolution and overall performance at a similar or better price. Also, SL’s Rift viewer should by then be much more fully developed than the CtrlAltStudio viewer is. For most users, I strongly recommend waiting for the consumer model of the Rift and at least a Release Candidate SL viewer before investing time and money in the technology. I do hope it lives up to its potential. But I don’t see it happening as dramatically as some early adopters might like.

Microsoft Research shows Holodesk

Augmented reality tech from Next at Microsoft.

Decentralization and Socialism

N.B.: This is an unfinished article written in July 2009.

One of the most revolutionary and evolutionary changes that information technology has brought about is the decentralization of networked resources. I am writing this post with a keyboard connected to my local PC, which is connected to the Internet, where a server hosting my website runs the word processing software. I don’t know (or care) where on Earth the physical server is, or whether the processed file will be kept there or somewhere else. This document will be instantly visible across the globe for anyone to see and comment on using a similar process. In the early days of the Internet, you had to log in to a remote computer to get data from it. Now, the various elements on this page might be served from a dozen different systems anywhere in the world. Ads may come from a Google server in California, graphics from a Flickr archive, calendar information from Facebook, etc. All seamlessly delivered in an integrated page.

Modular computing has been around for a long time, but it has mostly been confined to large esoteric corporate processes. Theoretically, each sentence in this article could come from a different source and the reader would have no way of knowing that.

What makes modularity possible is the energy behind those who see its potential. The tech pioneers saw the power of pulling in the computing power and software resources of another machine to add to their own projects. They realized early on that in a high-speed networked world, there was no particular need to have applications living on every local machine. When friends talked, they realized they had useful modules to share, and thus was born the idea of Open Source. The furthering of the technology became such a powerful motivator that it even overcame the motive for profit and ownership. As the technologies developed and stabilized, the profit motive re-emerged. Now that decentralized computing is becoming commonplace, even Microsoft is finally moving towards hosting its Office applications online. No need for you to install Word and Excel on your computer, taking up disk space and making you jump through hoops trying to decide what features you might need as you install it. Just open it online and let them maintain it. (This also guarantees the company’s control of its products and licensing.)

The advent of virtual worlds and social networking is breaking down traditional models of ownership and individuality. In a virtual world, it’s fairly easy to imitate a product. If I see an automobile in Second Life, I probably have most of the skills to build a copy of it using the tools that are readily at hand. In fact, that is largely how I learned to build in the first place. But if I do, can I then sell it at a lower cost? If anyone can make an automobile, clothing, or a house, how can a maker of these things make a living? What can I add to this object that is unique and that only I can provide?

LSL Snippets: Toggle

Function: Toggle Switch

Description: Alternates between two actions, like turning a light on and off.

Clarification: A toggle is designed to run two actions alternately. In its simplest form, it will start or stop an action. But it’s important to think of stopping an action as starting another action that negates the previous one. For example, we think of turning off a light as stopping the flow of electricity to the light using a switch. In SL, however, a point light emitter is not a continuous action, but rather a steady state. There is no electricity running to the virtual lamp. It is simply in a light-emitting state or a non-light-emitting state. Similarly, a rotating object is in a moving or non-moving state. In order to change states, you have to set a new state using the same function used to set the previous state. Thus, if an object is rotating using llTargetOmega and you want to stop the rotation, you must invoke the llTargetOmega function again with the spin rate set to zero to actively stop it.
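To make the rotation case concrete, here is a minimal sketch (not from the original article) of toggling a prim’s spin. Note that “stopping” is just setting the rotation state again with a zero rate; the axis vector and gain values here are arbitrary illustration choices:

```lsl
// Sketch: toggle a smooth client-side rotation on and off by touch.
integer spinning = FALSE;

default
{
    touch_start(integer num_detected)
    {
        spinning = !spinning;
        if (spinning)
            llTargetOmega(<0.0, 0.0, 1.0>, TWO_PI, 1.0); // spin around Z at one revolution per second
        else
            llTargetOmega(<0.0, 0.0, 1.0>, 0.0, 0.0);    // zero spin rate = the non-moving state
    }
}
```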

This works no matter how the function is invoked. A light or rotation can be started or stopped using touch_start(), timer(), sensor(), listen(), etc. All of these are events triggered by various interactions.


On-off, like a light switch. Sets the PRIM_LIGHT condition to zero or nonzero.

Swap textures.

Open/close a door.

Start/stop a sound, or rotation, or any continuing action.

Activated by:

touch_start, collisions, anything that triggers a function.


integer on=FALSE; //Place at top of script to define global variable. The variable "on" is either TRUE or FALSE. If it's not TRUE, then it has to be FALSE, and vice versa.
//For a light switch, the default state of the light is normally off, which is the same as saying "not on," or "on=FALSE".

default
{
    touch_start(integer num_detected)
    //touch_start signals a process that initiates once when the object is touched. You can use the same process with a collision_start() or sensor() event.
    {
        if (on) //Is the light on? If so, do the following:
        {
            llSetPrimitiveParams([PRIM_LIGHT, FALSE, <1.0, 1.0, 1.0>, 1.0, 10.0, 0.75]); //turns prim light off
            on=FALSE; //Resets the variable for the next touch_start.
        } //End if and stop.
        else //If the previous condition was not true, do the following:
        {
            llSetPrimitiveParams([PRIM_LIGHT, TRUE, <1.0, 1.0, 1.0>, 1.0, 10.0, 0.75]); //turns prim light on
            on=TRUE; //Resets the variable for the next touch_start.
        } //End else.
    } //End touch_start.
}

//—–End Snippet—–


Instead of using the on=TRUE/FALSE test and resetting the variable in each branch, you can substitute the following:

on = !on; //The exclamation point means "not". If on was TRUE, on is now not TRUE (i.e., FALSE), and vice versa.

This automatically switches the variable between TRUE and FALSE. Thus, the more concise snippet:

integer on=FALSE; //Place at top of script to define global variable.

default
{
    touch_start(integer num_detected)
    {
        on = !on; //Flip the switch first, then act on the new value.
        if (on)
            llSetPrimitiveParams([PRIM_LIGHT, TRUE, <1.0, 1.0, 1.0>, 1.0, 10.0, 0.75]); //turns prim light on
        else
            llSetPrimitiveParams([PRIM_LIGHT, FALSE, <1.0, 1.0, 1.0>, 1.0, 10.0, 0.75]); //turns prim light off
    }
}

//—–End Snippet—–
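The same toggle pattern covers the other cases in the list above. As a hedged sketch (not from the original article), here is a texture-swap version, where "texture_a" and "texture_b" are hypothetical names of textures assumed to be in the prim's inventory:

```lsl
// Sketch: alternate between two inventory textures on touch.
// "texture_a" and "texture_b" are placeholder names.
integer showing_a = TRUE;

default
{
    touch_start(integer num_detected)
    {
        showing_a = !showing_a;
        if (showing_a)
            llSetTexture("texture_a", ALL_SIDES);
        else
            llSetTexture("texture_b", ALL_SIDES);
    }
}
```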

Emerging Technologies

N.B.: This article was written in October 2009. It represents unrefined notes and should be considered unfinished.

I just attended an online voice seminar from the New Media Consortium (NMC) titled Talking Emerging Technologies With Bryan Alexander. Alexander is Director of Research for the National Institute for Technology and Liberal Education (NITLE) and is currently the Chair of the Advisory Board for the 2010 NMC Horizon Report, an important annual assessment of technology and learning. In this hour-long conversation with NMC’s Alan Levine (VP of Community and Chief Technology Officer for NMC), there were so many ideas laid out that my head nearly exploded. The presenters were very responsive to the back chat, which was also informative and insightful.

Emerging technologies are coming at us at an astounding rate. Things like Twitter and podcasting are already pretty familiar to us all by now. I had just heard about Google Wave, a new collaborative environment that promises to change the way we do everything in a networked world. These guys are already thinking past that: they see it as having great potential, perhaps, but not necessarily as the prime mover into the next decade of technology.

Here’s an abbreviated list of topics:

Virtual Memory

N.B.: This post was originally written in January 2009.
A recent article in the PrimPerfect blog has raised the important issue of the ephemeral nature of creative activity in virtual worlds. As one who has worked in libraries and government agencies, I am reasonably conscious of records retention and archives. In SL there are now over 600 documented art galleries. The work being done in world is not all great, but some of it is truly extraordinary and should be preserved. The loss of important exhibition spaces at Princeton and elsewhere highlights the urgency of creating a cost-effective means of archiving works and installations.

An argument can be made that ephemerality is the nature of this work, but that rather depends on the intent of the creators. I think a lot of artists would like the opportunity to do a retrospective show in 10 years. We can save copies in inventory and, using certain software, we can even download and back up the work to some extent and place it on an open sim. That works for small, discrete pieces with full permissions. But what of large installations like those of AM Radio, with lots of unlinked objects? AM’s installation “Beneath the Tree That Died” existed in the University of Kentucky’s Art Department gallery space for a few months, and then went away. AM is accustomed to moving his environments around and recycling them into revised versions in other places, so they aren’t necessarily lost forever. But the configuration, the context, will be different, and that makes it a different work. A different vision. When I see an amazing show close, like the installations at NPIRL’s Garden of Earthly Delights, I can’t help feeling sadness at losing so much creative work. Just because a work still exists in someone’s inventory doesn’t mean it can ever be reproduced. (Another example is Sue Stonebender’s remarkable Zero Point, an installation extending several hundred meters into the sky that took two years to create and was inadvertently returned to her inventory one day. It was so vast and complex, there was simply no conceivable way to archive the whole thing. I am happy to say that she has made great progress in rebuilding it.)

Gallery spaces like the University of Kentucky’s can provide a place to recycle some of the best work that has been removed from its original location. AM Radio’s installation was new, but used recycled objects from previous works. Similarly, the next show will feature a reworked version of a remarkable piece by the innovative SL sculptor/storyteller Bryn Oh. (Bryn’s work features layers of detail and meaning that reward spending time to discover them.)

But what of the long-term archiving of creative work? I would like to see an opensim foundation dedicated to keeping large digital works in perpetuity, but I doubt such a thing would work. As the technology advances, the rendering engines will change and the current technology will become obsolete. I fully expect that most of the content in SL will not exist in five years.