... to "Personal Vision"... part 2

The hypotheses suggested by information on the web


One more thing, to answer some friends: none of what you will find written in these pages — this project, or rather these reflections — is taken from the "Google Project Glass" project. In fact, their hypothesis is linked to a vision of Augmented Reality, a term coined in the early 1990s by the researchers Tom Caudell and David Mizell, which identifies a particular extension of virtual reality. It consists of superimposing, on the reality perceived by the user, a virtual layer generated by a computer system in real time. In essence, the user's perception of the world is augmented, enriched by reconstructed virtual objects that add information to the real environment. We speak of an extension of virtual reality because in this situation the user continues to perceive the real environment, onto which digital images or ad-hoc data can be overlaid and integrated, enriching reality with information in order to resolve complex situations. The purpose, then, is not to replace the real world, as in the case of virtual reality, but to extend it. The adjective "augmented" defines precisely this increase in the level of knowledge offered about the surrounding reality: these solutions provide the overlap and combination of the real world with the digital world. This type of vision increases the user's perception of the environment, providing visual information that the user cannot directly detect with his senses.


The real environment and the virtual one co-exist, and the user can move freely in the real world, with the option of interacting with it. In my opinion, for now, the greatest difficulty in AR applications (the QR code is also an example of augmented reality) — if the system is to work and the application to be useful — is that real content and virtual content must be precisely aligned and synchronized. All this requires precise calibration of the video camera with the tools used, so that the virtual object, processed by the computer, appears to lie in the real environment with precision and accuracy and is thus correctly perceived by the user. I recommend the in-depth studies carried out by the Human Interface Technology Lab (HIT) at the University of Washington in the early 90s by the team of Professor Thomas Furness.
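To see concretely why calibration matters, here is a minimal sketch (my illustration, not part of the project) using the standard pinhole camera model: a virtual object is drawn at the pixel where its 3D position projects, so even a small error in the estimated focal length shifts the overlay by several pixels. All numbers (focal length, image center, point position) are illustrative assumptions.

```python
# Sketch of AR registration with a pinhole camera model.
# A virtual point is overlaid at the pixel where the camera model
# projects its 3D position; a calibration error misplaces it.

def project(point_xyz, focal_px, cx, cy):
    """Project a 3D point in camera coordinates to pixel coordinates."""
    x, y, z = point_xyz
    u = cx + focal_px * x / z
    v = cy + focal_px * y / z
    return u, v

# A virtual object 2 m in front of the camera, 0.5 m to the right.
point = (0.5, 0.0, 2.0)

# Correctly calibrated focal length vs. one mis-estimated by 5%.
u_good, v_good = project(point, focal_px=800.0, cx=320.0, cy=240.0)
u_bad, v_bad = project(point, focal_px=840.0, cx=320.0, cy=240.0)

print(u_good, v_good)   # pixel where the overlay should appear
print(u_bad - u_good)   # horizontal misregistration, in pixels
```

With these illustrative values a 5% focal-length error already shifts the overlay by 10 pixels, which the user perceives as the virtual object "floating" off its real anchor.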


The second aspect that I would like to emphasize is that, compared to my original project, in which I divided the story and the study into three sessions, it would be worthwhile to leave the various topics covered open, so as to trace a logical path that is more fractal than linear.


For visualization we use an existing model of video glasses, the vrd 920W, with a virtual LCD screen size of 80". In addition to the two types of viewers, Optical See-Through and Video See-Through, research on the market is currently moving in the direction of the Virtual Retinal Display (VRD), which was designed and implemented, as stated, by the Human Interface Technology Lab (HIT). The VRD was invented at the University of Washington's HIT Lab in 1991, and development began in November 1993. The aim was to produce a full-color, wide field-of-view, high-resolution, high-brightness, low-cost virtual display. Microvision Inc. holds the exclusive license to commercialize the VRD technology.
This technology has many potential applications for HMDs, from military and aerospace to medicine and cultural heritage. The VRD works by projecting a beam of modulated light (generated by very low-power lasers that scan horizontally and vertically) directly onto the retina of the eye, thus producing a raster image. The observer has the illusion of seeing the source image as if it were displayed about 50-60 cm away on a 14" monitor, with excellent quality in terms of definition, and also in stereoscopic mode. The VRD can be connected wirelessly to any device — in our case the Nokia — and, once worn, displays pictures and information directly on the retina of our eye. Its strength compared to other similar devices is the lack of a screen. We can say that all technologies have their merits and their faults, so the choice of technology fundamentally depends on the application requirements assumed in the project.
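The scan pattern described above — a fast horizontal sweep nested inside a slow vertical sweep, with the beam intensity modulated at each position — can be sketched as follows. This is my own illustration of a generic raster scan, not Microvision's implementation.

```python
# Sketch of the raster pattern a VRD traces on the retina:
# the fast horizontal deflection runs inside the slow vertical one,
# visiting every pixel position once per frame.

def raster_scan(rows, cols):
    """Yield (row, col) positions in the order the beam visits them."""
    for row in range(rows):          # slow vertical deflection
        for col in range(cols):      # fast horizontal deflection
            yield row, col

# A toy 2x3 "frame": the beam visits six positions, row by row.
positions = list(raster_scan(2, 3))
print(positions)
# [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2)]
```

In the real device the laser's intensity at each visited position encodes the corresponding pixel of the source image, which is what produces the perceived picture.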

How do these glasses work ....

In active stereoscopy, special projectors can refresh the image more than 120 times per second, and the two images are alternated during the refresh process through a page-flip (or quad-buffered) mechanism. The result is that the two images alternate, each of them refreshed 60 times per second — enough to view the content without noticing flicker in the images. The projection system is connected to special infrared control units: these units communicate with so-called LCD shutter glasses and are able to control the darkening of the right and left lenses of the glasses. If the control unit is well synchronized with the projector, it will be able to blank the right lens while the projector displays the left image, and vice versa, all at a refresh rate of 60 times per second per eye — not perceptible to the human eye — so the brain believes it sees the two images simultaneously and "fuses" them.
Active systems, as opposed to those that use simple passive polarized glasses, which introduce artifacts into the original images, offer the best possible quality of stereoscopic vision without altering the initial quality of the image....
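The timing described above can be sketched as a simple simulation (illustrative only, not a driver implementation): at 120 projector frames per second, even frames carry the left image with the left lens open, odd frames the right, so each eye receives 60 updates per second.

```python
# Sketch of page-flip timing in active stereoscopy: the projector
# alternates left/right images at 120 Hz, and the IR-synchronized
# shutter glasses open only the lens matching the displayed image.

FRAME_RATE = 120  # projector refreshes per second (assumed)

def frame_schedule(n_frames):
    """For each projector frame, return (image shown, lens left open)."""
    schedule = []
    for frame in range(n_frames):
        eye = "left" if frame % 2 == 0 else "right"
        schedule.append((eye, eye))  # the open lens matches the image
    return schedule

one_second = frame_schedule(FRAME_RATE)
left_updates = sum(1 for image, lens in one_second if image == "left")
right_updates = sum(1 for image, lens in one_second if image == "right")
print(left_updates, right_updates)  # 60 60 -> 60 Hz per eye
```

This also shows why synchronization is critical: if the shutter sequence drifts by one frame, each eye sees the image intended for the other, and the stereo effect is destroyed.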




As suggested above, I would like to write shorter posts but at a higher frequency, so that the story can be more exciting.

In the next chapter, 2.1: smartphone interfaces with the glasses ... (Nokia)

Thank You
