This page contains a list of recently published work relating to user experience in Augmented Reality. The entries are largely academic in nature, but we will also link to any other relevant published work that contributes to the ARUX discussion. The list will be updated regularly, and the aim is for this page to be a useful, reliable resource that you return to often.
Museum Exhibit Content Recommendation and Guidance System Focusing on Experience Design
Luh, Chiang, Huang and Yang - 2011
With economic growth and changing consumer needs, museums have gradually adopted the concept of experiential marketing, and the guidance system has become an important experiential medium. Its service model has developed from “one-way standardization” and “passive customization” to “active customization” and “personal adaptability”, aiming to attract visitors by providing unforgettable and unique experiences. This study integrated different scholars’ viewpoints and principles on “experience”, using design innovation to develop a content recommendation and guidance system consisting of three man-machine systems, four databases and three core techniques. Furthermore, the study used a violin exhibition as an example to describe the “actively customized and recommended display content” innovative experience model of “artificial intelligence people” and “invisible encircling”.
Mobile Augmented Reality for books on a shelf
Chen, Tsai, Hsu, Singh & Girod - 2011
Retrieving information about books on a bookshelf by snapping a photo of book spines with a mobile device is very useful for bookstores, libraries, offices, and homes. In this paper, we develop a new mobile augmented reality system for book spine recognition. Our system achieves very low recognition delays, around 1 second, to support real-time augmentation on a mobile device’s viewfinder. We infer user interest by analyzing the motion of objects seen in the viewfinder. Our system initiates a query during each low-motion interval. This selection mechanism eliminates the need to press a button and avoids using degraded motion-blurred query frames during high-motion intervals. The viewfinder is augmented with a book’s identity, prices from different vendors, average user rating, location within the enclosing bookshelf, and a digital compass marker. We present a new tiled search strategy for finding the location in the bookshelf with improved accuracy, in half the time of a previous state-of-the-art system. Our AR system has been implemented on an Android smartphone.
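For readers who want a concrete picture of the button-free query selection described in this abstract, here is a minimal sketch of motion-gated triggering: estimate inter-frame motion and fire a recognition query only once the viewfinder has been still for a few frames. The flat grayscale frame representation, threshold, and frame counts are illustrative assumptions, not the paper's actual motion estimator or parameters.

```python
# Sketch of motion-gated query triggering: a query fires only during
# low-motion intervals, so no button press is needed and motion-blurred
# frames are never used. Thresholds here are toy values.

def frame_motion(prev, curr):
    """Mean absolute pixel difference between consecutive grayscale frames."""
    return sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)

def query_frames(frames, threshold=2.0, stable_needed=3):
    """Return indices of frames on which a recognition query would fire:
    the first frame after `stable_needed` consecutive low-motion intervals."""
    fired, stable = [], 0
    for i in range(1, len(frames)):
        if frame_motion(frames[i - 1], frames[i]) < threshold:
            stable += 1
            if stable == stable_needed:   # viewfinder has settled: query once
                fired.append(i)
        else:
            stable = 0                    # high motion: reset and skip blurred frames
    return fired
```

In practice the motion estimate would come from tracked feature points rather than raw pixel differences, but the gating logic is the same.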
Computation of Three-Dimensional Electromagnetic Fields for an Augmented Reality Environment
Buchau & Rucker - 2011
Augmented reality is predestined for the visualization of electromagnetic fields in air or inside transparent matter. Real, existing objects are studied, and the invisible electromagnetic fields are added as virtual objects. Hence, experts as well as students are able to connect electromagnetic fields easily with the studied objects. They can concentrate on physical effects instead of on reading figures. Here, an easy-to-use augmented reality environment for three-dimensional electromagnetic fields is presented.
Wherever you go, there you are: Place-based Augmented Reality Games for learning
Squire, Jan, Matthews, Wagler, Devane & Holden - 2008
Games are among the oldest forms of experiential learning. Game-based learning scenarios are a staple in the military; games have been used to represent, communicate and explore the dynamics of complex situations with multiple interacting variables. Today’s videogames allow new kinds of interactions, including real-time 3D and physics simulation. Learners can participate in complex systems over distance and time, and express themselves through game tools (Casti, 1997; Squire, 2004). In recent years, the military has embraced gaming (Prensky, 2001). However, the lack of clear purpose, rationale, and theoretical framework for educational games has hindered their uptake in other environments (Gredler, 1996). Games may create “greater engagement,” but they have, with few exceptions, rarely demonstrated long-term learning gains. Positivist research paradigms have failed to detect changes because they have overlooked the interdependences between gaming and other instructional strategies, the importance of social interactions in the gaming experience, or unanticipated learning outcomes (Squire, 2004). Better-developed pedagogical models that can be refined and tested through iterative research and design, and more open and flexible assessment models, might push the field forward (Barab & Squire, 2004).
A smart crane operations assisted system using Augmented Reality
Chen, Chi & Kang - 2011
With the increasing complexity of modern construction projects, it is becoming an important issue to increase erection speed while maintaining safe operations. There are four major problems in current crane operations: (1) dynamic working environments, (2) the operator’s limited view, (3) unclear communication with other crew and (4) an oversimplified control interface, all of which impede efficient and safe erections. This research proposes an integrated environment to provide sufficient information to operators during the erection process. To achieve this goal, we designed an augmented reality (AR) system with four modules: a field information collector, a virtual information collector, a construction planner, and an integrated AR display. Field information is collected by four video cameras, and virtual information is collected from a building information model (BIM). The construction planner module then processes the information, calculates efficient erection paths and analyzes possible risks in the erection environment. The results are categorized into two groups, erection progress information and limitation information, and delivered to operators. To verify feasibility, we implemented a control system for a Kuka KR 16 CR, which simulates a construction crane. In the near future, we will conduct a user test to verify the usability of the proposed system.
Non-Visual Augmented Reality and the Evaporation of the Interface
Parecki & Case - 2011
This presentation will highlight the advantages and disadvantages of visual and non-visual augmented reality. We’ll cover alternate types of augmented reality techniques and how they have been saving us time in the past few months. We’ll demonstrate how we’ve been merging available technologies with custom programming to create location-aware social networks with custom proximity notification. Finally, we’ll describe other uses for location sharing, such as automatically turning off house lights when leaving for work, wayfinding with piezoelectric buzzers, geonotes and other mashups that can be built using SMS, GPS, X10 and IRC as a control hub.
Dingli - 2011
Annotation is generally referred to as the process of adding notes to a text or diagram giving explanation or comment. At least, this is the standard definition found in the Oxford Dictionary. As a definition it is correct, but consider today’s world: a world where the distinction between the virtual and the real is slowly disappearing, and where physical windows that allow people in the real world and people in a virtual world to see each other (such as the virtual foyer) are starting to appear. These portals are not limited to buildings; in fact, the majority of them find their way into people’s pockets in the form of a mobile phone. Such phones go beyond traditional voice conversations and allow their users to have video calls and internet access, and the list of features goes on to include emerging technologies such as augmented reality. This extension of reality is bringing about new forms of media and, with it, new annotation needs, ranging from the annotation of videos or music for semantic searches up to the annotation of buildings or even humans.
Towards intelligent environments: an augmented reality-brain-machine interface operated with a see-through head-mount display
Takano, Hata & Kansaku - 2011
The brain-machine interface (BMI) or brain-computer interface (BCI) is a new interface technology that uses neurophysiological signals from the brain to control external machines or computers. This technology is expected to support daily activities, especially for persons with disabilities. To expand the range of activities enabled by this type of interface, here, we added augmented reality (AR) to a P300-based BMI. In this new system, we used a see-through head-mount display (HMD) to create control panels with flicker visual stimuli to support the user in areas close to controllable devices. When the attached camera detects an AR marker, the position and orientation of the marker are calculated, and the control panel for the pre-assigned appliance is created by the AR system and superimposed on the HMD. The participants were required to control system-compatible devices, and they successfully operated them without significant training. Online performance with the HMD was not different from that using an LCD monitor. Posterior and lateral (right or left) channel selections contributed to operation of the AR-BMI with both the HMD and LCD monitor. Our results indicate that AR-BMI systems operated with a see-through HMD may be useful in building advanced intelligent environments.
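The selection principle behind a P300-based interface like the one above can be sketched briefly: each control-panel item flickers repeatedly, EEG epochs following each flash are averaged, and the item whose averaged response is largest around 300 ms after the flash is taken as the user's choice. The epoch layout and scoring window below are illustrative assumptions, not the paper's actual signal processing pipeline.

```python
# Minimal sketch of P300 target selection: average the epochs recorded
# after each item's flashes, then pick the item with the largest mean
# amplitude inside the (assumed) P300 window.

def p300_select(epochs_by_item, window=(5, 9)):
    """epochs_by_item: {item: [epoch, ...]}, each epoch a list of samples
    aligned to that item's flash onsets. Returns the selected item."""
    lo, hi = window

    def score(epochs):
        n = len(epochs)
        # average the epochs sample-by-sample to suppress background EEG,
        # then take the mean amplitude over the P300 window
        avg = [sum(e[t] for e in epochs) / n for t in range(len(epochs[0]))]
        return sum(avg[lo:hi]) / (hi - lo)

    return max(epochs_by_item, key=lambda item: score(epochs_by_item[item]))
```

Averaging over repeated flashes is what makes the P300 deflection detectable without user training, at the cost of selection speed.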
Face Tracking for Augmented Reality Game Interface and Brand Placement
Lee & Lee - 2011
This paper proposes an AR game interface that is faster and more emotive through the use of an intelligent autonomous agent. Since the AR game’s operation is designed around the gamer’s face tracking, the study applied those head movements to the game. Because the nature of the game requires real-time interaction, the CBCH algorithm was selected for face recognition. When face tracking fails, the interface agent provides the gamer with audio information to support situation perception, and re-tracking is enabled so that the proposed algorithm helps the gamer react effectively to an attack. The paper also examines a new revenue model for the game industry through interdisciplinary research between games and advertising. In conclusion, application to a 3D ping-pong game produced effective and compelling results, and the proposed algorithm may serve as a foundation for developing AR game interfaces.
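CBCH here refers to a cascade of boosted classifiers with Haar-like features, i.e. the Viola-Jones detection scheme. What makes it fast enough for per-frame face tracking in a game is its early-reject structure, sketched below. The stages and feature functions are toy stand-ins; a real detector uses thresholded Haar-like feature responses with weights learned by AdaBoost.

```python
# Sketch of a boosted-cascade decision: a candidate window passes through a
# sequence of stages, each of which may reject it immediately. Most non-face
# windows exit at the first stages, so full evaluation is rare.

def run_cascade(window, stages):
    """stages: list of (weak_classifiers, stage_threshold), where each weak
    classifier is a (feature_fn, weight) pair. Returns True iff the window
    survives every stage, i.e. is accepted as a face candidate."""
    for weak_classifiers, stage_threshold in stages:
        stage_score = sum(weight * feature_fn(window)
                          for feature_fn, weight in weak_classifiers)
        if stage_score < stage_threshold:
            return False          # early reject: cheap stages filter most windows
    return True
```

In an AR game loop, this test would run over image windows at several scales each frame, with the accepted window driving the game's view of the player's head position.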
From Sensor to Observation Web with Environmental Enablers in the Future Internet
Havlik, Schade, Sabeur et al.
This paper outlines the grand challenges in global sustainability research and the objectives of the FP7 Future Internet PPP program within the Digital Agenda for Europe. Large user communities are generating significant amounts of valuable environmental observations at local and regional scales using the devices and services of the Future Internet. These observations represent a wealth of information which is currently hardly used, or used only in isolation, and is therefore in need of integration with other information sources. Indeed, this very integration will lead to a paradigm shift from a mere Sensor Web to an Observation Web with semantically enriched content emanating from sensors, environmental simulations and citizens. The paper also describes the research challenges in realizing the Observation Web and the associated environmental enablers for the Future Internet. Such an environmental enabler could, for instance, be an electronic sensing device, a web-service application, or even a social networking group affording or facilitating the capability of Future Internet applications to consume, produce, and use environmental observations in cross-domain applications. The term “envirofied” Future Internet is coined to describe this overall target, which forms a cornerstone of work in the Environmental Usage Area within the Future Internet PPP program. Relevant trends described in the paper are the usage of ubiquitous sensors (anywhere), the provision and generation of information by citizens, and the convergence of real and virtual realities to convey understanding of environmental observations. The paper addresses the technical challenges in the Environmental Usage Area and the need for designing a multi-style service-oriented architecture. Key topics are the mapping of requirements to capabilities, and providing scalability and robustness while implementing context-aware information retrieval.
Another essential research topic is handling data fusion and model based computation, and the related propagation of information uncertainty. Approaches to security, standardization and harmonization, all essential for sustainable solutions, are summarized from the perspective of the Environmental Usage Area. The paper concludes with an overview of emerging, high impact applications in the environmental areas concerning land ecosystems (biodiversity), air quality (atmospheric conditions) and water ecosystems (marine asset management).
GeoPointer – approaching tangible augmentation of the real world
Beer - 2011
Purpose – The aim of this paper is to present an architecture and prototypical implementation of a context-sensitive software system which combines the tangible user interface approach with a mobile augmented reality (AR) application.
Design/methodology/approach – The work which is described within this paper is based on a creational approach, which means that a prototypical implementation is used to gather further research results. The prototypical approach allows performing ongoing tests concerning the accuracy and different context-sensitive threshold functions.
Findings – Within this paper, the implementation and practical use of tangible user interfaces for outdoor selection of geographical objects is reported and discussed in detail.
Research limitations/implications – Further research is necessary within the area of context-sensitive dynamically changing threshold functions, which would allow improving the accuracy of the selected tangible user interface approach.
Practical implications – The practical implication of using tangible user interfaces within outdoor applications should improve the usability of AR applications.
Originality/value – Despite the fact that there exist a multitude of research results within the area of gesture recognition and AR applications, this research work focuses on the pointing gesture to select outdoor geographical objects.
Futuristic learning and training in augmented reality
Lee - 2011
There are many different ways for people to be educated and trained in the specific information and skills they need. These methods include classroom lectures with textbooks, computers, handheld devices, and other electronic appliances. The choice of learning innovation depends on an individual’s access to various technologies and the available infrastructure of the surrounding community. In a rapidly changing society with a great deal of available information and knowledge, adopting and applying information at the right time and place is needed to maintain efficiency in both school and business settings. Augmented Reality (AR) is one technology that dramatically shifts the location and timing of learning and training. This literature review describes Augmented Reality (AR), how it applies to learning and training, and its potential impact on the future of education.
Psychophysical Influence of Mixed-Reality Visual Stimulation on Sense of Hardness
Hirano, Kimura, Shibata & Tamura - 2011
In a mixed-reality (MR) environment, the appearance of touchable objects can be changed by superimposing a computer-generated image (CGI) onto them (MR visual stimulation). At the same time, when humans sense the hardness of real objects, their perception is known to be influenced not only by tactile information but also by visual information. In this paper, we study the psychophysical influence on the sense of hardness of a real object that has a CGI superimposed on it. In the experiment, we deform the CGI animation on the real object in an exaggerated way while the subject pushes the real object with a finger. The experiments found that subjects sensed different degrees of hardness when the dent deformation of the CGI animation was emphasized.
A complete optical see-through set up for augmented reality exhibitions: from lab innovations to audience testings
Agnola, Lutton, Liu, et al. - 2011
We present a complete augmented reality set-up built around meaningful educational content that has been exhibited and assessed through large-audience experience feedback. To meet the scenographic and technical requirements, the project involved the design and integration of innovative technologies. Based on an original optical see-through display, we designed specific sensing algorithms ranging from infrared-diode pattern tracking to gesture recognition methods, along with a customizable P3P pose estimation module. All these modules have been efficiently integrated into a real-time, immersive educational experience for exploring the Earth environment and understanding how satellites work. With the support of a professional exhibition team, we also benefited from a consistent human-factors study to draw exciting perspectives for the system and assess its usability in other applications.
What is the future of Augmented Reality technology on smartphones?
Bieszke - 2011
Technological developments have been influencing the way we live and changing our behavioural patterns. Following the diffusion of more powerful, media-centric cell phones, Augmented Reality technology is migrating from industrial niches to mass technology. This work investigates Augmented Reality technology and its future on smartphones. An insight into the history of Augmented Reality explains the term and its origin. Augmented Reality, as an intersection between the virtual and the physical realities where synthetic information is integrated into the real world, possesses both virtual and real features. The dissertation describes the advantages of using the technology in different application areas. Furthermore, it demonstrates the key types of Augmented Reality and the common output devices, from Head Mounted Displays to mobile phones. The work examines the mobile applications market in order to obtain relevant data about current application trends and to assess the prospects of AR technology. Survey results are used to identify the level of awareness amongst smartphone owners. In addition, it indicates which mobile handsets and operating systems are widely used by UK customers, as one of the factors having an impact on further progress in adopting Augmented Reality technology. The dissertation outlines existing problems, such as technical restrictions and low awareness, as well as ethical and social barriers. Finally, it focuses on future directions and possible improvements.
Modeling Places for Interactive Media and Entertainment Applications
Nóbrega - 2011
Taking advantage of the multitude of cameras now available and capable of recording all aspects of our lives, this work explores the notion of virtualizing a physical place using cameras and sharing the resulting model with others. This social sharing would create new forms of relationship and common space discovery that would enhance video chats and virtual visiting of physical places. Furthermore, the research will consider the possible interactive applications, from games to augmented reality, which can take advantage of the created spatial and temporal models.
Mobile Augmented Reality as an Emerging Technology in Education
Smith & Brown - 2011
Augmented Reality is a digital layer of information overlaid on a real environment. Although augmented reality applications have existed for many years, the increased capabilities of mobile media devices have led to the development of mobile augmented reality applications, which use the device’s camera and internet connection to overlay the image on the screen with additional information. This paper examines the impacts this emerging technology might have on learning and education.
The Effect of Augmented Reality Training on Percutaneous Needle Placement in Spinal Facet Joint Injections
Yeo, Ungi, et al. - 2011
The purpose of this study was to determine if augmented reality image overlay and laser guidance systems can assist medical trainees in learning the correct placement of a needle for percutaneous facet joint injection. The Perk Station training suite was used to conduct and record the needle insertion procedures. A total of 40 volunteers were randomized into two groups of 20. (1) The Overlay group received a training session that consisted of four insertions with image and laser guidance, followed by two insertions with laser overlay only. (2) The Control group received a training session of six classical freehand insertions. Both groups then conducted two freehand insertions. The movement of the needle was tracked during the series of insertions. The final insertion procedure was assessed to determine if there was a benefit to the overlay method compared to the freehand insertions. The Overlay group had a better success rate (83.3% vs. 68.4%, p=0.002), and potential for less tissue damage as measured by the amount of needle movement inside the phantom (3077.6 mm² vs. 5607.9 mm², p=0.01). These results suggest that an augmented reality overlay guidance system can assist medical trainees in acquiring technical competence in a percutaneous needle insertion procedure.
The Use of Tangible and Augmented Reality Models in Engineering Graphics Courses
Chen, Chi, Hung & Kang - 2011
Engineering graphics courses are typically a requirement for engineering students around the world. Besides understanding and depicting graphic representations of engineering objects, the goal of these courses is to give students an understanding of the relationship between three-dimensional (3D) objects and their projections. However, in the classroom, where time is limited, it is very difficult to explain 3D geometry using only drawings on paper or at the blackboard. The research presented herein aims to develop two teaching aids, a tangible model and an augmented reality (AR) model, to help students better understand the relationship between 3D objects and their projections. Tangible models are physical objects comprising a set of differently shaped pieces; the tangible model we developed includes eight wooden blocks covering all the main geometrical features with respect to their 3D projections. The AR models are virtual models which superimpose 3D graphics of typical geometries on real-time video and dynamically vary the viewing perspective in real time, so that they appear as real objects. The AR model was developed using the ARToolKitPlus library and includes all the geometrical features generally taught in engineering graphics or technical drawing courses. To verify the effectiveness and applicability of the models, we conducted a user test with 35 engineering-major students. The statistical results indicated that the tangible model significantly increased students’ learning performance in transferring 3D objects onto 2D projections. Students also demonstrated higher engagement with the AR model during the learning process. Compared to screen-based orthogonal and pictorial images, the tangible model and augmented reality model were evaluated as more effective teaching aids for engineering graphics courses.
Kinected conference: augmenting video imaging with calibrated depth and audio
DeVincenzi, Yao, Ishii & Raskar - 2011
The proliferation of broadband and high-speed Internet access has, in general, democratized the ability to engage in videoconferencing. However, current video systems do not meet their full potential, as they are restricted to a simple display of unintelligent 2D pixels. In this paper we present a system for enhancing distance-based communication by augmenting the traditional video conferencing system with additional attributes beyond two-dimensional video. We explore how expanding a system's understanding of spatially calibrated depth and audio alongside a live video stream can generate semantically rich three-dimensional pixels containing information regarding their material properties and location in space.
3D Modeling for Mobile Augmented Reality in Unprepared Environment
Thomas, Daniel & Pouliot - 2011
The emergence of powerful mobile smartphones, with embedded components (camera, GPS, accelerometers, digital compass), triggered a lot of interest in the mobile augmented reality (AR) community and new AR applications relying on these devices are beginning to reach the general public. In order to achieve a rich augmentation in terms of immersion and interactions, these mobile AR applications generally require a 3D model of the real environment to provide accurate positioning or to manage occlusions. However, the availability of these 3D models based on real spatial data is limited, restraining the capacity of these applications to be used anywhere, anytime. To overcome such limits, we developed a framework dedicated to the fast and easy production of 3D models. The proposed solution has been designed for the specific context of mobile augmented reality applications in unprepared environment and tested on iPhone.
A combined mixed reality and networked home approach to improving user interaction with consumer electronics
Belimpasakis & Walsh - 2011
User interaction in a modern networked home can be unintuitive when a smartphone is used for control. Layers of menus and the lack of a tangible real-world interaction metaphor complicate the everyday use of these devices from a smartphone. We propose a solution focusing on mixed-reality interaction, combining computer vision and ubiquitous computing techniques to allow the user simply to point the smartphone camera at the device of interest in order to control it. This adds value to the consumer electronics ecosystem by improving the networked-home user interaction experience.
Personal Projectors for Pervasive Computing
Rukzio, Holleis & Gellersen - 2011
Projectors are pervasive as infrastructure devices for large display, but are now also becoming available in small form factors that afford mobile personal use. This article analyzes “projectors on the move” and their interaction space with a survey of input and output concepts, underlying sensing challenges, and emerging applications.
Perceptual Issues in Augmented Reality Revisited
Kruijff, Swan & Feiner - 2010
This paper provides a classification of perceptual issues in augmented reality, created with a visual processing and interpretation pipeline in mind. We organize issues into ones related to the environment, capturing, augmentation, display, and individual user differences. We also illuminate issues associated with more recent platforms such as handhelds or projector-camera systems. Throughout, we describe current approaches to addressing these problems, and suggest directions for future research.
A Cultural Perspective on Mixed, Blended and Dual Reality
Applin & Fischer - 2011
An anthropological perspective on the impact of Dual Reality, Mixed Reality, Augmented Reality (AR) and 'PolySocial Reality' (PoSR) on location awareness and other applications in smart environments. Humans can, and do, switch context between environments and blend traces of one into the other in a socially unconscious manner, often seemingly simultaneously. We propose that the cultures and behaviors of humans are increasingly and actively permeating Internet and network-based applications, as well as those that are geolocal. With the future Internet of Things and Dual, Mixed, and Augmented Reality, the opportunity for humans to extend their own blended reality, as well as to create new ones, is unfolding. That said, because humans interact within groups, the multiplexing of their mutual blended realities rapidly creates a PolySocial Reality (PoSR). Sorting out the relationships between realities, as well as between synchronous and asynchronous time and geolocal space, can create a condition where realities are simultaneous and the idea of 'x' can be perceived as equaling 'not x'. We explore this new type of interoperability between virtual and physical, ideational and material, representations and objects, and culture.
Peripheral vision annotation: noninterference information presentation method for mobile augmented reality
Ishiguro & Rekimoto - 2011
Augmented-reality (AR) systems present information about a user's surrounding environment by overlaying it on the user's real-world view. However, such overlaid information tends to obscure a user's field of view and thus impedes a user's real-world activities. This problem is especially critical when a user is wearing a head-mounted display. In this paper, we propose an information presentation mechanism for mobile AR systems by focusing on the user's gaze information and peripheral vision field. The gaze information is used to control the positions and the level-of-detail of the information overlaid on the user's field of view. We also propose a method for switching displayed information based on the difference in human visual perception between the peripheral and central visual fields. We develop a mobile AR system to test our proposed method consisting of a gaze-tracking system and a retinal imaging display. The eye-tracking system estimates whether the user's visual focus is on the information display area or not, and changes the information type from simple to detailed information accordingly.
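The gaze-dependent switching this abstract describes can be sketched simply: detailed annotations are shown only when the estimated gaze point falls inside the information display area (central vision); otherwise a simple, low-detail cue is kept in the peripheral field so it does not obscure the real-world view. The coordinate scheme and rectangle representation below are illustrative assumptions, not the authors' implementation.

```python
# Sketch of gaze-driven level-of-detail switching for a peripheral-vision
# annotation display: detail is shown only under direct fixation.

def annotation_mode(gaze, info_area):
    """gaze: (x, y) estimated fixation point from the eye tracker.
    info_area: (x_min, y_min, x_max, y_max) of the information display area.
    Returns 'detailed' when the gaze is on the area, else 'simple'."""
    x, y = gaze
    x_min, y_min, x_max, y_max = info_area
    inside = x_min <= x <= x_max and y_min <= y <= y_max
    return "detailed" if inside else "simple"
```

A real system would also debounce on fixation duration so that a saccade passing over the area does not flash the detailed view.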
Augmenting the Present with the Past
Champion - 2011
The spectrum of virtual reality was explained in conference papers by Paul Milgram and others (Drascic and Milgram 1996; Milgram and Kishino 1994). In the latter paper they defined augmented reality as “augmenting natural feedback to the operator with simulated cues”, but they also noted that it has been defined as “a form of virtual reality where the participant’s head-mounted display is transparent, allowing a clear view of the real world.”
Augmented Reality 2.0
Schmalstieg, Langlotz & Billinghurst - 2011
Augmented Reality (AR) was first demonstrated in the 1960s, but only recently have technologies emerged that can be used to easily deploy AR applications to many users. Camera-equipped cell phones with significant processing power and graphics abilities provide an inexpensive and versatile platform for AR applications, while the social networking technology of Web 2.0 provides a large-scale infrastructure for collaboratively producing and distributing geo-referenced AR content. This combination of widely used mobile hardware and Web 2.0 software allows the development of a new type of AR platform that can be used on a global scale. In this paper we describe the Augmented Reality 2.0 concept and present existing work on mobile AR and web technologies that could be used to create AR 2.0 applications.
Using presence to evaluate an augmented reality location aware game
McCall, Wetzel, Löschner & Braun - 2011
Location-aware augmented reality games provide players with a rich and potentially unlimited range of interaction possibilities. In this paper, a study is described which uses a number of measurement techniques, including questionnaires, direct observation, semi-structured interviews and video analysis, to measure players' sense of presence. The paper points to the importance of the availability of actions within augmented reality games and how this shapes players' sense of presence. The findings indicate that such an approach to measuring presence can provide valuable information on the structure of location-aware augmented reality games.
Social Gaming and Learning Applications: A Driving Force for the Future of Virtual and Augmented Reality?
Dörner, Lok & Broll - 2011
Backed by a large consumer market, entertainment and education applications have spurred developments in the fields of real-time rendering and interactive computer graphics. Relying on Computer Graphics methodologies, Virtual Reality and Augmented Reality benefited indirectly from this; however, there is no large-scale demand for VR and AR in gaming and learning. What are the shortcomings of current VR/AR technology that prevent widespread use in these application areas? What advances in VR/AR will be necessary? And what might future “VR-enhanced” gaming and learning look like? Which role can and will Virtual Humans play? Concerning these questions, this article analyzes the current situation and provides an outlook on future developments. The focus is on social gaming and learning.
Augmented reality application for the navigation of people who are blind
Sánchez & Tardes - 2011
A person who is blind can be capable of locating objects and also other people, such as a sighted person, by using audio cues alone. In this research we present the design, development and evaluation of ARTAB (Augmented Reality Tags for Assisting the Blind), a technological assistant for people who are blind, which uses Augmented Reality to identify a set of objects in an indoor environment. As a result, we generated audio-based representations that allow a user to determine the position of an object relative to the angle of vision of the video capture device for navigation purposes. The usability testing performed allowed us to detect that it is not trivial to assign sound effects so that the variation of such effects would imply changes in the position of an object. The continual variation of the sound pitch does not generate the contrast necessary for the user who is blind to be able to obtain a certain kind of information. However, users generally perceive ARTAB as a useful tool for assisting orientation and mobility tasks.
Real-time photorealistic virtualized reality interface for remote mobile robot control
Kelly, Chan, Herman, et al. - 2011
The task of teleoperating a robot over a wireless video link is known to be very difficult. Teleoperation becomes even more difficult when the robot is surrounded by dense obstacles, or speed requirements are high, or video quality is poor, or wireless links are subject to latency. Due to high-quality lidar data and improvements in computing and video compression, virtualized reality has the capacity to dramatically improve teleoperation performance - even in high-speed situations that were formerly impossible. In this paper, we demonstrate the conversion of dense geometry and appearance data, generated on-the-move by a mobile robot, into a photorealistic rendering model that gives the user a synthetic exterior line-of-sight view of the robot, including the context of its surrounding terrain. This technique converts teleoperation into virtual line-of-sight remote control. The underlying metrically consistent environment model also introduces the capacity to remove latency and enhance video compression. Display quality is sufficiently high that the user experience is similar to a driving video game where the surfaces used are textured with live video.
Using augmented reality to support the understanding of three-dimensional concepts by blind people
Kirner, Kirner, Wataya & Valente
Describing real and imaginary three-dimensional scenes from the observer’s viewpoint is an intuitive activity for visually non-impaired people, but it is difficult for congenitally blind people, since it involves abstract concepts, such as perspective, depth planes, occlusion, etc. This paper discusses these problems and presents physical environments and procedures supported by an augmented reality tool in order to help blind people to understand, describe and convert three-dimensional scenes into two-dimensional embossed representations. To verify how blind people can acquire these concepts, we developed an augmented reality application that worked as an audio spatial tutor to make the perspective-learning process easy. The application was tested with 10 congenitally blind people, who learned to understand the perspective concepts and who reported on the experience. Finally, we discuss the learning method and technical aspects, suggesting ways to improve the augmented reality application and how it can be released.
Augmented calligraphy: experimental feedback design for writing skill development
Shichinohe, Yamabe, Iwata & Nakajima - 2010
In this demonstration, we introduce the augmented calligraphy system that aims at supporting a calligraphy learner's self-training process by giving feedback. In order to write characters well, body posture is a very important factor. However, it is difficult to keep proper posture without any assistance. Therefore, the system monitors the learner's posture by a web camera and notifies them if the posture moves into a bad shape. Several types of multimodal feedback were implemented, since we are particularly interested in how feedback design can decrease cognitive load.
ARCrowd - a tangible interface for interactive crowd simulation
Zheng & Li - 2011
Manipulating a large virtual crowd in an interactive virtual reality environment is a challenging task due to the limitations of the traditional user interface. To address this problem, a tangible interface based on augmented reality (AR) technology is introduced. With a novel interaction framework, the users are allowed to manipulate the virtual characters directly, or to control the crowd behaviors with markers and gestures. The marker-gesture pairs are used to adjust the environment factors, the decision-making processes of virtual crowds, and their reactions. The AR interface provides a more intuitive means of control for the users, improving the efficiency of the user interface. Several simulation examples are provided to illustrate the various crowd control methods.
A ludological view on the pervasive mixed-reality game research paradigm
Montola - 2011
During the last 10 years, numerous mixed-reality game prototypes have been built and studied. This paper is a game studies attempt at understanding the findings of that research. First, this paper will look into the paradigm of pervasive mixed-reality game research, analyzing how these games have been produced and studied. Then, there is an overview of some central, recurring findings of that paradigm that is written with the intent of generalizing lessons of individual experiments. Finally, there is a discussion on research methodology, analyzing how this type of research could better validate the findings that have to do with play experiences and game design.
Co-presence in Mixed Reality-Mediated Collaborative Design Space
Wang & Wang - 2011
Co-presence has been considered a critical factor in shared virtual environments (SVEs). It is believed that by increasing the level of co-presence, collaborative design performance in a shared virtual environment could be improved. The aim of this chapter is to reflect on the concept and characteristics of co-presence, by considering how Mixed Reality (MR)-mediated collaborative virtual environments could be specified, and therefore to provide distributed designers with a more effective design environment that improves the sense of “being there” and “being together”.
Theoretical and methodological implications of designing and implementing multiuser location-based games
Diamantaki, Rizopoulos, Charitos & Tsianos - 2011
Multiuser location-aware applications present a new form of mediated communication that takes place within a digital as well as a physical spatial context. The inherently hybrid character of locative media use necessitates that the designers of such applications take into account the way communication and social interaction are influenced by contextual elements. In this paper, an investigation into the communicational and social practices of users who participate in a location-based game is presented, with an emphasis on group formation and dynamics, interpersonal communication, and experienced sense of immersion. This investigation employs a methodological approach that relies on both qualitative and quantitative data analysis. Results of this user experience study are presented and discussed.
A research methodology for evaluating location aware experiences
Reid, Hull, Clayton, Melamed & Stenton - 2011
Research field trials of fully functional prototypes of location-based games are an effective way to test game designs and develop an understanding of what makes games compelling. They may also expose emergent behaviors or facets of the game which were not predicted in the design. Experiments are a good way to explore these emergent behaviors in a rigorous way. We describe an emergence-driven research methodology that formalizes this process of using emergent phenomena from research field trials to drive experiments. We also describe a range of techniques that can be used to evaluate location aware experiences.
GUI vs. TUI: Engagement for Children with No Prior Computing Experience (NOTE: TUI = Tangible UIs)
Cheng, Der, Sidhu & Omar - 2011
Graphical User Interfaces (GUIs) are a long-lasting type of interface that has been used for the past four decades, whereas Tangible User Interfaces (TUIs) are an emerging interface believed to address some of the GUI's weaknesses. There has been some research comparing these two interfaces in terms of their usability. ‘Engagement’ is a usability measure, but is not frequently utilized in GUI and TUI comparisons. Furthermore, most research done in such comparisons has not ventured into evaluating children with no prior computing experience. This paper highlights the outcome of an evaluation process incorporating both, i.e. engagement as a measure and children with no prior computing experience as the sample group. Prior computer experience and engagement are both measured using performance metrics through time-on-task, a validated observational technique. At the end of all the experiments, this research demonstrates that there are no significant differences in engagement between GUI and TUI when these children were exposed to both interfaces for the first time.
Sensor integration for perinatology research
Chen, Hu, Bouwstra, Oetomo & Feijs - 2011
The number of high-risk pregnancies and premature births is increasing due to the steadily higher age at which women get pregnant. The long-term quality of life of the neonates and their families depends increasingly on the ability to monitor the health status of mother and child accurately, continuously and unobtrusively throughout the perinatal period. Advances in sensor integration have enabled the creation of non-invasive solutions to improve the healthcare of the pregnant woman, and her child before, during and after delivery. In this paper, we present the design work of a smart jacket integrated with textile sensors for neonatal monitoring, and the software architecture of advanced sensor integration for a delivery simulator. A balanced integration of technology, user focus and design aspects is achieved. Prototypes were built to demonstrate the design concept, and experimental results were obtained in clinical settings.
The social camera: a case-study in contextual image recommendation
Bourke, McCarthy & Smythe - 2011
The digital camera revolution has changed the world of photography and now most people have access to, and even regularly carry, a digital camera. Often these cameras have been designed with simplicity in mind: they harness a variety of sophisticated technologies in order to automatically take care of all manner of complex settings (aperture, shutter speed, flash etc.) for point-and-shoot ease; these assistive features are usually incorporated directly into the camera's interface. However, there is little or no support for the end-user when it comes to helping them compose or frame a scene. To this end we describe a novel recommendation process which uses a variety of intelligent and assistive interfaces to guide the user in taking relevant compositions given their current location and scene context. This application has been implemented on the Android platform and we describe its core user interaction and recommendation technologies, and demonstrate its effectiveness in a number of real-world scenarios. Specifically we report on the results of a live-user trial of the technology in a real-world tourist setting.
From Internet to iPhone: Providing Mobile Geographic Access to Philadelphia's Historic Photographs and other Special Collections
Boyer - 2011
PhillyHistory.org contains more than 95,000 map and photographic records from the City of Philadelphia Department of Records and other local institutions, searchable and viewable by geographic location and other criteria. The Department of Records further expanded public access capabilities through the release of PhillyHistory.org optimized for smartphones, enabling users to view historic photos of a location as they stroll the streets of Philadelphia. PhillyHistory.org serves as a case study for how libraries can use mobile technologies to increase access to their special collections and provide learning opportunities that transcend the traditional web site.
Theorizing Locative Technologies Through Philosophies of the Virtual
de Souza e Silva & Sutko - 2011
The concept of the virtual along with the philosophical traditions it invokes has received sparse attention in the communication literature, even though the term “virtual” is used so readily to refer to new information and communication technologies (ICTs). In this article, we argue that philosophies of the virtual that dichotomize reality/representation, although perhaps sufficient for analyzing human–computer interfaces through the 21st century, are inadequate for developing theoretical approaches to understanding location-based technologies. We thus summarize different philosophies of the virtual to propose a theoretical framework for the study of locative technologies. We do so to open up an understanding of the practical effects of locative technologies in lived experience and use examples of that experience to show motivations for shifts in theory.
[Virtual + 1] * Reality - Blending “Virtual” and “Normal” Reality to Enrich Our Experience
Beckhaus - 2011
Virtual Reality aims at creating an artificial environment that can be perceived as a substitute to a real setting. Much effort in research and development goes into the creation of virtual environments that in their majority are perceivable only by eyes and hands. The multisensory nature of our perception, however, allows and, arguably, also expects more than that. As long as we are not able to simulate and deliver a fully sensory believable virtual environment to a user, we could make use of the fully sensory, multi-modal nature of real objects to fill in for this deficiency. The idea is to purposefully integrate real artifacts into the application and interaction, instead of dismissing anything real as hindering the virtual experience. The term virtual reality – denoting the goal, not the technology – shifts from a core virtual reality to an “enriched” reality, technologically encompassing both the computer generated and the real, physical artifacts. Together, either simultaneously or in a hybrid way, real and virtual jointly provide stimuli that are perceived by users through their senses and are later formed into an experience by the user’s mind.
Vision-Based Hand-Gesture Applications
Wachs, Kölsch, Stern & Edan - 2011
Body posture and finger pointing are a natural modality for human-machine interaction, but first the system must know what it's seeing.