
Tarsier Goggles: a virtual reality tool for experiencing the optics of a dark-adapted primate visual system

Abstract

Charles Darwin viewed eyes as the epitome of evolution by natural selection, describing them as organs of extreme perfection and complication. The visual system is therefore fertile ground for teaching fundamental concepts in optics and biology, subjects with scant representation during the rise and spread of immersive technologies in K-12 education. The visual system is an ideal topic for three-dimensional (3D) virtual reality learning environments (VRLEs), and here we describe a 3D VRLE that simulates the vision of a tarsier, a nocturnal primate that lives in southeast Asia. Tarsiers are an enduring source of fascination for having enormous eyes, both in absolute size and in proportion to the size of the animal. Our motivation for developing a tarsier-inspired VRLE, or Tarsier Goggles, is to demonstrate the optical and selective advantages of hyperenlarged eyes for nocturnal visual predation. In addition to greater visual sensitivity, users also experience reductions in visual acuity and color vision. On a philosophical level, we can never know the visual world of another organism, but advances in 3D VRLEs allow us to try in the service of experiential learning and educational outreach.

Background

The eye is an exquisite anatomical structure and fertile ground for demonstrating core concepts in physics (optics) and biology (evolution by natural selection), a pattern that began with Darwin himself. He described eyes as “organ[s] of extreme perfection and complication” (Darwin 1859, p. 186) and he used them as a foil for opposition in one of his most-quoted sentences:

To suppose that the eye with all its inimitable contrivances for adjusting the focus to different distances, for admitting different amounts of light, and for the correction of spherical and chromatic aberration, could have been formed by natural selection, seems, I freely confess, absurd in the highest degree.

Despite his ‘confession,’ Darwin never doubted the evolution of complex eyes, a view that has since received overwhelming support (Lamb et al. 2007; Gregory 2008). At the same time, the eyes and visual systems of animals are wonderfully diverse, a fact that fuels the pages of biology textbooks and fires our natural curiosity. Cronin et al. (2014) put it this way: “We humans are visual creatures. We are also introspective and curious, a combination that makes us all by nature amateur visual ecologists (even if we don’t know it). Because our world is dominated by visual sensations, we naturally wonder how other animals see their particular worlds.” On a philosophical level, we can never know the visual world of another organism (Nagel 1974), but the emergence and spread of immersive technologies enables us to try in the service of constructivist pedagogies (Colburn 2000), as a “way of seeing” fundamental concepts in optics and evolution (Scott et al. 1991).

3D virtual reality learning environments (VRLEs)

Three-dimensional (3D) virtual reality learning environments (VRLEs) are well suited to constructivism, especially when students must form 3D representations of course material or interact with a learning environment to construct knowledge (reviews: Huang et al. 2010; Merchant et al. 2014). Accordingly, the development and deployment of 3D VRLEs have expanded rapidly in K-12 and higher education, especially medical education (Wu et al. 2013; Jang et al. 2017); indeed, the anatomical education of medical students is a major catalyst for 3D VRLE technology. The practical value of 3D VRLEs for learning human anatomy hints at wider applications within K-12 biological education. For example, natural selection and evolution constitute another topic that invites constructivist pedagogies (Kalinowski et al. 2013; Lee et al. 2017; Prins et al. 2017). Here we describe a 3D VRLE with this goal in mind. It is intended to demonstrate the principles of visual optics and natural selection in a way that constructs knowledge and stimulates user reflection on diverse worldviews. The inspiration for our 3D VRLE is the tarsier, a primate with an extreme visual system.

Tarsiers and their visual world

Tarsiers are small (113–142 g) nocturnal primates (Fig. 1a). They are an enduring source of fascination for having enormous eyes, both in absolute size and in proportion to the size of the animal (Fig. 1b). Polyak (1957) concluded that the eye size relative to body size of tarsiers is unmatched by any living vertebrate. The extreme eye size of tarsiers is most likely related to the absence of a tapetum lucidum, the mirror-like structure that results in ‘eye shine’ (Cartmill 1980).

Fig. 1

a Bornean tarsier (Tarsius bancanus) under nocturnal conditions; note the extreme dilation of the pupil (photograph by David Haring, reproduced with permission). b Anatomical preparation of the eye and brain of T. bancanus (modified from Sprankel 1965), illustrating the comparable volume of the two structures (Castenholz 1984). The eyes of T. bancanus are therefore enormous, both in absolute size and in proportion to the size of the animal. c Geometry of the tarsier eye (modified from Castenholz 1984) illustrating our calculation of the visual angle

A tapetum lucidum is prevalent among nocturnal mammals, including nocturnal primates, because it increases photon capture and visual sensitivity under low light levels. The absence of a tapetum lucidum in tarsiers is therefore puzzling, and it is interpreted as evidence of an ancestral shift from nocturnality to diurnality followed by a reversion to nocturnality with a diurnally-adapted, tapetum-free eye (Cartmill 1980; Martin and Ross 2005). Thus, the hyper-enlarged eyes of tarsiers are widely viewed as a compensatory adaptation to improve visual sensitivity at night in the absence of a tapetum lucidum.

To appreciate why enlarged eyes are advantageous at night, we can use the dimensions of tarsier eyes to calculate the corresponding parameters for humans. For example, the eye-to-brain volume ratio of tarsiers (see Fig. 1b) can be scaled to human dimensions (see Appendix for calculations), to produce an eye with a diameter of 13.6 cm, the approximate volume of a grapefruit (Fig. 2a). The biological plausibility of this thought experiment is attested by the eyes of colossal squid (Mesonychoteuthis hamiltoni), which are nearly twice as large (Nilsson et al. 2012). Yet, the optic axes of these hypothetical eyes would never align with the visual axes of human binocular vision, so we merged the eyes to bring the optic and visual axes into alignment (Fig. 2b). In theory, such tarsier-inspired eyewear would enhance the visual sensitivity of human users (Fig. 2c), as the enlarged corneas would capture more photons under low light levels.
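
To make this concrete, here is a rough back-of-envelope sketch in Python (not a calculation from the paper) comparing the light-gathering area of the scaled cornea derived in the Appendix with an assumed nominal human corneal diameter of about 1.2 cm; photon catch from a point source scales roughly with aperture area, and the comparison deliberately ignores pupil constriction, focal length, and retinal factors.

```python
# Back-of-envelope sketch (not from the paper): relative light-gathering area of the
# hypothetical scaled cornea versus a nominal human cornea. Photon catch from a point
# source scales roughly with aperture area; this toy comparison ignores pupil
# constriction, focal length, and retinal factors.
import math

def aperture_area_cm2(diameter_cm: float) -> float:
    """Area of a circular aperture of the given diameter."""
    return math.pi * (diameter_cm / 2.0) ** 2

scaled_tarsier_cornea_cm = 11.75  # scaled corneal diameter, from the Appendix
human_cornea_cm = 1.2             # assumed nominal human corneal diameter (~12 mm)

ratio = aperture_area_cm2(scaled_tarsier_cornea_cm) / aperture_area_cm2(human_cornea_cm)
print(f"Relative light-gathering area: ~{ratio:.0f}x")  # roughly 96x
```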

Fig. 2

a Hypothetical size of tarsier eyes when scaled to human dimensions. We used the mean interpupillary distance reported for humans (6.3 cm) to merge the eyes and align the optic and visual axes. b Scaling of the hypothetical eyes in relation to a human head. c Rendering to simulate the scaled eyes on a human user

Physical eyewear could demonstrate these principles, but virtual “lenses” enable the use of filters and interactive elements, essentially transcending physical limitations to create specialized environments for intentional exploration. Such a VRLE is exciting because it can better convey visual sensitivity at night by simulating the benefits of having high densities of rod photoreceptors (> 300,000/mm² in tarsiers versus ~176,000/mm² in humans; Collins et al. 2005). It can also simulate other aspects of tarsier vision. For example, the visual acuity of Philippine tarsiers is estimated at 8.89 c/deg (Veilleux and Christopher 2009), a minimum resolvable angle that can be simulated for human users (Caves and Johnsen 2017). Another distinguishing trait of tarsiers is red-green colorblindness; this trait varies among species, but each phenotype can be simulated (Melin et al. 2013a, b; Moritz et al. 2017). Lastly, a VRLE can simulate the visual field of tarsiers (186°), which we calculated by summing the visual angle of each eye (156.5°; Fig. 1c) and subtracting the binocular overlap (127°; Ross 2000).
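
As a minimal sketch of this arithmetic, the Python snippet below converts the quoted acuity into the angle subtended by one resolvable grating cycle and reproduces the visual-field calculation; the nominal human acuity of 60 c/deg is an assumption added for comparison and is not taken from the paper.

```python
# Minimal sketch of the arithmetic quoted above. The nominal human acuity of 60 c/deg
# is an assumption for comparison and does not come from the paper.
tarsier_acuity_cpd = 8.89  # Philippine tarsier (Veilleux and Christopher 2009)
human_acuity_cpd = 60.0    # assumed nominal human photopic acuity

def cycle_angle_deg(acuity_cpd: float) -> float:
    """Angle (degrees) subtended by one cycle of the finest resolvable grating."""
    return 1.0 / acuity_cpd

print(f"Tarsier: {cycle_angle_deg(tarsier_acuity_cpd):.3f} deg per cycle")  # ~0.112
print(f"Human:   {cycle_angle_deg(human_acuity_cpd):.3f} deg per cycle")    # ~0.017

# Visual field: two monocular visual angles minus the binocular overlap (Ross 2000).
per_eye_deg, binocular_overlap_deg = 156.5, 127.0
print(f"Tarsier visual field: {2 * per_eye_deg - binocular_overlap_deg:.0f} deg")  # 186
```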

Collectively, these traits of the tarsier visual system are predicted to result in superior vision (relative to humans) at night, and they are widely interpreted as adaptations for visual predation—tarsiers are exceptional among primates for being 100% faunivorous (Ross 2004; Moritz et al. 2014, 2017). For humans to appreciate the optical and selective advantages of tarsier eyes for accomplishing this visual challenge (navigation and predation in the dark), we conceived and developed a VRLE in the service of education, science communication, and existential reflection. The result—which we call Tarsier Goggles—can simulate human and tarsier vision under varying ambient lighting conditions.

Design of the VRLE

We developed several virtual environments for users to explore within a classroom setting, each of which allows users to alternate between human and tarsier vision, highlighting corresponding differences in brightness, acuity, and color vision. In VR, users begin in an open space (Fig. 3a, b) where they can choose to receive guidance, including tutorials for the interface controls and prompts for user behaviors. The first learning environment, “Matrix,” is a 3D lattice of beams that emphasizes human-tarsier differences in visual acuity and color vision (Fig. 3c, d). The second learning environment, “Labyrinth,” is a dark maze-like space that is nearly impossible to navigate under simulated human vision but readily navigable as a tarsier, demonstrating the advantages of tarsier visual sensitivity (Fig. 3e, f). The third learning environment, “Bornean Rainforest,” is modeled on the dipterocarp rainforests of Borneo at night (Fig. 3g, h). In this final setting, users can navigate between trees, applying knowledge from the previous environments to discover a new worldview: to experience the visual world of a tarsier and to appreciate why natural selection favored such large eyes. For orientation purposes, a two-dimensional (2D) video capture of this progression is available as Additional file 1; however, we recommend that instructors be present to guide first-time student users.

Fig. 3

Screen captures from each VRLE in Tarsier Goggles. Paired images simulate the vision of humans (left) and tarsiers (right) under identical twilight conditions, revealing differences in visual sensitivity (brightness), acuity, and color discrimination. a, b VR environment where users can elect to receive guidance. c, d The “Matrix” VRLE contains a lattice of beams that is intended to emphasize human-tarsier differences in visual acuity and color vision. e, f The “Labyrinth” VRLE is intended to emphasize human-tarsier differences in visual sensitivity by challenging users to navigate a dark (scotopic) environment. g, h The “Bornean Rainforest” VRLE enables naturalistic exploration within the understory of a lowland dipterocarp forest

Tarsier Goggles is available for free online (see Availability of data and materials). It is intended to enrich the classroom when the curriculum turns to optics or evolution, topics that have natural and enduring synergies. With even a basic awareness of the relevant scientific principles, students or members of the public can wear a VR headset and reflect on how they currently experience the world and how they might experience it through the eyes of another.

Development

At the time of writing, Tarsier Goggles was built in Unity 2018 with SteamVR for the HTC Vive and Vive Pro headsets. We used the Virtual Reality Toolkit (VRTK), an open-source library of scripts for virtual reality development, to create some menu options and user “teleportation”. Teleportation enables user navigation through each VR environment; it also simulates the vertical clinging-and-leaping behavior of tarsiers when users explore the Bornean Rainforest. We built all other functionality, such as the splash screens and the tutorial, ourselves. For visual effects, we used and modified Unity’s built-in post-processing stack as well as the Colorblind Effect asset created by Project Wilberforce. Our GameObjects, which include trees, grass, bushes, and other virtual structures, were either designed and built in Maya or downloaded from the Unity Asset Store.
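
For readers curious about what a red-green colorblindness filter involves conceptually, the following stand-alone Python sketch applies a generic LMS-space projection for protanopia; it is not the Project Wilberforce shader or any code from Tarsier Goggles, and the coefficient values are assumptions drawn from widely circulated Viénot-style daltonization code.

```python
# Illustrative only: a generic LMS-projection simulation of red-green dichromacy
# (protanopia). This is NOT the Project Wilberforce shader used in Tarsier Goggles;
# the coefficients are widely circulated Vienot-style values and are assumptions here.
import numpy as np

RGB2LMS = np.array([[17.8824,   43.5161,   4.11935],
                    [ 3.45565,  27.1554,   3.86714],
                    [ 0.0299566, 0.184309, 1.46709]])
LMS2RGB = np.linalg.inv(RGB2LMS)

# Protanope projection: the long-wavelength (L) response is reconstructed from M and S,
# collapsing the red-green axis.
PROTAN = np.array([[0.0, 2.02344, -2.52581],
                   [0.0, 1.0,      0.0],
                   [0.0, 0.0,      1.0]])

def simulate_protanopia(rgb: np.ndarray) -> np.ndarray:
    """rgb: float array of shape (..., 3), linear RGB in [0, 1]."""
    flat = rgb.reshape(-1, 3).T                   # 3 x N column vectors
    sim = LMS2RGB @ (PROTAN @ (RGB2LMS @ flat))   # RGB -> LMS -> protanope LMS -> RGB
    return np.clip(sim.T.reshape(rgb.shape), 0.0, 1.0)
```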

Assessments and discussion

Our initial assessments of Tarsier Goggles were ad hoc and opportunistic, stemming from five demonstrations across a wide range of settings and ages (Fig. 4). Demos were conducted during two on-campus events at Dartmouth that were open to students, faculty, and their families. In addition, we conducted a demo at a professional meeting of biological anthropologists, a group familiar with tarsier visual adaptations. In one case, we worked with middle school (6th grade) students visiting a nonprofit environmental education, research, and avian rehabilitation center in Vermont. Collectively, this broad mix of users (n ≈ 35) provided important feedback for improvements, many of which were implemented in subsequent iterations. Overall, pilot users experienced the effects that we intended—they integrated optical and biological concepts to enrich their understanding of eye evolution and tarsiers.

Fig. 4

Pilot testing of Tarsier Goggles was ad hoc and opportunistic, but it generated uniform marvel and constructive feedback from a wide range of users. a Adults without formal training in evolutionary biology tended to view the experience as reflective. b Professional biologists tended to focus on the anatomical and physiological parameters informing the VR simulation of tarsier vision. c Many middle school students valued the gaming aspects of Tarsier Goggles; i.e., overcoming visual ‘impairments’ to explore some learning environments. d Younger children had difficulty mastering the hand controls, and they sometimes attempted to reach for objects in the virtual environment

They also reflected on their experiences. As one Dartmouth professor of engineering put it, “We all think we are seeing what everyone else sees, but in fact we are all seeing something different. I feel connected to animals in a way I haven’t been before.” Another adult user in the profession of science education and outreach added, “It’s not just speculating. It’s actually having it in front of my eyes.” Notably, some children described brief sensations of disorientation, which is not uncommon in VR. Nausea from prolonged use of VR has been reported (Madary and Metzinger 2016), and it is something that educators should consider when using the technology. Best practices for using VRLEs are still in development.

Formal assessment of Tarsier Goggles occurred at an independent private secondary school in New Hampshire serving ~300 students. We focused on two courses, Anthropology (12th grade; 18 student users) and Inquiry to Science (9th grade; 8 student users). We used a central meeting room equipped with a large monitor, which allowed us to project images and orient students to the physical appearance of tarsiers (Fig. 1a) and the relative size of their eyes (Fig. 1b). We also played a brief, muted video of tarsier foraging behavior, in which it is evident that tarsiers are nocturnal visual predators. In the spirit of constructivism (Colburn 2000), there was no preparatory content related to visual anatomy, optics, or natural selection. Instead, we immersed students in the VRLE immediately, allowing them to experience and construct for themselves the adaptive advantages of having enormous eyes at night. Each user trial was 5 min; however, classmates were able to view the user’s learning environment via the monitor (Fig. 5). This configuration stirred considerable commentary and discussion among other students, enriching the learning experience beyond the individual user.

Fig. 5

A student-user experiences Tarsier Goggles during formal assessment. The student stands in front of a projection of the internal experience for classmates to view (photograph by Dustin Meltzer, reproduced with permission)

Our post-survey instrument contained 9 open-ended questions (Table 1), and it was administered immediately after use of the VRLE. The present discussion of student reactions (see Additional file 2) focuses on the suitability of our VRLE for fulfilling constructivist principles. For example, when students were asked how Tarsier Goggles differs from traditional classroom content, 22 of 26 (85%) respondents expressed a preference for the VRLE (cf. question 4). As one student put it, “Instead of hearing what life is like, you [can] actually experience it.” Other questions assessed whether students grasped the learning objective; i.e., that larger eyes capture more light, which increases visual sensitivity and is advantageous for seeing prey at night. We found that user responses varied according to the nuance of the question. For example, 22 of 25 (88%) respondents understood that large eyes are advantageous (cf. question 8), and 23 of 25 (92%) recognized that tarsier eyes are more sensitive than our own (cf. question 9), but only 15 of 25 (60%) could articulate why on the basis of optical principles (cf. question 7). One student put it this way: “Large eyes means more light can hit the retina? I’m not positive, I assume it allows more light in.” This inquisitive response, expressed as conjecture, is a testament to the principles of constructivism, and we agree with Colburn (2000) that post-demonstration discussion or lecture content should verify or elaborate on the knowledge constructed. Accordingly, we developed a potential lesson plan with an eye to the Next Generation Science Standards (see Additional file 3).

Table 1 Post-survey instrument together with our scoring criteria and results. Individual responses to each question are available in Additional file 2

Colburn (2000) argued that classroom demonstrations are at their best when they challenge student preconceptions, forcing them to account for discrepancies between their expectations and observations. Accordingly, we asked students if they were surprised by the differences in tarsier and human visual systems, and 12 of 18 (67%) respondents answered affirmatively (cf. question 6). We attribute this marginally equivocal result to our use of Fig. 1b as an orientation tool. One student said, “I wasn’t that surprised that their vision was that good. Their eyes are slightly larger than their brain so I would have thought their vision would be better.” Such a response reveals twin outcomes: first, it demonstrates the fulfillment of our learning objective; and second, it raises questions about the sequence of learning materials. For this student, prior exposure to Fig. 1b put Tarsier Goggles into the position of confirming rather than challenging expectations, which diminished its effect. An alternative approach in the spirit of constructivism would be to expose students to the VRLE and then prompt them to predict the proportions depicted in Fig. 1b (and perhaps Fig. 2c, which would require them to converge on the same calculations in the Appendix).

Taken together, we believe that Tarsier Goggles has the potential for widespread application. It is poised to complement middle and secondary school curricula in optics and biology, and it is a form of experiential learning that promotes user reflection. In some cases, reflection is the express goal of a VR simulation; for example, In the Eyes of the Animal (http://iteota.com) is a multisensory artistic exploration and technical achievement. An advantage of Tarsier Goggles is that it is designed to be integrated with educational curricula and targeted to address specific scientific concepts. It may even extend into museum settings, where people of all backgrounds could enhance their understanding of optics and natural selection via technology that might be new or generally unavailable to them. Further, this VR experience could be expanded to other senses, such as the exceptional hearing of some tarsiers (Ramsier et al. 2012), or to other visual systems. For example, we have experimented with incorporating the vision of strigiform owls as an example of convergent evolution with tarsiers (Moritz et al. 2014, 2017). Other applications could include simulating human visual impairments, which could promote greater empathy.

Conclusions

Applications of VR to science education and outreach are certain to increase greatly over the coming years. Here we developed a VR tool, Tarsier Goggles, to simulate the visual sensitivity, acuity, and red-green colorblindness of tarsiers, and the advantages of these traits under dim conditions. We found that user experiences of these traits were overwhelmingly positive, indicating an improved conceptual understanding of natural selection and visual optics. The tool also had a strongly reflective effect, with users describing an evolved outlook on their own perceptual systems, especially in comparison to those of species they had not considered before. These experiences are promising for future applications in education and personal use, with the potential to cast new light on the visual world of a fascinating animal.

Abbreviations

3D: three-dimensional
VR: virtual reality
VRLE: virtual reality learning environment
2D: two-dimensional

References


Authors’ contributions

SRG and NJD conceived the project and implemented the parameters for simulating tarsier vision. SRG, NG, SZ, SXG, SP, LG, EKY, KAH, HJS, KC, and SL designed the environments and user interface. NG, KC, SL, BKC, SP, EKY, and AW were developers and EL and TT directed design and development operations. SRG and NJD wrote the paper with contributions from MML, NG, KC, and SL. All authors read and approved the final manuscript.

Acknowledgements

We thank the following individuals for technical and practical advice throughout the course of this project: John Allman, Anna Autilio, Chris Collier, Andy Cooperman, Lorie Loeb, Theo Obbard, Callum Ross, and Michele Tine.

Competing interests

The authors declare that they have no competing interests.

Availability of data and materials

Tarsier Goggles is available for free at https://dali-lab.github.io/tarsier. Users should follow the link to the GitHub page, where they can read written instructions for using the VR application and download the zip folder that contains the .exe file and the data folder, both of which must be in the same directory to function. If the user experiences problems within VR, restart the application.

Funding

Funding was received from the Digital Arts, Innovation, & Leadership Lab (DALI) at Dartmouth College. Additional funding was received from the Claire Garber Goodman Fund, Department of Anthropology, Dartmouth College (Grant to SRG and NJD), the Kaminsky Research Fund, Division of Undergraduate Advising & Research, Dartmouth College (Junior Research Scholarship to SRG), and The William H. Neukom Institute for Computational Science, Dartmouth College (Travel Grant to SRG).

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Author information

Corresponding authors

Correspondence to Samuel R. Gochman or Nathaniel J. Dominy.

Additional files

Additional file 1.

2D video progression of the learning environments in Tarsier Goggles.

Additional file 2.

Data containing user responses to questions listed in Table 1. Responses that neglected to answer the question at hand were omitted from the data.

Additional file 3.

Lesson plan to accompany Tarsier Goggles.

Appendix


Scaling eye dimensions

We assumed a spherical eye geometry per Schultz (1940), and we used the following formula to scale the eye proportions of the tarsier to human dimensions:

$$\begin{aligned} \text{TD}_{\text{H,S}} &= 2\sqrt[3]{\frac{3}{4\pi}\,\text{V}_{\text{H,S(eye)}}} = 2\sqrt[3]{\frac{3}{4\pi}\,\frac{\text{V}_{\text{T(eye)}}\,\text{V}_{\text{H(brain)}}}{\text{V}_{\text{T(brain)}}}} = 2\sqrt[3]{\frac{3}{4\pi}\,\frac{(2.03\ \text{cc})\,({\sim}1400\ \text{cc})}{2.14\ \text{cc}}} = 13.63758\ \text{cm} \\ \text{CD}_{\text{H,S}} &= \frac{\text{CD}_{\text{T}}}{\text{TD}_{\text{T}}}\,\text{TD}_{\text{H,S}} = \frac{1.59\ \text{cm}}{1.85\ \text{cm}}\,(13.63758\ \text{cm}) = 11.75271\ \text{cm} \end{aligned}$$

where

TD_{H,S} = scaled transverse eye diameter for a human
V_{H,S(eye)} = scaled eye volume for a human
V_{T(eye)}, V_{T(brain)} = eye and brain volumes of the tarsier
V_{H(brain)} = brain volume of a human
CD_{H,S} = scaled corneal diameter for a human
CD_T = corneal diameter of the tarsier
TD_T = transverse eye diameter of the tarsier.
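
A minimal Python sketch of the same calculation, useful for checking the arithmetic numerically, is given below; it assumes only the spherical geometry and the values quoted above.

```python
# Minimal numerical check of the scaling above, assuming spherical eye geometry
# (Schultz 1940) and the volumes and diameters quoted in the text.
import math

V_T_eye, V_T_brain = 2.03, 2.14  # tarsier eye and brain volumes (cc)
V_H_brain = 1400.0               # approximate human brain volume (cc)
CD_T, TD_T = 1.59, 1.85          # tarsier corneal and transverse eye diameters (cm)

V_H_S_eye = V_T_eye * V_H_brain / V_T_brain               # scaled human eye volume (cc)
TD_H_S = 2 * (3 * V_H_S_eye / (4 * math.pi)) ** (1 / 3)   # diameter of a sphere of that volume
CD_H_S = (CD_T / TD_T) * TD_H_S                           # cornea scaled in proportion

print(f"Scaled transverse diameter: {TD_H_S:.2f} cm")  # ~13.64 cm
print(f"Scaled corneal diameter:    {CD_H_S:.2f} cm")  # ~11.7 cm
```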

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Gochman, S.R., Morano Lord, M., Goyal, N. et al. Tarsier Goggles: a virtual reality tool for experiencing the optics of a dark-adapted primate visual system. Evo Edu Outreach 12, 9 (2019). https://doi.org/10.1186/s12052-019-0101-6

