Vision science is the scientific study of vision in humans and other organisms. It is a discipline in its own right while also overlapping with a wide range of other fields, including ophthalmology and optometry, neuroscience, psychology, physics, computer vision, artificial intelligence, and computer graphics.

The interdisciplinary nature of vision science allows vision scientists to develop a deep understanding of vision through the lens of many different fields, borrowing ideas from seemingly unrelated disciplines to address fundamental questions about visual perception while driving scientific and technological innovation along the way.

Given the vastness of the topics that vision science encompasses, this course will by no means cover every aspect of it. It will, however, illustrate the beauty of interdisciplinary research, demonstrating, for instance, how one of the most popular and widely used artificial neural network architectures in computer vision grew out of studies of the early visual cortex of the cat. Indeed, it is through the seamless integration of behavioural science, biological science, neuroscience, and computer science that we have reached today's level of technological advancement.
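As a concrete illustration of that lineage (not part of the course materials): the oriented receptive fields Hubel and Wiesel described in cat early visual cortex behave much like the learned filters in the first layer of a convolutional neural network. The sketch below hand-builds one such oriented filter, a Sobel-style vertical-edge kernel chosen here purely for illustration, and slides it over a toy image:

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid-mode 2D sliding-window correlation, NumPy only."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# An oriented "receptive field": responds to vertical edges, loosely
# analogous to a simple cell tuned to vertical orientation.
vertical_edge = np.array([[-1.0, 0.0, 1.0],
                          [-2.0, 0.0, 2.0],
                          [-1.0, 0.0, 1.0]])  # Sobel-style kernel

# Toy image: dark left half, bright right half -> one vertical edge.
image = np.zeros((5, 5))
image[:, 3:] = 1.0

response = convolve2d(image, vertical_edge)
print(response)  # nonzero only where the preferred orientation is present
```

A horizontal-edge kernel (the transpose) would stay silent on this image, which is the orientation selectivity that first-layer CNN filters end up learning on their own.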
Course Content
In this Vision Science course, you will learn traditional and contemporary research methods and findings in vision research, spanning computational neuroscience, computational vision, behavioural analysis, computer vision, machine learning, and more.

The course consists of lectures on each of the two modules, Computational and Neuroscientific Approaches to Visual Perception and Ecological Approach to Visual Perception, along with supplementary lectures to expand your knowledge and interdisciplinary lens within Vision Science.

Research methodologies such as Bayesian deep learning, AI, and virtual reality will be covered. This course aims to broaden your critical thinking, an essential skill for a postgraduate researcher, deepen your analytical skills, and give you a better understanding of cutting-edge topics and research methods in international vision science research. The course is designed to teach you not what to think but how to think as a scientific researcher.
The biological foundation of human visual perception
  • Human visual pathway
  • The brain and the visual cortex

Computational neuroscience
  • Bayesian statistics and the Bayesian brain

Biological and artificial neural networks
  • Early visual processing and convolutional neural networks

Depth and 3D shape perception
  • Optic flow and structure-from-motion
  • Stereopsis

Bei Xiao

Dr Bei Xiao is an Assistant Professor in Computer Science at American University. Her research focuses on how the human visual system estimates the physical properties of objects in our surroundings; another focus is applying findings from human perception and cognition to develop robust computer vision algorithms. Specifically, she studies the perception and recognition of material properties, 3D shape, and tactile properties of objects in dynamic scenes, using a combination of human psychophysics, crowd-sourcing, haptic force feedback, computer graphics, machine learning, image processing, and VR techniques. Undergraduate students interested in perception, VR, machine learning, computer graphics, or human–computer interaction are welcome to talk to her about possible projects in the lab; those interested in pursuing PhD studies with her should contact her and apply to the BCCN graduate program.

Dimitris Pinotsis

Dr Dimitris Pinotsis is an Associate Professor at the Centre for Mathematical Neuroscience and Psychology and the Department of Psychology at City, University of London.

He is also a Research Affiliate at the Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences at the Massachusetts Institute of Technology (MIT), where he worked between 2016 and 2018. Dimitris has also been a Senior Research Fellow at UCL's Wellcome Trust Centre for Neuroimaging.

In recent work, Dimitris uses deep neural networks and hierarchical Bayesian inference to understand the causes of neurological and psychiatric disorders. He also draws on brain recordings to build better artificial intelligence algorithms.

Derya Akkaynak

Dr Derya Akkaynak is a mechanical engineer at Harbour Branch Oceanographic Institute, and was previously a Research Fellow in the Perceptual Science Group at the Massachusetts Institute of Technology (MIT) and at Princeton University.

She received the 2019 Blavatnik Young Scientist Award for significant breakthroughs and advancements in computer vision and underwater imaging technologies, resolving a fundamental technological problem in the computer vision community: the reconstruction of lost colours and contrast in underwater photographic imagery, with real implications for oceanographic research.
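Her published method is more sophisticated, but the core difficulty can be sketched with a simplified, textbook-style underwater image-formation model: light from the scene is attenuated exponentially with range while backscattered "water colour" is added. Inverting that model recovers the original colours; the per-channel coefficients below are hypothetical, whereas real methods must estimate them from the imagery itself:

```python
import numpy as np

def restore_color(observed, backscatter, beta, depth):
    """Invert a simplified underwater image-formation model:
        observed = true * exp(-beta * z) + backscatter * (1 - exp(-beta * z))
    Given per-channel attenuation (beta), backscatter, and range z,
    recover the unattenuated colour. Illustrative only."""
    t = np.exp(-beta * depth)  # per-channel transmission
    return (observed - backscatter * (1 - t)) / t

# Hypothetical RGB pixel at 5 m range: red attenuates fastest underwater.
beta = np.array([0.40, 0.12, 0.08])          # attenuation per metre (R, G, B)
backscatter = np.array([0.05, 0.20, 0.30])   # water colour added by scattering
true_color = np.array([0.8, 0.5, 0.3])

# Forward-simulate an observation, then invert it.
t = np.exp(-beta * 5.0)
observed = true_color * t + backscatter * (1 - t)
recovered = restore_color(observed, backscatter, beta, 5.0)
print(recovered)  # matches true_color up to floating-point error
```

The hard part in practice is exactly what this sketch assumes away: beta, the backscatter, and the range are unknown and must be inferred from the photograph.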

She has also received the 2018 International Ocean Colour Coordinating Group Summer Lecture Series Travel Award (France) and the 2018 IEEE CVPR Women in Computer Vision Travel Award.

Mahipal Choudhary

Dr Mahipal Choudhary is a Scientific Consultant at NIRx Medizintechnik GmbH, Berlin. With a PhD and a Master's in Biomedical Engineering, he has a diverse background, ranging from the research and development of neuromodulation devices to functional imaging of the brain. His current role at NIRx involves consulting and providing technical support to users of NIRx functional near-infrared spectroscopy platforms.

Shaiyan Keshvari

Dr Shaiyan Keshvari is a Research Fellow at the York University Centre for Vision Science and was previously a Research Fellow in the MIT Perceptual Science Group, Computer Science & Artificial Intelligence Laboratory (CSAIL).

His goal is to understand the computational mechanisms that underlie human vision. His approach tests modern machine vision and graphics algorithms as models of human behaviour. Apart from elucidating biological vision, his work has the potential to improve computer vision applications.
He brings together mathematical and machine learning methods with behavioral and neuroscientific data to better our understanding of the mind, and in particular visual perception.

His current research program aims to answer three main questions:
  • What computational mechanism does our visual system implement to piece together and encode the shape of objects?
  • How similar are humans and modern learning-based computer vision algorithms in their extraction and use of object shape?
  • How does vision encode both objects and textures, particularly in the periphery where fidelity is reduced?

Michael Wang

Dr Michael Wang is a Research Fellow at the Centre for Vision Research at York University. He is also a member of the Computer Society of the Institute of Electrical and Electronics Engineers (IEEE), the International Vision Science Society, and the International Society for Ecological Psychology.

He is currently using virtual reality (VR) to study the relationship between the 3D space depicted on a screen and the space the observer occupies. He aims to use the findings from this line of research to improve existing telecommunication technology and provide an increased sense of presence when interacting with digital content on a screen.

Current projects include: Human Identity Recognition – using machine learning to decompose high-dimensional human movement data with Python, using the results to formulate viable hypotheses about human identity recognition, and designing and constructing behavioural experiments with JavaScript.
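A minimal sketch of the decomposition step described above, assuming motion-capture-style data arranged as frames by marker coordinates (the data here are random placeholders, and PCA via SVD is one common choice of decomposition, not necessarily the one used in the project):

```python
import numpy as np

def pca(data, n_components):
    """Decompose high-dimensional motion data (frames x coordinates) into
    a few principal components via SVD -- a common first step before
    forming hypotheses about which movement features carry identity."""
    centered = data - data.mean(axis=0)
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    scores = U[:, :n_components] * S[:n_components]  # low-dim trajectories
    components = Vt[:n_components]                   # movement "basis" patterns
    return scores, components

# Placeholder data: 200 frames of 30 marker coordinates.
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 30))

scores, components = pca(data, n_components=3)
print(scores.shape, components.shape)  # (200, 3) and (3, 30)
```

Each row of `components` is a whole-body movement pattern, and `scores` gives how strongly each pattern is expressed over time, which is the low-dimensional signal one would then relate to identity.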

Direction and Distance Perception in VR – constructing behavioural experiments in VR using Unity and C#, investigating how various types of visual information affect direction and distance perception and its implication for video-based telecommunication, and developing a computational model in Python to fit behavioural results.
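A computational model of the kind mentioned, fitted to behavioural results with Python, might look like the following minimal sketch: a logistic psychometric function fitted to hypothetical "farther" responses by grid-search maximum likelihood (the actual model and data in the project may differ):

```python
import numpy as np

def logistic(x, alpha, beta):
    """Psychometric function: probability of a 'farther' response."""
    return 1.0 / (1.0 + np.exp(-(x - alpha) / beta))

def fit_psychometric(x, n_yes, n_total):
    """Fit threshold (alpha) and slope (beta) by maximizing the binomial
    log-likelihood over a coarse grid -- a minimal stand-in for the kind
    of model fitting used on distance-judgement data."""
    best, best_ll = None, -np.inf
    for alpha in np.linspace(x.min(), x.max(), 101):
        for beta in np.linspace(0.05, 2.0, 80):
            p = np.clip(logistic(x, alpha, beta), 1e-9, 1 - 1e-9)
            ll = np.sum(n_yes * np.log(p) + (n_total - n_yes) * np.log(1 - p))
            if ll > best_ll:
                best, best_ll = (alpha, beta), ll
    return best

# Hypothetical session: probe distances (m) and 'farther' responses out of 20.
x = np.array([1.0, 1.5, 2.0, 2.5, 3.0])
n_yes = np.array([1, 4, 10, 17, 19])

alpha, beta = fit_psychometric(x, n_yes, 20)
print(alpha, beta)  # alpha near 2.0 m: the estimated point of subjective equality
```

Comparing the fitted alpha and beta across VR viewing conditions is how one would quantify whether a given visual cue shifts perceived distance or merely adds noise.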

Andrew Clement

Dr Andrew Clement is a cognitive psychologist who specializes in visual perception, attention, and memory for objects and scenes. In his research, he uses a combination of behavioral methods, eye tracking, and event-related potentials to examine how object representations influence our perception, attention, and memory for visual information.

He is a postdoctoral research associate at Texas A&M University working with Dr. Brian A. Anderson. He previously completed a postdoctoral fellowship at the University of Toronto with Dr. Jay Pratt, and completed his Ph.D. at the University of Notre Dame under the supervision of Dr. James R. Brockmole.

He is also currently an organizer for the Object Perception, Attention, and Memory (OPAM) conference. OPAM is a satellite conference of the Psychonomic Society’s annual meeting, and provides an excellent opportunity for graduate students, postdocs, and other early career researchers to present their research.