MACHINE LEARNING FOR SOCIALLY DRIVEN TECHNOLOGICAL INNOVATION
Overview
In this Machine Learning for Social Good course, you will learn foundational and cutting-edge topics in machine learning, along with contemporary research methods and findings, spanning supervised and unsupervised learning, neural networks, shape recognition, Bayesian deep learning, computer vision, active learning, and more.

This course aims to broaden your critical thinking, deepen your analytical skills, and give you a better understanding of cutting-edge topics and research methods in machine learning internationally. The course is designed not to teach you what to think, but how to think as a scientific researcher.
Course Content
The course comprises lectures, tutorials, machine learning class projects, and a final research design sprint assignment. The lectures cover machine learning topics and their applications across industries and sectors, from healthcare to sustainable manufacturing. The class projects reinforce the fundamental course material through practical, guided machine learning sessions.
The final research design sprint assignment follows the design sprint model used by Google Ventures to help businesses launch new products and services. The assignment focuses on answering the research question: how can machine learning applications better lives in cities? Working in a team, you will engage with technologies such as AI, IoT, automation, computer vision, and speech recognition to devise a proposed solution for a particular industry.
Lectures
Tutorials
ML Class Projects
Research Design Sprint
Tutors

Jens Rittscher

Jens Rittscher is Professor of Engineering Science at the University of Oxford. His appointment is held jointly between the Institute of Biomedical Engineering and the Nuffield Department of Medicine. He is a group leader at the Target Discovery Institute and a member of the Ludwig Institute for Cancer Research.

Rittscher's research aims to enable biomedical imaging through the development of new algorithms and novel computational platforms. The current focus of his research is to improve mechanistic understanding of cancer and patient care through quantitative analysis of image data.

Mateja Kovacic

Mateja Kovacic is an Assistant Professor at Hong Kong Baptist University and a Research Fellow at the Oxford School of Global and Area Studies, University of Oxford. Her research focuses on social robots and AI, with particular attention to robots and automated systems in urban and social spaces. It covers the whole range of non-industrial robot experimentation, from autonomous vehicles and delivery drones to healthcare, social, and service robots. She is currently working on three projects: the politics of humanoid robots in Japan; urban living labs for robot experiments, with a focus on Dubai, Singapore, California, and Japan; and medical delivery drones in sub-Saharan Africa. She uses hermeneutics and semiotics of technology to address both social robots and their sociocultural backgrounds.

Kovacic is broadly specialised in the history, anthropology, and philosophy of technology and science, with a focus on Japan. Her current projects include the history of science, technology, and popular culture in Japan; the biopolitics of traditional Japanese craftsmanship; global catastrophic risks; and material and visual culture as links between consumerism, natural history, and ecology.

Farah Shamout

Farah Shamout is an Assistant Professor and Emerging Scholar of Computer Engineering at NYU Abu Dhabi, and was a Rhodes Scholar who completed her PhD at the University of Oxford. She leads the Clinical AI Lab at NYU Abu Dhabi and has been an invited speaker at MIT since 2019.

At Oxford, she worked on the HAVEN project, which aims to produce a hospital-wide alerting system that continuously assesses patients using Bayesian nonparametrics, deep learning, and other machine learning methods. The project involves large-scale data acquisition from across the Oxford University Hospitals NHS Foundation Trust and Portsmouth NHS Foundation Trust.

Her current research interests include developing machine learning models to inform decision-making in healthcare, representation learning for large-scale multi-modal data, interpretability, and the fairness and ethics of AI. While at Oxford, Shamout taught in the inaugural UAE-Oxford Artificial Intelligence Program and worked on global data commons and digital health policy.

Derya Akkaynak

Dr Derya Akkaynak is a mechanical engineer at Harbor Branch Oceanographic Institute, and was previously a Research Fellow in the Perceptual Science Group at the Massachusetts Institute of Technology (MIT) and at Princeton University.

She received the 2019 Blavatnik Young Scientist Award for significant breakthroughs and advancements in computer vision and underwater imaging technologies, resolving a fundamental technological problem in the computer vision community: the reconstruction of lost colours and contrast in underwater photographic imagery, which will have real implications for oceanographic research.

She has also received the 2018 International Ocean Colour Coordinating Group Summer Lecture Series Travel Award (France) and the 2018 IEEE CVPR Women in Computer Vision Travel Award.

Her research has been reported on by The New York Times, National Geographic, EUMETSAT, In The Deep, and Hakai Magazine.

Bogdan Knezevic

Dr Bogdan Knezevic is the Co-Founder of and Chief Advisor to the Rhodes Incubator at Oxford, Artificial Intelligence Lead at the Creative Destruction Lab, and Programme Lead at Entrepreneur First, a $140M fund and the world's leading technology talent investor.

His research focuses on using genomic data (e.g. GWAS and eQTL) to develop bioinformatic pipelines for drug target prediction and prioritization. By aggregating existing drug data (target, type, development stage, etc.), he aims to understand the underlying genetic factors that may be predictive of successful drug development. By pooling various data sources and adding multiple layers and types of parameters, he is looking to optimize the discovery and prioritization of novel drug targets. Ultimately, his work aims to inform more efficient drug development by drawing on the wealth of genetic and genomic data available today.

Bojan Komazec

Bojan Komazec is the tutor for the Artificial Intelligence: Cloud and Edge Implementations course at the Department for Continuing Education, University of Oxford. He has been working in the IT industry for over 15 years.

He currently holds the positions of Director at iotosphere and Engineering Manager at Avast Software, where he develops various security and privacy products.

Bojan's interests span from code craftsmanship and cyber security to Artificial Intelligence and the Internet of Things. He is an active blogger and a speaker at several IT meetup groups, where he enjoys sharing his experience and knowledge.

Shaiyan Keshvari

Dr Shaiyan Keshvari is a Research Fellow at the York University Centre for Vision Research, and was previously a Research Fellow in the Perceptual Science Group at MIT's Computer Science & Artificial Intelligence Laboratory (CSAIL).

His goal is to understand the computational mechanisms that underlie human vision. His approach tests modern machine vision and graphics algorithms as models of human behaviour. Apart from elucidating biological vision, his work has the potential to improve computer vision applications.

He brings together mathematical and machine learning methods with behavioral and neuroscientific data to better our understanding of the mind, and in particular visual perception. His current research program aims to answer three main questions: 
  • What computational mechanism does our visual system implement to piece together and encode the shape of objects? 
  • How similar are humans and modern learning-based computer vision algorithms in their extraction and use of object shape? 
  • How does vision encode both objects and textures, particularly in the periphery where fidelity is reduced?

Rob Phillips

Professor Rob Phillips is a Senior Tutor (Associate Professor) at the Royal College of Art, Design Products + Futures Programme.

Rob is a Product Designer with 18 years' experience working commercially and independently. His broad experience includes 5 international patents, large-scale manufacturing with over 3 million products in retail, life-saving fire and ballistics equipment, revolutionary material developments, user-oriented proposals, and involvement in design for Team GB.

During his PhD he developed the relationship between Open Design and Citizen Science, resulting in methods taught at the MIT Media Lab, Stanford, Cornell, and the BBC.

His latest collaborative project with Goldsmiths' Interaction Design Research Studio uses distributed design to foster people's interest in the natural world around them, and was broadcast on the BBC's Springwatch programme.

Michael Wang

Dr Michael Wang is a Research Fellow at the Centre for Vision Research at York University. He is also a member of the Computer Society of the Institute of Electrical and Electronics Engineers (IEEE), the International Vision Science Society, and the International Society for Ecological Psychology.

He is currently using virtual reality (VR) to study the relationship between the 3D space depicted on a screen and the space an observer occupies. He wants to use the findings of this line of research to improve existing telecommunication technology and provide an increased sense of presence when interacting with digital content on screen.

Current projects include: Human Identity Recognition – using machine learning in Python to decompose high-dimensional human movement data, using the results to formulate viable hypotheses about human identity recognition, and designing and constructing behavioural experiments in JavaScript.

Direction and Distance Perception in VR – constructing behavioural experiments in VR using Unity and C#, investigating how various types of visual information affect direction and distance perception and their implications for video-based telecommunication, and developing a computational model in Python to fit behavioural results.