Andrea Dunbar - Head of Edge AI & Vision - CSEM

Jury member of the BCN Innovation Prize
Lecturer of digitalization at EPFL

Andrea Dunbar was born in New Hampshire and spent the first ten years of her life in the USA before crossing the Atlantic to settle near London. At 18, she took a year off to travel, first back to the United States to drive from Boston to San Francisco, and then to backpack around Guatemala while learning Spanish. On her return to Great Britain, she began studying physics at the University of St Andrews in Scotland, where she obtained her bachelor's degree.

She followed this with a spell as a certified broker at Lloyd's in London before returning to physics to complete her PhD at Trinity College Dublin on cavity polaritons, a mixed light-matter state system. It was during this time that she first came into contact with EPFL, which supplied samples for her research. After several exchanges with EPFL, she took the plunge in 2003 and moved to Lausanne to work there as a research assistant, happy to seize the opportunity to learn French. At EPFL she specialized in the field of photonic crystals. In 2007, she was offered the opportunity to join CSEM, and in 2017 she was appointed Head of the Embedded Vision sector. This sector, now renamed "Edge AI & Vision", is a multidisciplinary group of 16 people spanning data science, optics, imagers, software, system architecture, and data processing.

What are the main challenges facing Edge Artificial Intelligence and Vision?

Working at the edge means working at the sensor level: data is processed locally rather than remotely in the cloud, so all the intelligence is embedded close to the sensor. Working at the edge allows us to:

  • Create artificial intelligence algorithms tailored to an application's needs, particularly for high-speed (low-latency), low-power, and/or highly secure systems.
  • Combine these algorithms with sensors (vision, audio, etc.) and microcomputers (processor and communication) to process the data locally. Raw data then never needs to be stored or transmitted, allowing for real-time, lightweight systems in which privacy is maintained.
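The principle behind these two points can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the function names and the brightness check are invented, not CSEM's implementation): inference runs next to the sensor, and only tiny event messages ever leave the device.

```python
def detect_event(frame, threshold=0.5):
    """Placeholder for embedded inference: a real edge system would run a
    compact neural network next to the sensor instead of this brightness check."""
    pixels = [p for row in frame for p in row]
    return sum(pixels) / (len(pixels) * 255.0) > threshold

def edge_loop(frames):
    """Process each frame locally and emit only small event messages;
    raw frames are never stored or transmitted, preserving privacy."""
    for i, frame in enumerate(frames):
        if detect_event(frame):
            yield {"frame": i, "event": "alert"}  # a few bytes, not megapixels

# Three synthetic 4x4 8-bit frames; only the bright one triggers an alert.
dark = [[0] * 4 for _ in range(4)]
bright = [[200] * 4 for _ in range(4)]
events = list(edge_loop([dark, bright, dark]))
print(events)  # [{'frame': 1, 'event': 'alert'}]
```

The key design point is the asymmetry of the output: megabytes of raw imagery stay at the sensor, while the transmitted alert is a few bytes, which is what makes low-power, low-latency, privacy-preserving systems possible.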

A first example is the Witness solution, a fully autonomous camera that can be attached like a sticker. This world first opens up new possibilities in the Internet of Things (IoT) and monitoring sensors. The camera is solar powered and has a CMOS image sensor consuming less than 700 µW. Thanks to embedded processing, no images need to be transmitted, while its purpose, such as raising an alert if someone has fallen, is unaffected.

Optical character recognition (i.e. recognizing letters and numbers) in industry is a good example of where embedded intelligence can be advantageous. Here, requirements for fast and secure recognition with >99.9% accuracy in sometimes extreme environments (dust, dirt, strong light, etc.) create a challenge. Having a dedicated imaging setup and intelligence embedded at the edge allows us to meet these challenges.

Within the group, we are also working on multispectral imaging systems. These systems allow us to distinguish colours more finely and to detect infrared light, and thus to extract information invisible to the human eye. They can be used, for example, in the detection of skin cancer, the sorting of waste during recycling, or quality control in food production. In addition to vision systems, we add other types of sensors (e.g. sound, temperature, and detection of gases such as CO2). Thanks to these multidimensional measurements, we have more information and can make more robust predictions.

I am convinced that these edge-based systems respond to two major challenges in our society today:

  • The amount of energy used to transmit data: for example, a 3-minute video clip downloaded 2 billion times, like the song Despacito, corresponds to the annual energy consumption of 40,000 houses in the USA. Processing the data locally would allow a great reduction in energy consumption.
  • Data protection: a Google search passes through several data centres located around the world, with different data protection laws in each country. When information is processed locally, this question no longer arises.

Are you currently working on aviation projects?

Yes, indeed: we are creating an intelligent vision system that can detect pilot fatigue. It is already installed at Honeywell, and we are in the testing phase. It is notoriously difficult for people to tell whether they are tired, so we are developing AI algorithms that detect the pilot's level of drowsiness in real time from features extracted by a computer vision system. That level of drowsiness is determined by medical experts using physiological signals, e.g. EEG (electroencephalogram), ECG (electrocardiogram), and electrodermal activity, which serve as the reference for training the algorithms.
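The training setup described above can be sketched as a tiny supervised-learning example. Everything here is hypothetical and simplified: the feature names (blink rate, eyelid closure), the toy data, and the use of a small logistic-regression classifier are illustrative assumptions, not the project's actual model. The key idea it shows is that vision-derived features are paired with expert labels derived from the physiological reference signals.

```python
import math

# Hypothetical vision-derived features per time window:
# [normalized blink rate, mean eyelid-closure fraction].
# Labels (0 = alert, 1 = drowsy) stand in for the expert scores derived
# from reference physiological signals (EEG, ECG, electrodermal activity).
features = [[0.30, 0.05], [0.40, 0.08], [0.28, 0.04],
            [0.90, 0.35], [1.00, 0.40], [0.88, 0.33]]
labels = [0, 0, 0, 1, 1, 1]

def sigmoid(z):
    z = max(min(z, 60.0), -60.0)  # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(xs, ys, lr=0.5, epochs=5000):
    """Fit a small logistic-regression classifier by stochastic gradient descent."""
    w, b = [0.0] * len(xs[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log-loss w.r.t. the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict_drowsy(w, b, x):
    """Classify a new time window from its vision-derived features."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5

w, b = train_logistic(features, labels)
print(predict_drowsy(w, b, [0.95, 0.38]))  # high blink rate + eyelid closure
print(predict_drowsy(w, b, [0.32, 0.06]))  # features typical of an alert pilot
```

Once trained, only the lightweight classifier runs in the cockpit on camera-derived features; the physiological instrumentation is needed only to build the labelled reference dataset.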

In another aviation project called PEGGASUS, in collaboration with SWISS and ETHZ, we have developed a new type of Human Machine Interface (HMI) that pushes the boundaries of cockpit avionics and provides pilots with new ways of interacting with the onboard system. In this way pilots can easily and efficiently adapt to the changing and complex needs of 21st century avionics.

In these Clean Sky projects, we hope to support pilots in their work, both during long journeys where they have to remain attentive for several hours, and during take-off and landing when they have to carry out many tasks quickly.

What collaborations do you currently have in the region?

In Neuchâtel, we are working with G-ray to improve the detection of X-rays for mammography. With our technologies, G-ray is able to offer safer, more readable and more economical radiographic solutions with high-resolution, high-speed colour imaging.

We are also working with the EPFL startup GlobalID to identify people through their veins, using a biometric system coupled with imaging. CSEM brings to this project expertise in optics, electronics, and micro-processing.

We have also collaborated with TESA-Hexagon on a new generation of probes for very high-precision coordinate measuring machines (CMMs), which ensure the quality of high-precision mechanical parts such as turbine blades or implantable prostheses. CSEM's spaceCoder technology enabled the development of a probe based on an opto-electro-mechanical micro measuring system built around a dedicated integrated circuit.

What do you think of the Swiss and Neuchâtel innovation ecosystem?

In my opinion, Switzerland's innovation policy is very well thought out. Schools and institutes are involved at different levels of the value chain (basic or applied research), and federal tools such as Innosuisse encourage collaboration between the academic and industrial worlds.

The concentration of innovation players in Neuchâtel is a real plus. Getting in touch is quick and easy, not to mention the canton's exceptional quality of life. I believe nevertheless there is still work to be done to make the most of our region's attractiveness and potential.