Converting Static Image Datasets to Spiking Neuromorphic Datasets Using Saccades
Creating datasets for Neuromorphic Vision is a challenging task. A lack of available recordings from Neuromorphic Vision sensors means that data must typically be recorded specifically for dataset creation rather than collecting and labelling existing data. The task is further complicated by a desire to simultaneously provide traditional frame-based recordings to allow for direct comparison with traditional Computer Vision algorithms. Here we propose a method for converting existing Computer Vision static image datasets into Neuromorphic Vision datasets using an actuated pan-tilt camera platform. Moving the sensor rather than the scene or image is a more biologically realistic approach to sensing and eliminates timing artifacts introduced by monitor updates when simulating motion on a computer monitor. We present conversion of two popular image datasets (MNIST and Caltech101) which have played important roles in the development of Computer Vision, and we provide performance metrics on these datasets using spike-based recognition algorithms. This work contributes datasets for future use in the field, as well as results from spike-based algorithms against which future works can compare. Furthermore, by converting datasets already popular in Computer Vision, we enable more direct comparison with frame-based approaches.
Main Authors: | Garrick Orchard, Ajinkya Jayawant, Gregory Kevin Cohen, Nitish Thakor |
---|---|
Format: | Article |
Language: | English |
Published: | Frontiers Media S.A., 2015-11-01 |
Series: | Frontiers in Neuroscience |
Subjects: | Benchmarking; sensory processing; Computer Vision; Neuromorphic vision; Datasets |
Online Access: | http://journal.frontiersin.org/Journal/10.3389/fnins.2015.00437/full |
author | Garrick Orchard, Ajinkya Jayawant, Gregory Kevin Cohen, Nitish Thakor |
author_sort | Garrick Orchard |
collection | DOAJ |
description | Creating datasets for Neuromorphic Vision is a challenging task. A lack of available recordings from Neuromorphic Vision sensors means that data must typically be recorded specifically for dataset creation rather than collecting and labelling existing data. The task is further complicated by a desire to simultaneously provide traditional frame-based recordings to allow for direct comparison with traditional Computer Vision algorithms. Here we propose a method for converting existing Computer Vision static image datasets into Neuromorphic Vision datasets using an actuated pan-tilt camera platform. Moving the sensor rather than the scene or image is a more biologically realistic approach to sensing and eliminates timing artifacts introduced by monitor updates when simulating motion on a computer monitor. We present conversion of two popular image datasets (MNIST and Caltech101) which have played important roles in the development of Computer Vision, and we provide performance metrics on these datasets using spike-based recognition algorithms. This work contributes datasets for future use in the field, as well as results from spike-based algorithms against which future works can compare. Furthermore, by converting datasets already popular in Computer Vision, we enable more direct comparison with frame-based approaches. |
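The converted recordings this abstract describes are event streams rather than frames, distributed as the N-MNIST and N-Caltech101 datasets. As a minimal illustrative sketch (not part of this record), assuming the 5-byte-per-event binary layout documented on the dataset distribution page — x address, y address, a polarity bit, and a 23-bit timestamp in microseconds — a Python loader might look like the following; the file path in the usage line is hypothetical:

```python
import numpy as np

def read_events(path):
    """Decode one event-stream recording into per-event arrays.

    Assumes the 5-byte event layout documented for N-MNIST /
    N-Caltech101: byte 0 = x address, byte 1 = y address,
    byte 2 bit 7 = polarity, remaining 23 bits = timestamp (us).
    """
    raw = np.fromfile(path, dtype=np.uint8)
    raw = raw[: (raw.size // 5) * 5].reshape(-1, 5)  # one row per event

    x = raw[:, 0].astype(np.uint16)                  # pixel column
    y = raw[:, 1].astype(np.uint16)                  # pixel row
    polarity = (raw[:, 2] >> 7) & 1                  # 1 = ON, 0 = OFF
    timestamp = ((raw[:, 2].astype(np.uint32) & 0x7F) << 16
                 | raw[:, 3].astype(np.uint32) << 8
                 | raw[:, 4].astype(np.uint32))      # us from recording start
    return x, y, polarity, timestamp

# Hypothetical usage: inspect one converted MNIST digit recording.
x, y, p, t = read_events("Train/0/00002.bin")
print(f"{len(t)} events, {p.sum()} ON / {(p == 0).sum()} OFF, "
      f"duration {t[-1] / 1e3:.1f} ms")
```

Because the sensor itself moves during recording, both ON and OFF events are generated by static scene edges, so the timestamps span the full saccade motion rather than clustering at frame boundaries.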
format | Article |
id | doaj.art-5f750ce9373245b091c2f7372539870d |
institution | Directory Open Access Journal |
issn | 1662-453X |
language | English |
publishDate | 2015-11-01 |
publisher | Frontiers Media S.A. |
record_format | Article |
series | Frontiers in Neuroscience |
doi | 10.3389/fnins.2015.00437 |
affiliations | Garrick Orchard (National University of Singapore); Ajinkya Jayawant (Indian Institute of Technology Bombay); Gregory Kevin Cohen (University of Western Sydney); Nitish Thakor (National University of Singapore) |
title | Converting Static Image Datasets to Spiking Neuromorphic Datasets Using Saccades |
topic | Benchmarking; sensory processing; Computer Vision; Neuromorphic vision; Datasets |
url | http://journal.frontiersin.org/Journal/10.3389/fnins.2015.00437/full |