Lossless Encoding of Time-Aggregated Neuromorphic Vision Sensor Data Based on Point-Cloud Compression

Bibliographic Details
Main Authors: Jayasingam Adhuran, Nabeel Khan, Maria G. Martini
Format: Article
Language: English
Published: MDPI AG, 2024-02-01
Series: Sensors
Online Access: https://www.mdpi.com/1424-8220/24/5/1382
Collection: DOAJ (Directory of Open Access Journals)
Description: Neuromorphic Vision Sensors (NVSs) are emerging sensors that acquire visual information asynchronously, when changes occur in the scene. Their advantages over synchronous capture (frame-based video) include low power consumption, high dynamic range, extremely high temporal resolution, and lower data rates. Although this acquisition strategy already yields much lower data rates than conventional video, NVS data can be compressed further. For this purpose, we recently proposed Time Aggregation-based Lossless Video Encoding for Neuromorphic Vision Sensor Data (TALVEN), which consists of time aggregation of NVS events into pixel-based event histograms, arrangement of the data in a specific format, and lossless compression inspired by video encoding. In this paper, we again leverage time aggregation but, rather than performing encoding inspired by frame-based video coding, we encode an appropriate representation of the time-aggregated data via point-cloud compression (similar to a previous work of ours in which time aggregation was not used). The proposed strategy, Time-Aggregated Lossless Encoding of Events based on Point-Cloud Compression (TALEN-PCC), outperforms the original TALVEN encoding strategy on the content in the considered dataset. The gain in compression ratio is highest for low-event-rate, low-complexity scenes, whereas the improvement is minimal for high-complexity, high-event-rate scenes. In experiments on outdoor and indoor spike event data, TALEN-PCC achieves higher compression gains for time aggregation intervals longer than 5 ms. However, its compression gains are lower than those of state-of-the-art approaches for time aggregation intervals shorter than 5 ms.
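The pipeline outlined in the abstract (aggregating asynchronous spike events into per-pixel event-count histograms over a fixed interval, then expressing the non-zero bins as points with a count attribute for a point-cloud codec such as MPEG G-PCC) can be sketched as follows. This is an illustrative sketch under assumed conventions, not the authors' actual TALEN-PCC implementation: the (x, y, t, polarity) event layout, the `aggregate_events` and `histogram_to_point_cloud` names, and the microsecond timestamps are all assumptions made for the example.

```python
import numpy as np

def aggregate_events(events, width, height, interval_us, t_start):
    """Build a per-pixel event-count histogram for one aggregation interval.

    `events` is an (N, 4) array of (x, y, t, polarity) spike events; this
    layout is an assumption for illustration, not the dataset's real format.
    """
    t_end = t_start + interval_us
    mask = (events[:, 2] >= t_start) & (events[:, 2] < t_end)
    hist = np.zeros((height, width), dtype=np.uint16)
    # Accumulate repeated (y, x) hits correctly with an unbuffered add.
    np.add.at(hist, (events[mask, 1], events[mask, 0]), 1)
    return hist

def histogram_to_point_cloud(hist, frame_index):
    """Represent non-zero histogram bins as 3D points (x, y, interval index),
    with the event count as a per-point attribute. Such a geometry-plus-
    attribute layout is the kind of input a point-cloud codec consumes."""
    ys, xs = np.nonzero(hist)
    counts = hist[ys, xs]
    points = np.column_stack([xs, ys, np.full_like(xs, frame_index)])
    return points, counts

# Toy usage: three events on a 4x4 sensor, one 5 ms (5000 us) interval.
events = np.array([[0, 0, 100, 1],
                   [0, 0, 200, 0],
                   [3, 2, 300, 1]])
hist = aggregate_events(events, width=4, height=4, interval_us=5000, t_start=0)
pts, cnts = histogram_to_point_cloud(hist, frame_index=0)
```

In this toy run, pixel (0, 0) fires twice and pixel (3, 2) once, so the histogram has two non-zero bins and the point cloud contains two points with counts 2 and 1.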
ISSN: 1424-8220
DOI: 10.3390/s24051382
Citation: Sensors, vol. 24, no. 5, art. 1382, 2024-02-01
Author affiliations:
- Jayasingam Adhuran: Faculty of Engineering, Computing, and the Environment, Kingston University London, Penrhyn Rd., Kingston upon Thames KT1 2EE, UK
- Nabeel Khan: Department of Computer Science, University of Chester, Parkgate Road, Chester CH1 4BJ, UK
- Maria G. Martini: Faculty of Engineering, Computing, and the Environment, Kingston University London, Penrhyn Rd., Kingston upon Thames KT1 2EE, UK
Subjects: neuromorphic vision sensor (NVS); neuromorphic spike events; point-cloud compression; silicon retinas; spike encoding