Inducing high energy-latency of large vision-language models with verbose images
Large vision-language models (VLMs) such as GPT-4 have achieved exceptional performance across various multi-modal tasks. However, the deployment of VLMs necessitates substantial energy consumption and computational resources. Once attackers maliciously induce high energy consumption and latency tim...
Main authors: Gao, K; Bai, Y; Gu, J; Xia, ST; Torr, P; Li, Z; Liu, W
Type: Conference item
Language: English
Published: OpenReview, 2024
Similar documents
- Energy-latency manipulation of multi-modal large language models via verbose samples
  by: Gao, K, et al.
  Published: (2024)
- The verbosity epidemic.
  by: Grais, R, et al.
  Published: (2008)
- Evaluation of Rust code verbosity, understandability and complexity
  by: Luca Ardito, et al.
  Published: (2021-02-01)
- Head Concepts Selection for Verbose Medical Queries Expansion
  by: Mohammed Maree, et al.
  Published: (2020-01-01)
- The Role of Inhibition in Age-Related Off-Topic Verbosity: Not Access but Deletion and Restraint Functions
  by: Shufei Yin, et al.
  Published: (2016-04-01)