Summary: | Several attack models attempt to describe the behaviour of attackers in order to understand and combat them better. However, such models are to some degree incomplete. They may lack insight into minor variations of attacks that are observed in the real world but not described in the model, which may lead to similar attacks being classified as the same one. The appropriate solution would be to modify the attack model (to handle that particular use case) or to replace it entirely. However, doing so may be undesirable: the model may work well for most cases, and time and resource constraints may factor in as well. This paper investigates the use of descriptions of minor variations in attacks, as well as how and when it may (and may not) be appropriate to communicate those differences within existing attack models. We propose that such nuances can be appended as annotations to existing attack models. We investigate commonalities across a range of existing models and identify where and how annotations may be helpful. Used appropriately, annotations should enable analysts and researchers to express subtle but important variations in attacks that may not fit the model currently in use. The value of this paper is that we demonstrate how annotations may help analysts communicate and ask better questions so that unknown aspects of attacks can be identified faster, e.g. as a means of storing mental notes in a structured manner, especially when facing zero-day attacks where information is incomplete.