Model merging and safety alignment: one bad model spoils the bunch


Bibliographic Details
Main Authors: Hammoud, HAAK, Michieli, U, Pizzati, F, Torr, P, Bibi, A, Ghanem, B, Ozay, M
Format: Conference item
Language: English
Published: Association for Computational Linguistics 2024
Description: Merging Large Language Models (LLMs) is a cost-effective technique for combining multiple expert LLMs into a single versatile model, retaining the expertise of the original ones. However, current approaches often overlook the importance of safety alignment during merging, leading to highly misaligned models. This work investigates the effects of model merging on alignment. We evaluate several popular model merging techniques, demonstrating that existing methods not only transfer domain expertise but also propagate misalignment. We propose a simple two-step approach to address this problem: (i) generating synthetic safety and domain-specific data, and (ii) incorporating these generated data into the optimization process of existing data-aware model merging techniques. This allows us to treat alignment as a skill that can be maximized in the resulting merged LLM. Our experiments illustrate the effectiveness of integrating alignment-related data during merging, resulting in models that excel in both domain expertise and alignment.
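The two-step recipe described above can be illustrated with a toy sketch of data-aware merging: interpolate two expert models' parameters, then choose the mixing weight that maximizes a score computed on both domain data and safety data. Everything below (the parameter dicts, the stand-in score function, the grid search) is an illustrative assumption for exposition, not the authors' actual implementation.

```python
# Toy sketch of data-aware model merging (hypothetical, not the paper's method).
# A "model" is a dict of parameter vectors; merging is a convex combination
# theta = w * theta_a + (1 - w) * theta_b. The mixing weight w is selected to
# maximize a score that includes BOTH domain data and safety/alignment data,
# so alignment is treated as a skill to optimize for, not an afterthought.

def merge(theta_a, theta_b, w):
    """Linearly interpolate two parameter dicts, with weight w on theta_a."""
    return {k: [w * a + (1 - w) * b for a, b in zip(theta_a[k], theta_b[k])]
            for k in theta_a}

def score(theta, target):
    """Stand-in evaluation: negative squared distance to a target vector."""
    return -sum((p - t) ** 2 for p, t in zip(theta["layer"], target))

def merge_with_alignment(theta_a, theta_b, domain_data, safety_data, grid=11):
    """Grid-search the mixing weight that maximizes domain + safety score."""
    best_w, best_s = 0.0, float("-inf")
    for i in range(grid):
        w = i / (grid - 1)
        merged = merge(theta_a, theta_b, w)
        s = score(merged, domain_data) + score(merged, safety_data)
        if s > best_s:
            best_w, best_s = w, s
    return best_w, merge(theta_a, theta_b, best_w)

# Example: expert A sits at [1, 1], expert B at [0, 0]; the domain target
# favors A, and the safety target pulls slightly toward B.
expert_a = {"layer": [1.0, 1.0]}
expert_b = {"layer": [0.0, 0.0]}
w, merged = merge_with_alignment(expert_a, expert_b, [1.0, 1.0], [0.8, 0.8])
```

Including `safety_data` in the search objective is the key point: a weight chosen on domain data alone could land on a merge that inherits one parent's misalignment.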
Record ID: oxford-uuid:b1ba750d-1ebe-4950-97d7-9cd116c1dea6
Institution: University of Oxford