Model Selection in Summary Evaluation
A difficulty in the design of automated text summarization algorithms lies in objective evaluation. Viewing summarization as a tradeoff between length and information content, we introduce a technique based on a hierarchy of classifiers to rank, through model selection, different summarization methods. This summary evaluation technique allows for a broader comparison of summarization methods than traditional techniques of summary evaluation. We present an empirical study of two simple, albeit widely used, summarization methods that demonstrates the different uses of this automated, task-based evaluation system and confirms the results obtained with human-based evaluation methods over smaller corpora.
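The memo itself contains no code, but the core idea of task-based evaluation can be illustrated with a short sketch: summarize a labeled corpus with each candidate method, train a classifier on the resulting summaries, and rank the methods by held-out classification accuracy. This is a simplification of the paper's approach, which uses a hierarchy of classifiers and explicit model selection over the length/information tradeoff; all names below (`lead_summary`, `rank_summarizers`) are hypothetical.

```python
# Illustrative sketch of task-based summary evaluation, NOT the memo's
# exact algorithm. Assumes a labeled corpus (docs, labels) is available.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

def lead_summary(doc, n_sentences=3):
    """Baseline summarizer: keep the first n sentences (naive split)."""
    return " ".join(doc.split(". ")[:n_sentences])

def rank_summarizers(docs, labels, summarizers):
    """Score each summarizer by how well a classifier trained on its
    summaries predicts the document labels; higher accuracy means the
    summaries retained more task-relevant information."""
    scores = {}
    for name, summarize in summarizers.items():
        summaries = [summarize(d) for d in docs]
        clf = make_pipeline(TfidfVectorizer(),
                            LogisticRegression(max_iter=1000))
        scores[name] = np.mean(cross_val_score(clf, summaries, labels, cv=5))
    # Best-performing summarizer first.
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

Under this scheme, a method whose short summaries preserve enough information to categorize documents accurately ranks higher; varying the summary length (e.g. `n_sentences`) recovers the length-versus-information-content tradeoff the abstract describes.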
Main Authors: | Perez-Breva, Luis; Yoshimi, Osamu |
---|---|
Language: | en_US |
Published: | 2004 |
Subjects: | AI |
Online Access: | http://hdl.handle.net/1721.1/7181 |
collection | MIT |
id | mit-1721.1/7181 |
institution | Massachusetts Institute of Technology |
record_format | dspace |
report_numbers | AIM-2002-023; CBCL-222 |
date_issued | 2002-12-01 |
date_accessioned | 2004-10-20T20:48:55Z |
formats | application/postscript (1739841 bytes); application/pdf (1972183 bytes) |
title | Model Selection in Summary Evaluation |
topic | AI |
url | http://hdl.handle.net/1721.1/7181 |