180 Building an evaluation platform to capture the impact of Frontiers CTSI activities
OBJECTIVES/GOALS: In 2021, Frontiers CTSI revamped its evaluation infrastructure to be comprehensive, efficient, and transparent in demonstrating outputs and outcomes. We sought to build a platform to standardize measures across program areas, integrate continuous improvement processes into operations, and reduce the data entry burden for investigators.

METHODS/STUDY POPULATION: To identify useful metrics, we facilitated each Core's creation of a logic model, in which they identified all planned activities, expected outputs, and anticipated outcomes for the 5-year cycle and beyond. We identified appropriate metrics based on the logic models and aligned metrics across programs against extant administrative data. We then built a data collection and evaluation platform within REDCap to capture user requests, staff completion of requests, and, ultimately, request outcomes. We built a similar system to track events, attendance, and outcomes. Aligning with other hubs, we also transitioned to a membership model. Membership serves as the backbone of the evaluation platform and allows us to tailor communication, capture demographic information, and reduce the data entry burden for members.

RESULTS/ANTICIPATED RESULTS: The Frontiers Evaluation Platform consists of nine REDCap projects with distinct functions and uses throughout the Institute. Point-of-service collection forms include Consultation Request and Event Tracking. Annual forms include a Study Outcome, Impact, and Member Assessment Survey. Set-timepoint collections include K & T application, Mock Study Section, and Pilot grant application submission, review, and outcomes. Flight Tracker is used to collect scientific outcomes and is integrated with the platform. Using SQL, the membership module has been integrated into all forms to check and collect membership before service access and to provide relevant member data to navigators. All relevant data are then synced into a dashboard for program leadership and management to track outputs and outcomes in real time.

DISCUSSION/SIGNIFICANCE: Since the launch of the evaluation platform in Fall 2022, Frontiers has increased its workflow efficiency and streamlined continuous improvement communication. The platform can serve as a template for other hubs to build efficient processes to create comprehensive and transparent evaluation plans.
Main Authors: | Maggie Padek Kalman, Shellie Ellis, Mary Penne Mays, Sam Pepper, Dinesh Pal Mudaranthakam (all University of Kansas Medical Center) |
---|---|
Format: | Article |
Language: | English |
Published: | Cambridge University Press, 2024-04-01 |
Series: | Journal of Clinical and Translational Science |
ISSN: | 2059-8661 |
DOI: | 10.1017/cts.2024.171 |
Online Access: | https://www.cambridge.org/core/product/identifier/S2059866124001717/type/journal_article |
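The abstract describes a membership module that is checked before service access and consultation-request data that is synced into a leadership dashboard. The abstract states this is done with SQL inside the REDCap-based platform; as an illustration only, the minimal Python sketch below shows the same idea through the standard REDCap export API, joining a membership project to a consultation-request project by a shared email field. The host URL, API tokens, project structure, and field names are all assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): group consultation requests by current
# member, using two hypothetical REDCap projects exported via the standard API.
import requests

REDCAP_API_URL = "https://redcap.example.edu/api/"  # hypothetical host


def export_records(token: str, fields: list[str]) -> list[dict]:
    """Export flat JSON records from one REDCap project via the export API."""
    payload = {
        "token": token,          # project-specific API token
        "content": "record",
        "format": "json",
        "type": "flat",
        "fields": ",".join(fields),  # comma-delimited list of field names
    }
    resp = requests.post(REDCAP_API_URL, data=payload)
    resp.raise_for_status()
    return resp.json()


def requests_by_member(member_token: str, request_token: str) -> dict[str, list[dict]]:
    """Join requests to members on email (assumed shared key) for a dashboard feed."""
    members = export_records(member_token, ["record_id", "email", "department"])
    consults = export_records(request_token, ["record_id", "requester_email", "service", "status"])
    member_emails = {m["email"] for m in members if m.get("email")}
    grouped: dict[str, list[dict]] = {}
    for req in consults:
        email = req.get("requester_email", "")
        if email in member_emails:  # only count requests from current members
            grouped.setdefault(email, []).append(req)
    return grouped
```

In the platform described above, this membership check happens at the point of data entry (via SQL within the forms) rather than in an external script; the sketch only illustrates the linkage between membership and service-request data that feeds the real-time dashboard.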