Summary:

OBJECTIVES/GOALS: An academic medical library evaluated an EHR data abstraction service by assessing uptake and publication metrics, including use by department, the purpose of data abstraction requests, and publication counts.

METHODS/STUDY POPULATION: The evaluation included 167 requests for EHR data processed by the institution's clinical research data management unit (CRDMU) and recorded in an intake form hosted on REDCap. These requests originated from various departments. The intake forms collected investigator and study information, as well as request completion dates. Information in the intake forms was matched with publications and meeting abstracts indexed in a database of faculty publications. Investigators whose EHR data requests could not be readily matched to publications were contacted to verify the status of their studies and any associated publications.

RESULTS/ANTICIPATED RESULTS: The evaluation included 167 data requests submitted to the CRDMU between 2016 and 2018. These requests were categorized into the following use cases: retrospective studies (n=93); patient recruitment (n=50); and 'other', i.e., education, training, or process improvement; feasibility assessments; and machine learning (n=14). By the end of the evaluation period, an average of four years after the data requests were submitted to the CRDMU, 60 of the 167 EHR datasets (35.9%) had led to publications as articles or meeting abstracts. Conversely, 64.5% of the datasets requested for retrospective studies, 56% of those requested for recruitment, and 79.1% of those requested for other uses did not lead to publications.

DISCUSSION/SIGNIFICANCE: These findings offer evidence that bibliometrics alone provide limited insight into the value of services and data utilized for secondary research. Data ecosystem stakeholders are encouraged to consider, and to develop, scalable, reproducible, and more holistic assessments of the impact of their services.