Wednesday, June 24, 2015
Prevention research can have a huge impact on population health, but how do we evaluate that impact and translate the research into products for public health practitioners? We have been tackling that question at the Prevention Research Centers (PRC) Program at the Centers for Disease Control and Prevention (CDC). I'm Erin Lebow-Skelley, and I work on the Evaluation and Translation Team, which evaluates the impact of the PRCs. I want to share our approach with you.
The PRC Program directs a national network of 26 academic research centers, each based at a school of public health or at a medical school with a preventive medicine residency program (see figure below). The centers are committed to conducting prevention research and are leaders in translating research results into policy and public health practice. All PRCs share a common goal of addressing the behaviors and environmental factors that affect chronic diseases (e.g., cancer, heart disease, and diabetes), injury, infectious disease, mental health, and global health. Each center conducts at least one core research project; translates and disseminates research results; provides training, technical assistance, and evaluation services to its community partners; and conducts projects funded by other sources (CDC, HHS, and others). As a result, the PRC network conducts hundreds of projects each year.
The Evaluation and Translation Team is tasked with demonstrating the impact of this heterogeneous group of research centers. We have spent the last two years developing the evaluation plan for the current 2014-2019 PRC funding cycle, engaging stakeholders throughout the process. We started by developing the evaluation purpose, questions, and indicators, and we now have a complete, piloted data collection system and qualitative interview guides.
We plan to collect quantitative data annually from each PRC reflecting its inputs (e.g., faculty and staff), activities (e.g., technical assistance, research activities), outputs (e.g., research and practice tools, peer-reviewed publications), and impacts (e.g., number of people reached), using a web-based data collection system. Having one cohesive system lets us link center activities to outputs and impacts (e.g., showing which partners were involved in X project, which contributed to Y impact), giving us a comprehensive understanding of the elements that contribute to center and network impact.
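To make that linkage concrete, here is a minimal sketch in Python of how such a system might relate centers, projects, partners, outputs, and impacts. The entity and field names are illustrative assumptions on our part, not the PRC system's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch: one way a web-based collection system might link
# center activities to outputs and impacts. Names are illustrative only.

@dataclass
class Partner:
    name: str

@dataclass
class Output:
    description: str                 # e.g., "peer-reviewed publication"

@dataclass
class Impact:
    description: str                 # e.g., "number of people reached"
    people_reached: int = 0

@dataclass
class Project:
    title: str
    partners: List[Partner] = field(default_factory=list)
    outputs: List[Output] = field(default_factory=list)
    impacts: List[Impact] = field(default_factory=list)

@dataclass
class Center:
    name: str
    projects: List[Project] = field(default_factory=list)

    def total_reach(self) -> int:
        """Roll project-level impacts up to the center level."""
        return sum(i.people_reached for p in self.projects for i in p.impacts)
```

Because every output and impact hangs off a specific project, and every project off a specific center, a simple roll-up like total_reach() can aggregate impacts at the center or network level while still tracing each one back to the partners and activities that produced it.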
Hot Tip: Always start with program logic (after engaging your stakeholders!). No matter how complex the program, articulating the overarching program logic will help guide the development of your evaluation indicators and provide a comprehensive picture of how the program is supposed to work.
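To illustrate, here is a minimal, hypothetical logic model written as plain data; the entries echo the examples above but are not the PRC Program's actual model. Writing the logic down this explicitly makes it straightforward to pair each stage with candidate indicators.

```python
# A hypothetical logic model as plain data; entries are illustrative only.
logic_model = {
    "inputs":     ["faculty and staff", "CDC funding", "community partners"],
    "activities": ["core research projects", "technical assistance", "training"],
    "outputs":    ["peer-reviewed publications", "research and practice tools"],
    "impacts":    ["people reached", "policy and practice changes"],
}

# Derive a first-draft indicator list by pairing each stage with a count.
indicators = {stage: [f"number of {item}" for item in items]
              for stage, items in logic_model.items()}

print(indicators["outputs"])
# ['number of peer-reviewed publications', 'number of research and practice tools']
```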
Hot Tip: Consider giving end users an electronic means of systematically providing feedback within the information system itself, covering data-entry problems, subject-matter questions, and suggestions for improvement.
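Such a feature can be as simple as a categorized, timestamped record attached to each submission. The sketch below is a hypothetical Python illustration; the categories and field names are assumptions, not the PRC system's design.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical feedback record; categories and fields are assumptions.
FEEDBACK_CATEGORIES = {"data_entry_problem", "subject_matter_question", "suggestion"}

@dataclass
class Feedback:
    user_id: str
    category: str
    message: str
    submitted_at: datetime

def submit_feedback(user_id: str, category: str, message: str) -> Feedback:
    """Validate the category and create a timestamped feedback record."""
    if category not in FEEDBACK_CATEGORIES:
        raise ValueError(f"unknown category: {category!r}")
    return Feedback(user_id, category, message, datetime.now(timezone.utc))
```

Tagging each item with a category lets data-entry problems route to the system administrators while subject-matter questions route to the evaluation team, so feedback actually gets acted on rather than piling up in one inbox.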
The American Evaluation Association is celebrating Translational Research Evaluation (TRE) TIG week. All posts this week are contributed by members of the TRE Topical Interest Group. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.