In The News
Attendees Dive into Evaluation Challenges on Day Two of the Accordia Infectious Diseases Summit
April 16, 2010, Dar es Salaam, Tanzania: Today marked the second day of the 2010 Accordia Infectious Diseases Summit in Dar es Salaam, Tanzania. We are continuing to report from the Summit as key leaders and experts from the private sector, government, NGOs, foundations, and academia address issues related to long-term healthcare capacity building in Africa.
At the Accordia Infectious Diseases Summit today, participants gathered with two topics on their minds – “Return on Investment” and how they were going to return home. News of the volcanic eruption in Iceland caused a major stir last night, and people spent the dinner hour comparing flights scheduled to or through Amsterdam or London, wondering whether they would be significantly delayed or cancelled. But by this morning, as estimates of how long European airspace would remain closed continued to be extended, a sense of “what will be will be” seemed to have taken over, and people turned back to the task of figuring out how to more effectively evaluate the impact of health capacity building programs in Africa.
Following a brief review of yesterday’s presentations and discussions, some examples of ways that evaluations could be conducted were presented as options, not prescriptions. Participants were challenged to think about why they wanted to conduct evaluations, with the recognition that the reason could affect what information was gathered and how it was analyzed and presented.
In a discussion of attribution and contribution, one idea presented was that some evaluations are actually more like a legal trial – pulling evidence together, weighing it, and arriving at a conclusion of impact, or lack thereof, that is beyond a reasonable doubt. Scientific methodologies should be involved, but ultimately the judgment must rest on the evidence. The key is to gather the right evidence, and to gather it before, during, and after the program is conducted.
Another idea about evaluation that resonated was that evaluating a program can be thought of as assembling a patchwork quilt – taking multiple pieces of information and stitching them together in a way that forms a comprehensible pattern.