Research plays a role at every stage of the policy process.
In the traditional conception, research was understood as something that happened at the beginning and end of the process. Research could identify "problems" and the causal factors that drove them. Research could evaluate whatever was implemented to determine its impact on the targeted problem. It wasn't required in between.
But that never made sense and was never how things worked.
Formulating policy means researching what's been tried, how options have been received, and how much difference they've made. Recommendations need to offer assessments of feasibility, costs, risks, and possible benefits.
Decision-making itself can be structured as a collaborative research problem.
Implementation requires continuous "formative evaluation": assessing sites, staffing levels, programming choices and delivery methods, marketing and outreach efforts, and alignment with budgetary, scheduling, and other constraints.
Policies never stand alone. No matter the "problem", there are already policies involved with it - perhaps mitigating it, perhaps contributing to it. The monitoring and evaluation of one policy is part of the needs assessment and environmental scan for others.
And, "problems" don't stand alone either.
Supporting Hamilton's children meant working with schools, hospitals, child care operators, special needs services (like speech-language pathology), the City's Recreation, Housing, Social Assistance, and Public Health divisions, the children's aid societies, and more. Many of these partners lacked their own research capacity, so my team and I played that role.
Most of the policy research produced by the team I was part of was never published. But this is a simple example of population monitoring data (by numbered ward) about Hamilton's children. Monitoring high-level outcomes can help identify emerging problem areas - and areas of improvement!
In monitoring, one draws comparisons, looks for trends, and tries to explain the differences and changes one sees. With impact evaluations, one starts with a possible explanation and tests whether it makes a difference.
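To make the contrast concrete, here is a minimal sketch in Python. The wards, waves, vulnerability rates, and program rollout are all made up for illustration, and the simple difference-in-differences comparison stands in for whatever design a real evaluation would use:

```python
# Monitoring: descriptive. Compare wards across waves and flag changes.
# (All figures below are hypothetical, not actual EDI results.)
wave_2012 = {"Ward 1": 0.31, "Ward 2": 0.24, "Ward 3": 0.28}
wave_2015 = {"Ward 1": 0.27, "Ward 2": 0.25, "Ward 3": 0.22}

for ward in wave_2012:
    change = wave_2015[ward] - wave_2012[ward]
    trend = "improving" if change < 0 else "worsening"
    print(f"{ward}: {wave_2012[ward]:.0%} -> {wave_2015[ward]:.0%} ({trend})")

# Impact evaluation: start from a candidate explanation -- say, that a
# program rolled out in Wards 1 and 3 drove the improvement -- and test
# it, here with a naive difference-in-differences comparison.
treated = ["Ward 1", "Ward 3"]
control = ["Ward 2"]

def mean_change(wards):
    return sum(wave_2015[w] - wave_2012[w] for w in wards) / len(wards)

did = mean_change(treated) - mean_change(control)
print(f"Estimated program effect (diff-in-diff): {did:+.1%}")
```

The monitoring loop only describes what changed; the evaluation step is the one that commits to an explanation and puts it at risk.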
This example was particularly important, as it offered the first quantitative estimates of the developmental impacts of Ontario's multi-billion-dollar Full Day Kindergarten program. The estimates from this early sample were confirmed in subsequent, whole-population waves of the Early Development Instrument.
Developmental measures are one way to assess how children are faring. They certainly speak to government and employer concerns about the "quality" of the future labour force (and taxpayers). But it's far from clear that they tell us how well children are thriving. They offer little to pedagogical concerns. They offer only deficit-based answers to equity concerns. They rarely even answer all the developmental questions citizens have, most notably around cultural identity (a.k.a. 'religion' and 'morality'). Perhaps most importantly, they tell us nothing about how things look to children themselves.
This is where dissensus comes in. We could simply replace developmental measures with something else, reflecting different priorities. Alternatively, we could displace them, so that they become just one of a series of measures.
Policy-makers are familiar with looking at multiple indicators, which don't necessarily all tell the same story. As much as decision-makers are prone to want "one-handed" advice, if our measures are to address multiple priorities - however conflicting - then multiple indicators seem the obvious way to go. The question then becomes: what priorities are out there, and which indicators will meet them?