A pretrial program, with its risk assessment, diversion and supervision components, should be continually assessed to ensure it is meeting its goals of protecting public safety and targeting justice system resources efficiently. Whether you have a pretrial program in place or are starting from scratch, a system assessment provides an opportunity to determine where you are and how to develop a plan for moving forward. The assessment should include a review of qualitative and quantitative data, as well as a discussion of the goals for pretrial services that incorporates diverse viewpoints from across agencies (courts, sheriff departments, etc.) and within agencies (from executives to frontline staff).
Below are steps recommended by the Crime and Justice Institute to provide the type of knowledge base that will help jurisdictions know if pretrial programs are producing their intended outcomes. Having a clear plan in place for identifying and using data fosters data-driven decision-making and, ultimately, greater accountability, efficiency and effectiveness within the justice system.
Step 1: Clearly Define Program Outcomes
Consider the agency's mission, vision statements, national standards and the purpose of the program. Clearly define intended outcomes. Outcomes describe the intended results or consequences that will occur from carrying out a program or activity. Outcomes should be defined using the SMART model (specific, measurable, achievable, realistic and time-bound) so the program has specific targets to focus on. Discuss program goals with key stakeholders to ensure commitment to agreed-upon outcomes.
Step 2: Develop a Program-Level Logic Model
Logic models provide a graphic means of describing what is intended to happen, including the resources to utilize, the activities to perform, and the outputs and outcomes that will be achieved. Logic models aid in the clarification of pretrial service components and the theories behind their effectiveness. They also guide evaluation. (Visit the BJA Center for Performance Measurement and Program Evaluation for more information about logic models.) In addition to program-level logic models, more detailed logic models can be used on a narrower scale, such as for a particular component of the pretrial program.
Step 3: Determine What Indicators Will Need to Be Measured
Use the logic model as the frame from which to select indicators to measure, then dissect the various elements to pinpoint exactly what needs to be known in order to determine what is working or not working. Prioritize these indicators based on what you need to know first and what you have the resources to collect. During the prioritization process, it is helpful to consider factors such as consistency with the research literature, timeliness of data availability, ease of reporting and the level of interest among stakeholders. Consider the utility of each indicator and the message that emphasizing particular indicators will send, keeping in mind that what gets measured is what gets done. Over time, phase in indicators that gradually build proficiency and capacity.
Step 4: Decide How to Measure the Indicators
Brainstorm mechanisms that can capture the indicators selected and develop a strategy for how the data will be collected, by whom and how often. Common mechanisms include management information systems and databases, spreadsheets, supervisory reviews, policy audits, peer reviews, surveys and formal evaluations. Study the mechanisms to ensure they are reliable and valid (i.e., they will measure the right things). If there are too many indicators on which to realistically collect data, another round of prioritization may be needed; this may also be an opportunity to identify where deeper levels of quality assurance are needed.
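As a minimal sketch of how a collection strategy might be documented, the example below records each indicator's source mechanism, collection frequency and responsible party in a simple structure. The indicator names, sources, frequencies and owners are hypothetical placeholders, not prescribed measures.

```python
# Illustrative sketch only: documenting who collects each indicator, from where, and how often.
# All names and values below are hypothetical examples, not recommended measures.
from dataclasses import dataclass

@dataclass
class IndicatorSpec:
    name: str       # what is being measured
    source: str     # mechanism that captures it (database, survey, audit, etc.)
    frequency: str  # how often it is collected
    owner: str      # who is responsible for collecting it

collection_plan = [
    IndicatorSpec("Failure-to-appear rate", "case management system", "monthly", "pretrial services analyst"),
    IndicatorSpec("New arrest during pretrial period", "jail booking database", "monthly", "research unit"),
    IndicatorSpec("Risk assessment completion rate", "supervisory case review", "quarterly", "unit supervisor"),
]

for spec in collection_plan:
    print(f"{spec.name}: from {spec.source}, collected {spec.frequency}, owned by {spec.owner}")
```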
Step 5: Document a Plan That Pulls it All Together
The plan should describe how these indicators will be brought together, articulating why, how and when they will be collected and reported, as well as to whom they will be reported. The plan can be shared with stakeholders and agency employees and may need to be updated on an annual (or more frequent) basis as the agency progresses.
Step 6: Communicate the Plan Repeatedly
Communicate early and often about the purpose of the plan and how the data will be used for feedback, improvement and to celebrate successes. Multiple forms of communication are often helpful, including letters or emails to stakeholders and employees, blogs, meetings, annual reports and press releases.
Step 7: Collect the Data
Everyone involved in the data collection process should have a clear understanding of the tasks each needs to complete. Training may need to be provided upfront, and regular checks should be done to ensure data is being collected consistently and accurately. Be mindful of accuracy; data that is trustworthy is much more likely to be acted upon.
Step 8: Analyze and Report the Data
Put the data into a format that can be easily understood and used. It helps to compare present data to baseline data and other benchmarks; benchmarks should initially be set at realistic levels to ensure they are attainable, then gradually raised as proficiency is established. Test the reporting format to ensure the data is accurate and easily understood, and revise as necessary. Be sure to disseminate the data quickly so it can be put to use.
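As a minimal sketch of the baseline-and-benchmark comparison described above, the example below computes a single indicator for two periods and checks it against a target. The counts, the failure-to-appear indicator and the benchmark value are hypothetical examples chosen only to illustrate the calculation.

```python
# Illustrative sketch only: comparing a current indicator value to a baseline and a benchmark.
# The counts and the benchmark are hypothetical placeholders.
def rate(events: int, total: int) -> float:
    """Return a percentage rate, guarding against division by zero."""
    return 100.0 * events / total if total else 0.0

baseline_fta = rate(events=42, total=300)   # e.g., failure-to-appear rate before the program
current_fta = rate(events=25, total=280)    # same measure for the current reporting period
benchmark = 10.0                            # target set at a realistic, attainable level

print(f"Baseline FTA rate: {baseline_fta:.1f}%")
print(f"Current FTA rate:  {current_fta:.1f}%")
print(f"Benchmark met: {current_fta <= benchmark}")
```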
Step 9: Put the Data to Use
What works? What doesn’t work? What are the lessons learned? How can the data be used to improve? Celebrate successes and create plans to improve where necessary. Data is most useful when it is applied to improvement; create opportunities to discuss the data and how to use it.
Step 10: Repeat the Process
Repeat these steps at regular intervals until the outcome(s) have been mastered. Once mastery of the outcome(s) is achieved, move on to the next desired outcome(s) and repeat the steps.
Outcome and Performance Measures and Mission-Critical Data
Click here to view recommended outcome and performance measures and mission-critical data, and view examples of pretrial program assessments.