June 28, 2002



Her Excellency, Governor Jeanne Shaheen
State House
Concord, NH 03301

Arthur P. Klemm, Jr., President of the Senate
State House, Room 302
Concord, NH 03301

Gene G. Chandler, Speaker of the House
State House, Room 308
Concord, NH 03301

Senator Edward M. Gordon, Chair
Senate Judiciary Committee
State House, Room 304
Concord, NH 03301

Hon. Henry P. Mock, Chair
House Judiciary Committee
LOB, Room 208
Concord, NH 03301

Re: Judicial Performance Evaluation Program

Dear Governor Shaheen, President Klemm, Speaker Chandler, Senator Gordon, and Representative Mock:

This is our second annual report on the revised judicial performance evaluation program instituted by New Hampshire Supreme Court rule for the entire judicial branch in March 2001. Judicial performance evaluation began in New Hampshire in the trial courts in 1987. During 2000 and early 2001, the then-existing judicial performance evaluation program was examined and revised. For the trial courts, uniform forms were developed for use by the public (Performance Evaluation Questionnaire), the judge being evaluated (Self-Evaluation Form), and the administrative judge conducting the evaluation (Evaluation Summary). The program was extended to include the supreme court and the administrative judges. For the supreme court, a different Performance Evaluation Questionnaire and Self-Evaluation Form were developed. A more detailed description of the enhanced judicial performance evaluation program is contained in our first annual report to you, dated June 29, 2001. This report covers the evaluations conducted under the program during 2001.

Supreme Court

In September 2001, the supreme court clerk’s office began distributing Performance Evaluation Questionnaires to a sampling of parties and attorneys who filed cases with the court in 2001. Questionnaires were mailed to the parties or attorneys in 10% of the cases filed. In addition, questionnaires were distributed on oral argument days to parties or attorneys arguing cases before the court. Approximately 150 questionnaires were distributed, and 49 were returned, for a response rate of 33%.

The Performance Evaluation Questionnaire and the Self-Evaluation Form for the supreme court are divided into the following three sections:

1. Performance and Judicial Management Skills – 5 questions

2. Temperament and Demeanor – 7 questions

3. Bias and Objectivity – 2 questions

Respondents, who are not permitted to identify themselves in their responses, are asked to evaluate the court’s performance on a scale of 1 through 5 (1 = excellent, 2 = very good, 3 = satisfactory, 4 = fair, and 5 = unsatisfactory).

Respondents gave the supreme court an overall score of 1.9, slightly on the "excellent" side of "very good." By section of the questionnaire the mean scores were as follows:

1. Performance and Judicial Management Skills 2.3

2. Temperament & Demeanor 1.8

3. Bias and Objectivity 1.6

The questionnaire also asked respondents to evaluate the performance of other court personnel. Once again, the respondents were asked to rate the performance of court personnel on a scale of 1 (excellent) to 5 (unsatisfactory). The overall mean score in this category was 1.7.

Each justice completed a Self-Evaluation Form, which the justices, as a group, used as a basis to discuss their expectations of one another and of the court, their feelings about their work, and their assessment of each justice’s contribution to the judicial process. As a group they analyzed each justice’s writing, reasoning, and communication skills, and the manner in which each justice interacted with parties and attorneys.

For all of the justices, the constraints of time presented the greatest concern. Each justice would like more time to read, research, reflect, and write. Although every justice feels responsible for, and is committed to, sharing administrative duties, fulfilling those duties makes the judicial task more difficult.

The justices are especially pleased with, and proud of, the collegiality with which they approach their tasks and the professionalism and civility that they are able to maintain even when they disagree on a particular legal issue. Each justice discussed his or her individual strengths and the areas that could be improved. They acknowledged their responsibility to continually assess and adjust their performance to meet goals, guidelines, and expectations.

Finally, as part of its judicial performance evaluation process, the supreme court in June 2001 adopted performance standards that included time standards relating to the processing of cases. The time standards are benchmarks for the court’s performance at different stages of the appellate process, such as screening, briefing, and decision-making. In setting each time standard, the court determined the average length of time within which it would be reasonable to expect the court to complete that stage of the appellate process.

In early 2002, the clerk’s office reviewed the data on all cases filed in 2001 to determine whether the time standards had been met. Cases filed in earlier years were not included because stage-by-stage statistics had not been kept for them. The length of time taken to complete each stage of the appellate process governed by the time standards was calculated, with one exception: the standard relating to the distribution of dissenting opinions, because the court did not collect statistics on the distribution of dissents in 2001. The following chart shows that in 2001 the court met, and in each case bettered, every time standard for which data had been collected:


Stage                                     Time Standard                       Average for 2001 Cases

Screening                                 90 days                             71 days
Filing of appellant’s brief               60 days after record filed          55 days
Filing of appellee’s brief                50 days after appellant’s brief     36 days
Oral argument                             180 days after appellant’s brief    63 days
Issuance of decision                      180 days after submission           41 days
Motions for reconsideration/rehearing     60 days                             40 days

The time standards adopted by the court are averages for the court’s performance at different stages of the appellate process; for many reasons beyond the court’s control, the time that it takes to complete a stage in an individual case may be greater or less than the standard. While the standards do not require that every case be processed within the identified time periods, they serve as goals for both the court and its staff to process all cases as promptly and efficiently as possible.

Superior Court

During 2001, the Chief Justice of the Superior Court, Walter Murphy, completed nine judicial performance evaluations. Because the performance evaluation program requires that each trial judge be evaluated at least once every three years, this number represents roughly one-third of the superior court bench. At the time the evaluations were completed in 2001, the court consisted of a chief justice and twenty-five associate justices, with three judicial positions vacant.

At the direction of the chief justice, the clerk of each court to which an evaluated judge was assigned is furnished sixty Performance Evaluation Questionnaires per judge, for random distribution among members of the criminal and civil bar, parties, witnesses, jurors, and court staff. Each questionnaire is accompanied by a postage-prepaid envelope addressed to the Superior Court Center and marked "Confidential." For the nine judges being evaluated, a total of 540 questionnaires were distributed; 266 were returned, for a response rate of 49%.

Once the deadline for return of the completed questionnaires passes, the Center forwards the individual questionnaires to the Administrative Office of the Courts (AOC) for scanning and compilation. A Self-Evaluation Form is furnished to the judge and is likewise returned to the Center; it is used to compare how the judge views his or her own performance with how that performance is perceived by others. Upon receipt of the compiled results from the AOC, the chief justice completes the Evaluation Summary, which contains an expurgated version (to preserve confidentiality) of the comments included in the Performance Evaluation Questionnaires. He then schedules a personal interview with each evaluated judge at which the results are discussed. The interview considers, along with the formal questionnaires, complaints and other input received by the chief justice and known grievances filed with the Judicial Conduct Committee or Commission. Depending upon the outcome of the interview, the chief justice makes recommendations for remedial or other appropriate action.

The Performance Evaluation Questionnaire, the Self-Evaluation Form, and the Evaluation Summary for the trial courts identify seven areas considered in the evaluations:

1. Performance (including ability to identify and analyze issues, judgment, and application of the law) – 11 questions

2. Temperament and Demeanor – 8 questions

3. Judicial Management Skills – 7 questions

4. Legal Knowledge – 3 questions

5. Attentiveness – 2 questions

6. Bias and Objectivity – 3 questions

7. Degree of Preparedness – 2 questions

The same one-to-five scale as utilized by the supreme court is used for each question, with 1 = excellent, 2 = very good, 3 = satisfactory, 4 = fair, and 5 = unsatisfactory.

For the nine judges evaluated in 2001, the mean overall score was 1.9, slightly on the "excellent" side of "very good." Three judges scored better than the mean, ranging from 1.3 to 1.8; two judges were at the mean; and four scored worse: three at 2.0 and one at 2.4. By category, the mean scores for all nine judges were as follows:

1. Performance 1.9

2. Temperament & Demeanor 1.9

3. Judicial Management Skills 2.0

4. Legal Knowledge 1.8

5. Attentiveness 1.8

6. Bias & Objectivity 1.7

7. Degree of Preparedness 1.8

Because one of the judges evaluated fell somewhat below "very good," the chief justice has recommended certain remedial measures, including periodic informal mentoring and an additional evaluation in 2003 to monitor the judge’s performance, with the expectation that the judge will meet or exceed the "very good" level.

In light of the results of the evaluations, no corrective action was seen as necessary with respect to the remaining judges, although the chief justice has discussed some judges’ requests for formal courses in advanced evidence, judicial writing, and stress management. The Superior Court Education Committee is considering these subjects for future court programs, which depend in large part on the availability of adequate funding.

The chief justice is currently engaged in the evaluation of nine additional judges for calendar year 2002. Questionnaires have been processed by the Center and the AOC, and interviews with the evaluated judges are currently being conducted. It is expected that the process will be complete by July 2002.

District Court

During 2001, the Administrative Judge of the District Court, Edwin Kelly, completed performance evaluations of twenty-three judges. Since one judge was evaluated a second time, a total of twenty-four performance evaluations were conducted in the district court in 2001. The district court currently consists of sixty-nine judges; therefore, to complete a performance evaluation of each judge at least once every three years, at least twenty-three judges must be evaluated annually.

The evaluation process in the district court is the same as that described above for the superior court. A total of 1,289 Performance Evaluation Questionnaires were distributed for the twenty-three judges, an average of fifty-six per judge. Of these, 746 were returned, for a response rate of 58%.

For the twenty-four performance evaluations conducted in 2001, the mean overall score was 2.0, a rating of "very good." Thirteen evaluations were better than the mean, ranging from 1.5 to 1.9; three were at the mean; and eight were worse, ranging from 2.1 to 3.1. By category, the mean scores for all twenty-three judges were as follows:

1. Performance 2.1

2. Temperament & Demeanor 1.8

3. Judicial Management Skills 2.1

4. Legal Knowledge 2.0

5. Attentiveness 1.8

6. Bias & Objectivity 1.8

7. Degree of Preparedness 2.1

In three cases, the judges’ overall evaluations indicated a need for corrective action in one or more areas. In one of those cases, the judge was assigned temporarily to courts in which multiple judges are present in order to assure monitoring. That judge also met with the administrative judge on a number of occasions, and the evaluation was repeated at the end of several months. The second evaluation showed considerable improvement over the first. In fact, the second evaluation was better than the mean for all the evaluations. In the other two cases, the judges were given specific instructions on behavioral and practice changes, and their evaluations will be repeated during the 2002 cycle.

For 2002, the administrative judge of the district court plans to evaluate an additional twenty-three judges plus the two reevaluations mentioned above.

Probate Court

During 2001, the Administrative Judge of the Probate Court, John Maher, completed three judicial performance evaluations. The probate court consists of ten judges, including the administrative judge. Therefore, to complete a performance evaluation of each judge at least once every three years, three evaluations must be accomplished annually.

The evaluation process is the same in the probate court as that described above for the superior court. For the three judges evaluated in 2001, a total of 105 Performance Evaluation Questionnaires were distributed. Seventy were returned for a response rate of 67%.

The mean overall score for the three judges evaluated was 1.7, moderately on the "excellent" side of "very good." All three judges scored 1.7. By category, the mean scores for all three judges were as follows:

1. Performance 1.8

2. Temperament & Demeanor 1.6

3. Judicial Management Skills 2.0

4. Legal Knowledge 1.8

5. Attentiveness 1.6

6. Bias & Objectivity 1.4

7. Degree of Preparedness 1.9

Because the performance evaluations of all three judges completed in 2001 exceeded "very good," no corrective action was deemed necessary. The administrative judge and the evaluated judges did, however, discuss the need for more administrative days for research and writing.

For 2002, the administrative judge of the probate court will conduct three more judicial performance evaluations. Two of these have already begun.

Administrative Judges

Rule 56 requires that a panel consisting of the chief justice of the supreme court and two associate justices evaluate the administrative judges of the superior, district, and probate courts at least once every three years. No such evaluations were conducted in 2001, as the supreme court concentrated its efforts on implementing the new judicial performance evaluation system and conducting the first judicial performance evaluation of itself. The supreme court intends to begin formal performance evaluations of the administrative judges this year and to complete them in 2003. The court notes, however, that it appoints administrative judges to three-year terms and has frequent contact with them. While the formal performance evaluations of the administrative judges will provide useful information, in essence the performance of the administrative judges is constantly being evaluated by the supreme court.

Changes to the Evaluation Forms

Based on feedback from the administrative judges and from respondents, some minor changes have been made to the Performance Evaluation Questionnaires and the Evaluation Summary. These changes will be effective for evaluations occurring after the date of this letter.

In the Performance Evaluation Questionnaires for both the supreme court and the trial courts, the section soliciting narrative comments on performance has been moved to immediately follow the performance questions. Language has been inserted before the space for narrative comments informing respondents that the evaluated judges will receive the comments either verbatim or in summary form; respondents are also informed that, to preserve confidentiality, anything identifying the evaluator will be removed. In addition, the section soliciting background information on the evaluator has been moved to the end of the form. Language has been inserted informing respondents that providing the requested background information is voluntary, that the information will be kept strictly confidential, and that the judge being evaluated will not be supplied with any information that could identify the evaluator.

Finally, the administrative judges requested a change in the Evaluation Summary which they give to each evaluated judge. Previously, the form had a place for the administrative judge to mark "Satisfactory" or "Unsatisfactory" next to each evaluation question. The administrative judges asked that a third category be added entitled "Needs Improvement."

The revised forms are attached as an appendix to this report.

Conclusion

As described in this report, the judicial branch fully implemented its revised judicial performance evaluation program in 2001. The Performance Evaluation Questionnaires provide input from participants in the judicial process about the performance of individual judges and, in the case of the supreme court, about the court itself. The Self-Evaluation Forms give judges the opportunity to reflect upon their work and performance. The Evaluation Summaries provide administrative judges the vehicle to review with the individual judges the results of the evaluation.

The supreme court is pleased that the performance of all courts was "very good" or better. The court is equally pleased that the evaluations give the administrative judges a tool to improve the performance of judges in areas where improvement is necessary. The revisions in the judicial branch’s now fifteen-year-old judicial performance evaluation program have made the program much stronger and help guarantee to the citizens of New Hampshire a highly competent judiciary.

                                Respectfully submitted,
                                New Hampshire Supreme Court



                                By: David A. Brock,
                                Chief Justice