June 30, 2003

His Excellency, Governor Craig Benson
State House
Concord, NH 03301

Thomas R. Eaton, President of the Senate
State House, Room 302
Concord, NH 03301

Gene G. Chandler, Speaker of the House
State House, Room 308
Concord, NH 03301

Senator Andrew R. Peterson, Chair
Senate Judiciary Committee
State House, Room 106
Concord, NH 03301

Hon. Henry P. Mock, Chair
House Judiciary Committee
LOB, Room 208
Concord, NH 03301

Re: Judicial Performance Evaluation Program

Dear Governor Benson, President Eaton, Speaker Chandler, Senator Peterson, and Representative Mock:

        This is our third annual report on the revised judicial performance evaluation program instituted by New Hampshire Supreme Court rule for the entire judicial branch in March 2001. Judicial performance evaluation began in New Hampshire in the trial courts in 1987. During 2000 and early 2001, the then-existing judicial performance evaluation program was examined and revised. For the trial courts, uniform forms were developed for use by the public (Performance Evaluation Questionnaire), the judge being evaluated (Self-Evaluation Form), and the administrative judge conducting the evaluation (Evaluation Summary). The program was extended to include the supreme court and the administrative judges. For the supreme court, a different Performance Evaluation Questionnaire and Self-Evaluation Form were developed. A more detailed description of the enhanced judicial performance evaluation program is contained in our first annual report to you, dated June 29, 2001.

        Under the enhanced judicial performance evaluation program, each trial court judge is to be evaluated at least once every three years. Last year's report covered our activities under this program for 2001, the first year it was operational. This report covers 2002, the middle year of the first three-year cycle.

 

SUPREME COURT

        During 2002, the supreme court clerk’s office distributed 152 Performance Evaluation Questionnaires to a sampling of parties and attorneys with cases pending at the court. Questionnaires were mailed to parties or attorneys who filed cases with the court and were distributed on oral argument days to parties or attorneys arguing cases before the court. Of the 152 questionnaires distributed, 50 were returned, for a response rate of 33%.

        The Performance Evaluation Questionnaire and the Self-Evaluation Form for the supreme court are divided into the following three sections:

  1. Performance and Judicial Management Skills – 5 questions
  2. Temperament and Demeanor – 7 questions
  3. Bias and Objectivity – 2 questions

Respondents, who are not permitted to identify themselves in their responses, are asked to evaluate the court’s performance on a scale of 1 through 5 (1 = excellent, 2 = very good, 3 = satisfactory, 4 = fair, and 5 = unsatisfactory).

        Respondents gave the supreme court an overall score of 2.0, or "very good." By section of the questionnaire the mean scores were as follows:

1. Performance and Judicial Management Skills   2.3
2. Temperament & Demeanor   2.0
3. Bias and Objectivity   1.7

        The questionnaire also asked respondents to evaluate the performance of other court personnel. Once again, the respondents were asked to rate the performance of court personnel on a scale of 1 (excellent) to 5 (unsatisfactory). The overall mean score in this category was 1.6, between "excellent" and "very good."

        The justices of the supreme court are committed to the performance evaluation process. Each justice completed a Self-Evaluation Form that the justices, as a group, then used as a basis to discuss their evaluations of one another. Each justice talked about his or her individual strengths and how the justice could improve. As a group, the justices analyzed each justice's writing, reasoning, and communication skills, as well as the contributions each justice made to the judicial process.

        The justices also evaluated their performance as a court. The justices have made a concerted effort to eliminate the large backlog of cases that had been pending with the court. They are proud that their efforts have been successful, and they believe their ability to work collegially was critical to this effort. Although the justices do not always agree on every legal issue, they try always to disagree with professionalism and civility.

        As part of its judicial performance evaluation process, the supreme court evaluated its performance against the performance standards adopted in June 2001. The performance standards include time standards relating to the processing of cases. The time standards are benchmarks for the court’s performance at different stages of the appellate process, such as screening, briefing, and decision-making. In setting each time standard, the court decided upon the average length of time within which it would be reasonable to expect the court to complete that stage of the appellate process. The time that it takes to complete a stage in any particular case may be, for many reasons not within the court's control, greater or less than the standard. While the standards do not require that every case be processed within the time periods identified, they serve as goals for both the court and its staff to process all cases as promptly and efficiently as possible.

        The clerk’s office followed a different method this year for evaluating the court’s performance against the time standards. Last year, the court limited the sample of cases used to compile the data to 2001 cases. For this report, the clerk’s office compiled data on all of the cases disposed of by the court in 2002. A total of 902 cases were disposed of in 2002. Of this number, 141 cases were filed prior to 2001, and 761 were filed in 2001 and 2002. The decision was made to use all cases disposed of during the year so as to give a more accurate picture of the court’s performance throughout the appellate process. If the data had been limited to cases filed in 2002, there would have been little data on the court’s performance in the later stages of the appellate process. As in 2001, the court did not maintain statistics on the distribution of written dissents.

        Many of the cases in the sample that were filed before 2001 had been backlogged for significant periods. Because these cases were included, the court’s performance did not fall within the adopted performance standards in two of the five areas. Because of the court’s efforts to eliminate the backlog, the delay in the processing of cases has decreased significantly since 2001. The court’s handling of the cases filed during 2001 and 2002 falls within the performance standards in all five categories. The first chart below reflects the court’s performance in all 902 cases disposed of during 2002; the second chart reflects the court’s performance in the 761 cases filed in 2001 and 2002.

ALL CASES DISPOSED OF IN 2002:

Stage                                      Time Standard                                  Average for All Cases
Screening                                  90 days                                        77 days
Filing of appellant’s brief                60 days after record filed                     86 days
Filing of appellee’s brief                 50 days after appellant’s brief                52 days
Oral argument                              180 days after appellant’s brief               90 days
Opinion/Decision                           180 days after oral argument or submission     76 days
Motions for reconsideration/rehearing      60 days                                        38 days

 

CASES FILED IN 2001 AND 2002 DISPOSED OF IN 2002:

Stage                                      Time Standard                                  Average for All Cases Disposed of in 2002
Screening                                  90 days                                        63 days
Filing of appellant’s brief                60 days after record filed                     60 days
Filing of appellee’s brief                 50 days after appellant’s brief                45 days
Oral argument                              180 days after appellant’s brief               99 days
Opinion/Decision                           180 days after oral argument or submission     65 days
Motions for reconsideration/rehearing      60 days                                        38 days

 

SUPERIOR COURT

        A total of nine judicial performance evaluations were performed by the Chief Justice of the Superior Court, Walter L. Murphy, in calendar year 2002 in accordance with Supreme Court Rule 56 and RSA 490:32, making a total of eighteen for 2001 and 2002. There are currently twenty-eight associate justices, the most recent having been appointed in December 2002. The judicial performance evaluations in 2002 were conducted in the same manner and using the same methodology as in 2001.

        Each judge being evaluated is furnished a Self-Evaluation Form, which is returned to the chief justice for comparison with the results of the evaluation by others. Each clerk of court where the judge being evaluated customarily presides randomly distributed sixty Performance Evaluation Questionnaires per judge to lawyers, litigants, staff, court officers, witnesses, and jurors, and provided additional questionnaires to other members of the public who inquired at their offices. The names of the judges being evaluated were publicly posted in the clerks’ offices, as was a notice about the availability of the questionnaires. All recipients of questionnaires were furnished a postage pre-paid envelope pre-addressed to the Superior Court Center and marked "Confidential." For the nine judges evaluated, a total of 314 questionnaires were returned, 18% more than last year's return.

        Once the deadline for returning completed questionnaires has passed, the evaluations are forwarded to the Administrative Office of the Courts for scanning and compilation. When the results are furnished to the Superior Court Center, the chief justice schedules an individual appointment with each judge at which the results are discussed and an expurgated version of the comments (to preserve the respondents’ confidentiality) is shared with the judge. The interview also covers non-questionnaire information relating to the judge received by the chief justice, including letters of complaint, unsolicited letters of commendation, and information about grievances filed with judicial conduct authorities. On the basis of all of this information, the chief justice may, if necessary, take appropriate remedial action.

        The Performance Evaluation Questionnaire, the Self-Evaluation Form, and the Evaluation Summary for the trial courts identify seven areas considered in the evaluations:

  1. Performance (including ability to identify and analyze issues, judgment, and application of the law) – 11 questions
  2. Temperament and Demeanor – 8 questions
  3. Judicial Management Skills – 7 questions
  4. Legal Knowledge – 3 questions
  5. Attentiveness – 2 questions
  6. Bias and Objectivity – 3 questions
  7. Degree of Preparedness – 2 questions

The scale utilized is the same as that used in 2001 and that used by the supreme court, that is:

1 = Excellent
2 = Very Good
3 = Satisfactory
4 = Fair
5 = Unsatisfactory

        As in 2001, the overall mean for the judges evaluated was 1.9, with four judges scoring above the mean, four scoring below, and one at the mean. A mean overall score of 1.9 puts these judges, like their counterparts evaluated last year, at the "very good" level. By category, the mean scores for all nine judges, the same as in 2001, were as follows:

1. Performance 1.9
2. Temperament & Demeanor 1.9
3. Judicial Management Skills 2.0
4. Legal Knowledge 1.8
5. Attentiveness 1.8
6. Bias & Objectivity 1.7
7. Degree of Preparedness 1.8

 

        One judge's evaluation fell somewhat below the norm, although it remained closer to "very good" than merely "satisfactory." The concerns it reflected related most significantly to the judge's temperament and attentiveness, problem areas that the judge acknowledges. Accordingly, the chief justice provided for continued monitoring of the judge's behavior, along with suggestions for modifying the judge's conduct and increasing the judge's sensitivity to the manner in which that conduct is perceived by others.

        The chief justice is currently conducting the evaluations scheduled for 2003. Nine evaluations will be performed this year, including, as reported to you last year, a reevaluation of a judge evaluated in 2001 for whom the additional evaluation was part of certain remedial measures taken by the chief justice. All of the associate justices will have been evaluated under this procedure between 2001 and 2003, with the exception of the judge appointed in December 2002, whose initial evaluation will be scheduled in 2004.

 

DISTRICT COURT

        During 2002, the Administrative Judge of the District Court, Edwin W. Kelly, completed performance evaluations of twenty judges. The district court currently consists of seventy judges.

        The evaluation process in the district court is the same as that described above for the superior court. A total of 1,262 Performance Evaluation Questionnaires were distributed for the twenty judges, an average of sixty-three per judge. Of these, 708 were returned, for a response rate of 56%.

        The mean overall score for the judges evaluated in 2002 is 1.7, a rating of "very good." This rating is also an improvement on last year's 2.0. Fourteen of the evaluations were better than the mean, ranging from 1.3 to 1.8; two were at the mean; and four were below it, ranging from 2.2 to 2.7. By category, the mean scores for all twenty judges were as follows:

 

1. Performance   1.8
2. Temperament & Demeanor   1.7
3. Judicial Management Skills   1.8
4. Legal Knowledge   1.7
5. Attentiveness   1.6
6. Bias & Objectivity   1.6
7. Degree of Preparedness   1.9

        The twenty judges evaluated in 2002 include two who were evaluated in 2001 and for whom reevaluations were scheduled for 2002 as part of a monitoring program. In both cases, the reevaluations showed improvement, and each judge agreed to the administrative judge's suggestions for change.

        For 2003, the administrative judge of the district court plans to evaluate an additional twenty judges, which will complete the cycle such that all district court judges who have been on the job for three years will have been evaluated between 2001 and 2003.

 

PROBATE COURT

        During 2002, the Administrative Judge of the Probate Courts, John R. Maher, completed four judicial performance evaluations. The probate court consists of ten judges, one for each county.

        The evaluation process has worked well. Names and addresses of active practitioners and agencies are provided to the administrative judge, and mailings are generated directly from the office of the administrative judge. For the four judges evaluated in 2002, a total of 156 Performance Evaluation Questionnaires were mailed. Eighty-five were returned, for a response rate of 54%.

        The overall score for the four judges evaluated was 1.8, a rating of "very good." The actual overall scores were 1.5, 1.6, 1.7, and 1.9. By category, the mean scores for all four judges were as follows:

1. Performance   1.7
2. Temperament & Demeanor   1.6
3. Judicial Management Skills   1.8
4. Legal Knowledge   1.6
5. Attentiveness   1.5
6. Bias & Objectivity   1.5
7. Degree of Preparedness   1.9

        Again, as noted in 2001, the judges need more administrative days for writing and research. Cases are becoming more complex and more frequently contested. At present, the weighted caseload provides only twelve administrative days in a calendar year.

 

ADMINISTRATIVE JUDGES

        Rule 56 requires that a panel consisting of the chief justice of the supreme court and two associate justices evaluate the administrative judges of the superior, district, and probate courts at least once every three years. The supreme court will conduct formal performance evaluations of the administrative judges this year. The court notes, however, that it appoints administrative judges for three-year terms and has frequent contact with them. While the formal performance evaluation of the administrative judges will provide useful information, in essence the performance of the administrative judges is constantly being evaluated by the supreme court.

 

CHANGES IN PROGRAM

        Supreme Court Rule 56(III) was amended to decrease the frequency with which performance evaluation questionnaires are distributed to evaluate the supreme court from annually to every three years. Many attorneys and parties who appear before the court do so on a regular basis. Some of these persons have received questionnaires every year since the judicial performance evaluation program was instituted. The court believes that if questionnaires are distributed every three years, persons receiving the questionnaires are more likely to complete the questionnaire and return it to the court. The amendment also makes the frequency of performance evaluations consistent with the other courts. The supreme court will continue to perform other aspects of the evaluation process on an annual basis, including an analysis of the court’s performance in relation to the judicial performance time standards.

        Based on feedback from the administrative judges and from respondents, changes have been made to the Performance Evaluation Questionnaires and the Evaluation Summary in the trial courts. These changes were effective for evaluations occurring in 2003, which will be reported on next year.

        The greatest change in the trial court questionnaires is that the scale has been reversed, such that excellent = 5; very good = 4; satisfactory = 3; fair = 2; and unsatisfactory = 1. This change was made to put the scale in accord with the common understanding that the higher the score, the better the rating. Thus, a 1.9 in this report, covering 2002, will be equivalent to a 4.1 in next year's report, covering 2003, since each new score equals six minus the corresponding old score.

        In addition, other changes reported last year took effect in 2003. In the Performance Evaluation Questionnaires for both the supreme court and the trial courts, the section soliciting narrative comments on performance has been moved to immediately follow the performance questions. Language has been inserted before the space for narrative comments informing respondents that the evaluated judges will receive the comments either verbatim or in summary form; respondents are also informed, however, that to preserve confidentiality, anything identifying the evaluator will be removed. In addition, the section soliciting background information on the evaluator has been moved to the end of the form. Language has been inserted informing respondents that providing the requested background information is voluntary, that it will be kept strictly confidential, and that the judge being evaluated will not be supplied with any information which could identify the evaluator.

        Finally, the administrative judges requested a change in the Evaluation Summary which they give to each evaluated judge. Previously, the form had a place for the administrative judge to mark "Satisfactory" or "Unsatisfactory" next to each evaluation question. The administrative judges asked that a third category be added entitled "Needs Improvement."

        The revised forms, which were first used for the evaluations conducted this year, are attached as an appendix to this report.

 

 

CONCLUSION

        The supreme court continues to be pleased with the operation of its revised judicial performance evaluation program instituted in 2001. Two-thirds of the first round of evaluations are now complete, and overall the judges have been evaluated at better than the "very good" level. Where an evaluation departs markedly from that standard, or where other information indicating performance issues comes to the attention of an administrative judge, this program allows heightened scrutiny of, and support for, the judge to improve performance to the high level rightfully expected by the public of New Hampshire's judiciary.

                                                                            Respectfully submitted,
                                                                            New Hampshire Supreme Court

 

                                                                            By: David A. Brock,
                                                                            Chief Justice