Prior to creating a rubric, we recommend that you consider the following questions:
What does a DEIB rubric assess?
The sample rubric below envisions the evaluation of DEIB as encompassing three main areas: knowledge and understanding (section 1), track record of activities to date (section 2), and plans for contributing at Berkeley (section 3). Committees may wish to adjust this categorization to reflect their particular goals or disciplinary needs, whether by altering the categories, reallocating points among them, or adding new categories.
- We recommend that you consult with OFEW if you wish to add categories, to ensure that the assessment follows best practices and falls within permissible legal parameters.
How can a DEIB rubric be scored?
Search committees have found it very useful to assign numerical scores to each section of their DEIB rubric. This helps in identifying and analyzing specific areas of agreement or disagreement as the committee discusses each candidate. The sample template below suggests assigning an equal point value to each of the three sections (a score from 1 to 5 for each). Some committees may, however, decide that one section or another should be weighted more heavily. Or, committees may decide that a different scoring system for each section more accurately reflects their departmental or disciplinary needs.
- If a scoring range becomes too wide or a scoring system too complicated, it is difficult to achieve reliability in assessment. The system recommended for this rubric has worked well in past searches.
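The suggested scheme (three sections, each scored 1 to 5, optionally weighted) can be sketched in code. This is an illustrative example only; the section names, the weighting mechanism, and the function name are assumptions for demonstration, not part of any official rubric.

```python
# Illustrative sketch of the suggested scoring scheme: three sections,
# each scored 1-5, combined into a total. Section names and the optional
# weighting are assumptions for illustration.

SECTIONS = ["knowledge", "track_record", "plans"]

def total_score(scores, weights=None):
    """Combine per-section scores (each 1-5) into a candidate total.

    `weights` defaults to equal weighting across sections; a committee
    that weights one section more heavily could pass e.g. {"plans": 2.0}.
    """
    weights = weights or {}
    for section, score in scores.items():
        if not 1 <= score <= 5:
            raise ValueError(f"{section} score must be between 1 and 5")
    return sum(scores[s] * weights.get(s, 1.0) for s in SECTIONS)

# Equal weighting: a candidate scoring 4, 3, and 5 totals 12 of 15 points.
print(total_score({"knowledge": 4, "track_record": 3, "plans": 5}))  # 12.0
```

Keeping the combination rule this simple is deliberate: as noted above, a more complicated scoring system makes reliable assessment harder.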
How should I interpret the examples in each section of the sample rubric?
The sample rubric assists committees in scoring each of the three areas by providing examples of what is commonly seen in applications to faculty searches at Berkeley.
- These examples are offered as illustrative suggestions; they are neither exhaustive nor ironclad. They can be modified to fit the academic and disciplinary backgrounds of applicants in a particular search. Faculty members in individual units should use their disciplinary expertise to determine which examples are most appropriate for their particular department or search.
How can search committees make sure they are using a rubric properly?
To best make use of a DEIB evaluation rubric, we strongly suggest conducting a calibration exercise in advance of reviewing the entire candidate pool.
- The purpose of the calibration exercise is to ensure the tool is applied equitably, consistently, and reliably across all applicants.
How can search committees calibrate their scoring?
We recommend the following calibration exercise, which past search committees have found useful:
- Discuss, as a committee, the importance and evaluation of contributions to DEIB as one aspect of excellence across research, teaching, and service. As a reminder, candidates do not need to belong to a particular group or demographic, or to hold particular viewpoints, to be successful in this regard. DEIB efforts described by candidates from international institutions may look different from DEIB work conducted in the U.S. but can be equally compelling.
- Create a rubric for use in the particular search, including categories, examples, scores, use of a standalone statement or integrated statements, etc. Please consult with OFEW for advice if major changes to the rubric are contemplated. OFEW will be able to help you avoid common pitfalls.
- Discuss ahead of time the kinds of evidence that could motivate low, medium, or high scores.
- Select a random sample of 8-10 statements (or dossiers) from the applicant pool, redacted for candidate name.
- Apply the rubric to the statements (or dossiers), with each committee member scoring the statements separately.
- Analyze the scores assigned to each statement (or dossier) across all categories and by all committee members.
- Discuss interpretations and discrepancies between reviewer scores.
- Recalibrate the scoring/assessment system as needed.
- Apply the agreed-upon rubric to the entire applicant pool.
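The "analyze" and "discuss discrepancies" steps above can be sketched as a small script that flags calibration statements on which reviewers disagree widely. The reviewer names, sample scores, and disagreement threshold below are all hypothetical; committees would substitute their own data and decide what spread warrants discussion.

```python
# Illustrative sketch of analyzing calibration scores: flag statements
# whose reviewer totals span more than a chosen threshold, so the
# committee can discuss them before recalibrating. All names, scores,
# and the threshold are assumptions for illustration.

from statistics import mean

# scores[statement_id][reviewer] = that reviewer's total rubric score
scores = {
    "statement_1": {"reviewer_a": 12, "reviewer_b": 13, "reviewer_c": 11},
    "statement_2": {"reviewer_a": 6,  "reviewer_b": 13, "reviewer_c": 8},
}

def flag_discrepancies(scores, max_spread=3):
    """Return (statement, spread, mean score) for statements whose
    reviewer scores differ by more than `max_spread` points."""
    flagged = []
    for statement, by_reviewer in scores.items():
        vals = list(by_reviewer.values())
        spread = max(vals) - min(vals)
        if spread > max_spread:
            flagged.append((statement, spread, round(mean(vals), 1)))
    return flagged

for statement, spread, avg in flag_discrepancies(scores):
    print(f"{statement}: spread {spread}, mean {avg} -- discuss before recalibrating")
```

In this hypothetical sample, only the second statement would be flagged; the committee would then discuss why its scores diverged and recalibrate as needed.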
After you have finished the calibration and scoring processes, it is very useful for the search committee to share with the rest of the faculty what was learned during this process of assessing DEIB contributions. OFEW also welcomes hearing from search committees about how the calibration and assessment process went.
We welcome your feedback on the structure and use of these suggestions. Please send comments to ofew@berkeley.edu.