Methodology

We rank computer science departments using a single kind of data: counts of publications at top-tier venues in several subareas of computer science. The motivation for, and limitations of, these rankings are discussed in a PL Enthusiast blog post.

Categories and venues

A few "top-tier" venues in each area were identified through informal polling. Publication data for all venues were obtained from the DBLP database.

Here's a complete listing of the categories considered in the rankings, and the venues counted as top-tier in each of them.

Professors

Aside from DBLP data, the app uses a roster of professors at American universities constructed by Papoutsaki et al. We modified the dataset in the following ways.

The final roster is available here.

Scores for professors

The user selects a time window within the last 15 years and uses a set of sliders to assign a weight (a number between 0 and 1) to each area. The app then scores each professor by awarding w points for every paper that appears in a top-tier venue of an area with weight w. In addition, we identify a set of relevant professors, intuitively the professors who are prospective advisors in the areas of interest. To qualify as relevant, a faculty member must have published three or more papers, within the selected period, in top-tier venues of areas to which the user has assigned a nonzero weight.
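The scoring and relevance computation can be summarized by the following sketch. The data representation (one record per top-tier paper) and the function name are illustrative assumptions; only the weighting rule and the three-paper threshold come from the description above.

```python
# Minimal sketch of the scoring rule described above; the data layout is assumed.
from collections import defaultdict

RELEVANCE_THRESHOLD = 3  # papers in nonzero-weight areas needed to be "relevant"

def score_professors(papers, weights, start_year, end_year):
    """papers: iterable of (professor, area, year) tuples, one per top-tier paper.
    weights: dict mapping area -> weight in [0, 1], chosen via the sliders."""
    scores = defaultdict(float)
    weighted_paper_counts = defaultdict(int)

    for prof, area, year in papers:
        if not (start_year <= year <= end_year):
            continue  # outside the selected time window
        w = weights.get(area, 0.0)
        scores[prof] += w  # w points per top-tier paper in an area with weight w
        if w > 0:
            weighted_paper_counts[prof] += 1

    relevant = {p for p, n in weighted_paper_counts.items() if n >= RELEVANCE_THRESHOLD}
    return dict(scores), relevant
```

For example, with weights {"PL": 1.0, "Systems": 0.5}, a professor with two PLDI papers and one SOSP paper in the window would score 2.5 points and, having three papers in nonzero-weight areas, would count as relevant.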

Ranks for departments

Departments are now ranked according to three different metrics; see the PL Enthusiast blog post for details on each.

Limitations

The limitations of the methodology are described in some detail in the accompanying blog post. The implementation has quite a few limitations as well. Here is a partial list:

If you spot a mistake that you would like corrected, please leave a comment on this public Google document.