John Greiner's Research
In general, I'm interested in using theory, particularly programming language
theory such as semantics and type theory, to design, analyze, and implement
practical programming languages.
Overview:
This page outlines, in chronological order, the projects I have worked on
and gives references to my papers.
Scheme and hygienic macro expansion
First I ported Scheme84 from its Franz Lisp implementation to
Common Lisp. Then I worked on improvements, primarily to the
macro expansion of Scheme88 and ISWYM, which were unreleased modifications
of Scheme84 and predecessors of Rice Scheme.
This work was under the supervision of Matthias Felleisen
(Rice University, 1988-89).
No published papers.
Using (co)induction and (co)inductive types in programming
This work reviews the theoretical aspects and explores the practical aspects
of using induction and coinduction, rather than recursion, in programming.
Coinduction admits data denoting infinite objects, such as the familiar
stream data type.
(Co)induction is more restricted than general recursion
(and similarly, (co)inductive types are more restricted than
general ML-style datatypes), since all programs must terminate. This
work explores what can nonetheless be programmed with (co)induction, and
some idioms used in such programming.
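The flavor of such programming can be seen in a small OCaml sketch of my own
(illustration only, not code from this work): streams as the standard
coinductive type, built solely through a coiteration combinator (unfold) and
consumed by structural recursion, so explicit general recursion is confined
to the combinators themselves.

  (* Streams as a coinductive type: an element now, plus a thunk for the rest. *)
  type 'a stream = { head : 'a; tail : unit -> 'a stream }

  (* Coiteration: a stream is determined by a seed and a step function
     producing one element and the next seed.  All corecursion is packaged
     here, so user code needs no explicit recursion. *)
  let rec unfold (step : 's -> 'a * 's) (seed : 's) : 'a stream =
    let x, seed' = step seed in
    { head = x; tail = (fun () -> unfold step seed') }

  (* The infinite stream 0, 1, 2, ... *)
  let nats : int stream = unfold (fun n -> (n, n + 1)) 0

  (* map is itself expressible by coiteration alone. *)
  let map (f : 'a -> 'b) (s : 'a stream) : 'b stream =
    unfold (fun s -> (f s.head, s.tail ())) s

  (* The inductive side: a terminating take, by structural recursion on n. *)
  let rec take n s =
    if n = 0 then [] else s.head :: take (n - 1) (s.tail ())

  let () =
    take 5 (map (fun x -> x * x) nats)
    |> List.iter (Printf.printf "%d ");
    print_newline ()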
This was under the guidance of Bob Harper
(CMU, 1991), and served as my Area Qualifier Major.
Published paper:
Weak polymorphism
I explored the implementation of SML/NJ's "weak types", which are used for
typing side-effecting expressions such as "ref" cells, continuations, and
exceptions. This was accomplished by
reverse-engineering the implementation in SML/NJ
and formalizing its key aspects.
In particular, I showed that a simplification of SML/NJ's weak type system
(most similar to that of Version 0.66) is sound.
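The kind of unsoundness such type systems must rule out shows up in a standard
two-line example, given here in OCaml (whose value restriction addresses the
same problem and surfaces as weak type variables like '_weak1); this is
illustration only, not code from the papers.

  (* If r were given the fully polymorphic type 'a list ref, the commented
     line would also type-check, letting a string list be read back where an
     int list is expected.  OCaml instead infers the non-generalized type
     '_weak1 list ref, which the first use pins to int. *)
  let () =
    let r = ref [] in      (* '_weak1 list ref, not 'a list ref *)
    r := [1];              (* the weak variable is now fixed to int *)
    (* r := ["oops"] *)    (* rejected: the cell already holds an int list *)
    assert (!r = [1])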
This was under the guidance of Bob Harper (CMU, 1992-93).
Published papers:
Parallel connected components algorithms
I looked at pragmatic aspects of several concurrent-read, concurrent-write
(CRCW) parallel algorithms for finding the
connected components of arbitrary graphs. This was the first published
empirical comparison of such algorithms.
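As a rough illustration (a sequential OCaml sketch of my own, not the
data-parallel code that was actually measured), the hooking and shortcutting
(pointer-jumping) idea shared by several of these algorithms looks something
like this: each edge pulls its endpoints toward the smaller label, and
shortcutting compresses label chains so that a parallel machine needs only a
few rounds.

  let connected_components n edges =
    let label = Array.init n (fun v -> v) in
    let changed = ref true in
    while !changed do
      changed := false;
      (* hook: each edge pulls both endpoints toward the smaller label *)
      List.iter
        (fun (u, v) ->
          let lu = label.(u) and lv = label.(v) in
          if lu < lv then (label.(v) <- lu; changed := true)
          else if lv < lu then (label.(u) <- lv; changed := true))
        edges;
      (* shortcut / pointer jumping: compress chains of labels *)
      Array.iteri (fun v _ -> label.(v) <- label.(label.(v))) label
    done;
    label

  let () =
    (* two components: {0, 1, 2} and {3, 4} *)
    connected_components 5 [ (0, 1); (1, 2); (3, 4) ]
    |> Array.iter (Printf.printf "%d ");
    print_newline ()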
This work was under the guidance of Guy Blelloch, with some help from
Marco Zagha (CMU, 1993-94), and included my Area Qualifier Minor.
Published papers:
- A Comparison of Data-Parallel Algorithms for Connected
Components, by John Greiner, CMU-CS-93-191, August 1993,
includes the source code of the Shiloach & Vishkin,
Awerbuch & Shiloach, and Random Mate algorithms that were
compared.
- A Comparison of Parallel Algorithms for Connected
Components, by John Greiner, SPAA 1994,
compares the Awerbuch & Shiloach and Random Mate algorithms
against a new Hybrid algorithm and has newer results than the
technical report. In short, the Hybrid algorithm is very simple, yet
is usually the fastest of those tested.
- Data-Parallel Connected Components Algorithms,
by John Greiner and Guy E. Blelloch,
Chapter 6 of High Performance Computing,
edited by Gary Sabot, 1995,
presents the same algorithms as examples of how irregular
graph problems can be solved in parallel.
Complexity models based on programming languages
I explored adding cost measures to operational
semantics to serve as the foundation for defining complexity in realistic
programming languages, particularly functional languages. I concentrated
on the theoretical aspects of parallel implementations of
call-by-value and call-by-speculation languages.
This thesis work was under the guidance of Guy Blelloch
(CMU, 1993-1997).
Some of the ideas of this work are as follows:
- The semantic definition of a programming language should come with a
specification of what costs are incurred by the language.
- Such definitions are especially needed for parallel languages, since the
costs are less obvious and less intuitive.
- Cost-augmented semantics can be used to prove complexity results, and
such proofs can be factored
by proving relations between the cost-augmented semantics of different
languages (a small interpreter sketch in this spirit follows this list).
- The expressiveness of languages can be compared based on their
complexities.
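To make the first two points above concrete, here is a small OCaml sketch of
my own (far simpler than the profiling semantics in the papers): an
interpreter for a tiny call-by-value language that returns, along with each
value, a work cost (total steps) and a depth cost (longest chain of dependent
steps), where independent subexpressions, such as the function and argument
of an application, are charged the maximum of their depths as if evaluated in
parallel.

  type exp =
    | Int of int
    | Var of string
    | Lam of string * exp
    | Add of exp * exp
    | App of exp * exp

  type value =
    | VInt of int
    | VClos of string * exp * (string * value) list

  (* eval env e = (value, work, depth): work adds across subcomputations;
     depth takes the max over independent subcomputations, plus one for the
     current step. *)
  let rec eval env = function
    | Int n -> (VInt n, 1, 1)
    | Var x -> (List.assoc x env, 1, 1)
    | Lam (x, body) -> (VClos (x, body, env), 1, 1)
    | Add (e1, e2) ->
        let v1, w1, d1 = eval env e1 in
        let v2, w2, d2 = eval env e2 in
        (match (v1, v2) with
         | VInt a, VInt b -> (VInt (a + b), w1 + w2 + 1, max d1 d2 + 1)
         | _ -> failwith "Add expects integers")
    | App (e1, e2) ->
        (* function and argument are independent, so their depths max;
           the body runs after both, so its depth adds *)
        let v1, w1, d1 = eval env e1 in
        let v2, w2, d2 = eval env e2 in
        (match v1 with
         | VClos (x, body, cenv) ->
             let v, w, d = eval ((x, v2) :: cenv) body in
             (v, w1 + w2 + w + 1, max d1 d2 + d + 1)
         | _ -> failwith "application of a non-function")

  let () =
    let prog = App (Lam ("x", Add (Var "x", Var "x")), Add (Int 1, Int 2)) in
    match eval [] prog with
    | VInt n, work, depth ->
        Printf.printf "value=%d work=%d depth=%d\n" n work depth
    | _ -> print_endline "non-integer result"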
Published papers:
- Operational Semantics Models of Complexity, by
John Greiner, is
my thesis proposal, and so outlines my original plan for this work.
It includes the additional goal of showing that
the complexity of a program in an arbitrary cost-augmented
semantics can be analyzed in a general, systematic framework, not overly
specialized to one kind of language. In particular, this can be done
automatically, although of course no automatic analysis can be complete,
since a complete one would decide the Halting Problem.
- A Parallel Complexity Model for Functional Languages,
by Guy Blelloch and John Greiner,
CMU-CS-94-196, October 1994, looks at using the lambda-calculus as
a simple model of parallelism.
- Parallelism in Sequential Functional Languages, by
Guy E. Blelloch and John Greiner,
in FPCA 1995, is another version of CMU-CS-94-196.
- A Provably Time-Efficient Parallel Implementation of Full
Speculation, by John Greiner and Guy E. Blelloch,
POPL 1996.
- A Provable Time and Space Efficient Implementation
of NESL,
by Guy E. Blelloch and John Greiner, ICFP 1996.
- Semantics-based parallel cost models and their use in
provably efficient implementations, by John Greiner,
CMU-CS-97-113 (dissertation).