Aditya A. Shrotri
Department of Computer Science
3060 Duncan Hall
6100 Main St., MS 132,
Houston TX 77005-1892
E-mail: Aditya (d.o.t) Aniruddh (d.o.t) Shrotri (a.t) rice (d.o.t) edu
Alternate 1: as128 (a.t) rice (d.o.t) edu
Alternate 2: aditya (d.o.t) a (d.o.t) shrotri (a.t) gmail (d.o.t) com
I am a fifth-year Ph.D. student in the Department of Computer Science at Rice University, advised by Prof. Moshe Y. Vardi. I am interested in designing provably efficient algorithms, using tools from Formal Methods, for problems arising in Artificial Intelligence, with an emphasis on good empirical performance.
Research in AI has traditionally favored 'quick and dirty' techniques that work reasonably well in practice over techniques with rigorous proofs of correctness, accuracy, and scalability. This was largely due to a lack of applications where formal guarantees are indispensable: the price of making a mistake in speech recognition, for example, is usually not too high. However, the booming success of Machine Learning has given rise to myriad endeavors where an error could mean the difference between life and death. For instance, automated tools for helping doctors diagnose diseases from symptoms, and for creating reports from medical scans, have already been deployed in hospitals across the world. Autonomous vehicles are poised to change the way we commute in the next decade. In all such cases, policy-makers as well as end consumers can rest assured only if the underlying AI engine has been shown to be theoretically sound and mathematically precise. Wearing two hats as a theoretician and a computer engineer, I hope to achieve the holy grail of industrial-scale performance without compromising on safety.
In my work on approximate model counting [Slides], the goal is to design highly scalable approximation algorithms that do not compromise on accuracy. For example, in my work on counting solutions to Boolean formulas in Disjunctive Normal Form (DNF), I designed a hashing-based randomized approximation algorithm with PAC guarantees (a strong notion of accuracy) and linear running time, which is the gold standard from a theoretical perspective. What set us apart from the then state of the art was robust empirical performance across different benchmarks, ensuring wider applicability than other approaches. The abstract problem of DNF counting has direct applications in the real world, such as showing that an electrical grid is resilient to power-line failures. This work has resulted in two publications, including an invited journal paper.
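To give a flavor of randomized approximate DNF counting, here is a minimal sketch of the classic Karp-Luby Monte Carlo estimator, a textbook predecessor of this line of work. This is not the hashing-based algorithm from my papers; the clause encoding (a dict of fixed literals per clause) is an illustrative choice.

```python
import random

def karp_luby_count(clauses, n_vars, trials=100_000, seed=0):
    """Estimate the number of satisfying assignments of a DNF formula.

    Each clause is a dict {var: bool} of the literals it fixes, over
    variables 0..n_vars-1. Classic Karp-Luby Monte Carlo estimator,
    shown only to illustrate the flavor of randomized DNF counting.
    """
    rng = random.Random(seed)
    # Clause C_i alone has exactly 2^(n - |C_i|) satisfying assignments.
    weights = [2 ** (n_vars - len(c)) for c in clauses]
    total = sum(weights)
    hits = 0
    for _ in range(trials):
        # Pick a clause proportionally to its solution count, then a
        # uniform satisfying assignment of that clause.
        i = rng.choices(range(len(clauses)), weights=weights)[0]
        a = [rng.random() < 0.5 for _ in range(n_vars)]
        for v, val in clauses[i].items():
            a[v] = val
        # Count the sample only if i is the FIRST clause that a
        # satisfies, so each solution contributes exactly once.
        first = next(j for j, c in enumerate(clauses)
                     if all(a[v] == val for v, val in c.items()))
        if first == i:
            hits += 1
    return total * hits / trials
```

For instance, the formula x0 OR x1 over three variables has 6 solutions, and the estimate concentrates around that value as the number of trials grows.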
In my latest work, I tackle related counting problems from the same perspective of ensuring high performance with strong guarantees. The graph-theoretic problem of counting perfect matchings (equivalently, computing the matrix permanent) appears in surprisingly diverse areas such as molecular geometry, partition function computation, and constraint solving, to name a few. The problem is also an important benchmark for testing the limits of classical high-performance computing vis-à-vis quantum computing, and is known to be hard for the former. I designed an exact algorithm for counting perfect matchings that scales far better than existing tools on a wider class of problem instances. Good performance on classical computers is particularly important now, as claims of quantum supremacy start appearing in the literature.
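For readers unfamiliar with the matching/permanent connection: the permanent of the 0/1 biadjacency matrix of a bipartite graph equals its number of perfect matchings. A standard exact method (again, not my algorithm, just a baseline sketch) is Ryser's inclusion-exclusion formula:

```python
from itertools import combinations

def permanent_ryser(A):
    """Exact matrix permanent via Ryser's inclusion-exclusion formula,
    in O(2^n * n^2) time. For a 0/1 biadjacency matrix this equals the
    number of perfect matchings of the bipartite graph. Textbook
    baseline, not the specialized counting algorithm described above.
    """
    n = len(A)
    total = 0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1
            for row in A:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** k * prod
    return (-1) ** n * total
```

The complete bipartite graph K(3,3) (the all-ones 3x3 matrix) has 3! = 6 perfect matchings, which the formula recovers; the exponential cost of this baseline is exactly what specialized counters aim to beat on structured instances.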
In line with the theme of 'Formal Methods for AI', I have also worked on explaining neural networks and other opaque ML models. Explainability has emerged as an important direction for ML research, and with good reason -- there is very little theoretical insight into the overwhelming success of neural networks. A large number of 'explainers' have been proposed in recent years, each with its own notion of what constitutes a good explanation. From a practitioner's perspective, it is not easy to gauge how reliable a purported explanation is. In our work, we leverage model-counting tools to measure how faithful an explanation is to the original model. Our approach affords more flexibility to the end user while providing guarantees of fidelity that are not limited to the dataset at hand.
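The quantity being measured can be illustrated with a brute-force sketch over binary features: fidelity is the fraction of all possible inputs on which the explanation agrees with the model. A model counter computes this without enumeration; the function names here are hypothetical.

```python
from itertools import product

def fidelity(model, explanation, n_features):
    """Fraction of all 2^n boolean inputs on which an explanation
    agrees with the model it explains. Brute-force illustration of
    the quantity a model counter can compute symbolically; both
    arguments map a tuple of bools to a predicted label.
    """
    agree = total = 0
    for x in product([False, True], repeat=n_features):
        agree += model(x) == explanation(x)
        total += 1
    return agree / total
```

For example, explaining the model "x0 AND x1" by the single feature "x0" agrees on 3 of the 4 inputs, a fidelity of 0.75 over the whole input space rather than over a particular dataset.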
Author names are sorted alphabetically, except where marked with *.
Before joining Rice, I worked for a year with Prof. S. Akshay as a Research Assistant at IIT Bombay, India, on model checking of Markov chains. I completed my Master's at IIT Bombay, specializing in Web and Data Mining, advised by Prof. Soumen Chakrabarti.