My research centers on Efficiency and Reliability in NLP, topics of immense practical significance, especially for real-world applications.
Specifically, on the efficiency side, I have worked on inference efficiency (making inference computationally efficient without sacrificing prediction accuracy), training-data efficiency (leveraging procedurally generated data to train competitive models and efficiently crowd-sourcing high-quality data instances), open-domain QA reader efficiency (efficiently leveraging external knowledge to answer open-domain questions), efficient indexing of external knowledge for open-domain QA, and evaluation efficiency (efficiently comparing the performance of competing models).
On the reliability side, I have worked on selective prediction, post-abstention, and clarification questions (ongoing). Selective prediction enables systems to abstain from making predictions when they are likely to be incorrect, thereby improving the systems' reliability. I have published papers on these topics at premier AI and NLP conferences including ACL, EMNLP, EACL, NAACL, AAAI, and AAMAS.
My paper "John is 50 years old, can his son be 65?" Evaluating NLP Models' Understanding of Feasibility has been accepted at EACL 2023.
Crossed 100 citations on Google Scholar.
Received Graduate College Travel Award FY23 Q3 for AAAI from ASU.
AAAI 2023 Student Scholarship.
Presented my EMNLP 2022 papers in-person in Abu Dhabi.
Reviewed for the Question Answering track at EACL 2023.
My paper "Can Open-Domain QA Reader Utilize External Knowledge Efficiently like Humans?" has been accepted to appear at the AAAI'23 Workshop on Knowledge Augmented Methods for NLP.
Received ASU GPSA Travel Award for EMNLP 2022.
I am delighted to share that two of my papers have been accepted to appear at the EMNLP 2022 conference.
Received SCAI Conference Award for EMNLP from ASU.
Received Graduate College Travel Award FY23 Q2 for EMNLP from ASU.
Crossed 50K views on my Medium articles.
Presented Let the Model Decide its Curriculum for Multitask Learning at NAACL 2022 in Seattle.
Received Graduate College Travel Award 2022-23 Q1 from ASU.
My work Let the Model Decide its Curriculum for Multitask Learning has been accepted at DeepLo @NAACL 2022.
My work Benchmarking Generalization via In-Context Instructions on 1,600+ Language Tasks is on arXiv.
Received Spring 2022 ASU GPSA Travel Award.
Received Graduate College Travel Award Q4 from ASU.
My work Towards Improving Selective Prediction Ability of NLP Systems has been accepted at Repl4NLP @ACL 2022.
Received SCAI Conference Award.
My work ILDAE: Instance-Level Difficulty Analysis of Evaluation Data has been accepted at ACL 2022.
My work NumGLUE: A Suite of Fundamental yet Challenging Mathematical Reasoning Tasks has been accepted at ACL 2022.
My work Unsupervised Natural Language Inference Using PHL Triplet Generation has been accepted at ACL 2022.
My work Investigating Selective Prediction Approaches Across Several Tasks in IID, OOD, and Adversarial Settings has been accepted at ACL 2022.
My work An Architecture for Novelty Handling in a Multi-Agent Stochastic Environment: Case Study in Open-World Monopoly has been accepted at Designing Artificial Intelligence for Open Worlds spring symposium @ AAAI 2022.
Mentored class projects for the Natural Language Processing course at ASU.
Started the Computer Science Ph.D. program at Arizona State University.
Joined Microsoft India as a Software Developer.
Completed B.E. in Computer Science with Distinction (GPA: 9.11) at Birla Institute of Technology & Science, Pilani.
Internship at Microsoft.
Internship at Samsung R&D Institute.
Internship at Valuefirst Digital Media.
Started B.E. in Computer Science at Birla Institute of Technology & Science, Pilani.