Located in Boston, USA
Hi! I am currently a researcher at DBMI, Harvard Medical School, working with Pranav Rajpurkar. I'm excited about AI, with a focus on applications in healthcare. Throughout my academic journey, I have gained valuable experience across a range of domains, including computational neuroscience, AutoML, and natural language processing. I am very fortunate to have had the opportunity to work with brilliant researchers, who have inspired me and shaped my attitude towards research.
Previously, I was a research engineer at the amazing HHMI Janelia Research Campus, working with James Fitzgerald and Jan Funke on inferring synaptic plasticity rules using deep learning. I have also worked on Neural Architecture Search in Frank Hutter's AutoML lab in the picturesque town of Freiburg! Before that, I had an amazing time at the Gatsby Computational Neuroscience Unit at UCL, where I worked with Tim Lillicrap (DeepMind) and Peter Latham on evaluating biologically plausible perturbation-based learning algorithms for training deep networks. I did my undergraduate thesis at NTU Singapore with Erik Cambria on using large language models for personality prediction. I quit my job as a software developer at Amazon to dive into the world of research ;)
I thoroughly enjoy coding and working on hard algorithmic problems.
| Date | News |
| --- | --- |
| Oct 23, 2023 | Moved to Boston, joined the Rajpurkar Lab to work on AI + healthcare |
| Jul 20, 2023 | Attended ICML 2023 in Hawaii! ✨🍻 |
| Dec 1, 2022 | Attended NeurIPS 2022 in New Orleans! ✨🎷 |
| Oct 1, 2022 | Visiting student researcher in Larry Abbott's lab at the Zuckerman Institute, Columbia University |
| Sep 14, 2022 | Manuscript on node perturbation learning accepted to NeurIPS! |
| May 27, 2022 | Got married! |
| Apr 25, 2022 | Presented our paper, NAS-Bench-Suite, at ICLR 2022 |
| Jan 10, 2022 | Started at HHMI Janelia Research Campus in the Funke Lab |
- Towards Biologically Plausible Convolutional Networks. In Advances in Neural Information Processing Systems (NeurIPS), 2021
- NAS-Bench-Suite: NAS Evaluation is (Now) Surprisingly Easy. In International Conference on Learning Representations (ICLR), 2022
- Stability and Scalability of Node Perturbation Learning. In Advances in Neural Information Processing Systems (NeurIPS), 2022