azmfaridee/README.md

Hi there 👋

My professional work and research focus on building scalable machine learning models that are robust to domain and category shifts with minimal-to-no extra label information. To that end, I primarily work on deep domain adaptation; unsupervised, self-supervised, adversarial, and disentangled representation learning; and learnable data augmentation techniques, with practical text, audio, and video applications. I'm interested in discovering the optimal transferability of representations between domains, tasks, and modalities, and in solving real-world ML problems with these ideas.

At Amazon, my team develops the end-to-end neural machine translation (NMT) pipeline that powers Amazon's next-generation customer service experience; as an Applied Scientist, I improved the robustness of the NMT models under noisy, out-of-domain inputs using some of the ideas above. In 2021, I also interned with the Audio and Acoustics Research Group at Microsoft Research, where I developed novel deep neural architectures to estimate the performance of various types of deep noise suppression models. I received my Ph.D. in Information Systems from the University of Maryland, Baltimore County under the supervision of Dr. Nirmalya Roy in the Mobile, Pervasive, and Sensor Computing (MPSC) Lab.

Before returning to graduate school, I spent around eight years in industry building distributed, scalable back-ends that served millions of users (and later assembling and leading the teams that built them). Between 2009 and 2013, I was also an active contributor to a few open-source NLP/ML projects through the Google Summer of Code program, both as a participant and later in mentoring roles.

Pinned

  1. codem-smartcomp-2022

    PyTorch Implementation of IEEE SmartComp 2022 paper "CoDEm: Conditional Domain Embeddings for Scalable Human Activity Recognition"

  2. strangan-chase-2021

    PyTorch Implementation of IEEE/ACM CHASE 2021 paper "STranGAN: Adversarially-Learnt Spatial Transformer for Scalable Human Activity Recognition"

    Python

  3. nano-dl-docker

    Code for my blog entry "Deep Learning on Jetson Nano: Streamlining Docker Builds"

    Dockerfile

  4. mpsc-lab-umbc/docker-scripts

    Simple scripts to get started with ML development within a Docker container.

    Jupyter Notebook

  5. mothur

    Forked from mothur/mothur

    This is the GSoC 2012 fork of mothur. We implemented a number of feature-selection algorithms for microbial ecology data and incorporated them into mothur's main codebase.

    C++

  6. apertium-bn-en

    GitHub mirror of Apertium's bn-en language pair. This repository is the combined work done in GSoC09 (student: azmfaridee [GitHub], mentor: ftyers [GitHub]) and GSoC11 (student: ragib06 [GitHub],…

    Python