Check out my past experiences below.
DRAGN
Jan 2022 - Present
Provo, UT
Co-author & Team Lead
“Adnan has consistently exceeded expectations at this position. He is thoughtful, responsible, skilled at his work, and willing to make personal sacrifices to get the job done right. I have been impressed both by how quickly he familiarized himself with the machine learning models used in our laboratory, as well as by his speed of execution on requested tasks.”
- Dr. Nancy Fulda, Director of DRAGN Lab, BYU
Udemy
June 2021 - Aug 2021
San Francisco, CA
Software Engineering Intern
“Adnan is a highly motivated and passionate developer who has a keen insight for product improvements. In addition to handling his day to day work as a developer, Adnan would also suggest possible improvements to the product that he is working on. Adnan asks great questions and has strong communication skills. He has also demonstrated strong leadership skills during his internship by coordinating/driving a tech presentation with the other interns. I can see Adnan emerging as a strong engineer or product manager in the near future.”
- Kevin Zhang, Senior Software Engineer @ Udemy
Human-Centered Machine Intelligence Lab (HCMI)
Jan 2020 - June 2021
Provo, UT
Student Researcher
Sandbox
September 2022
Provo, UT
Product Manager
Neighbor
June 2022 - Aug 2022
Lehi, UT
Software Engineering Intern
[Coming Soon]
[Current]
Pushing AI to its limit
Reconstructed an LSTM model that predicts time-series traffic volumes from large augmented datasets. The model is trained on volume-only, time-augmented, and multivariate-augmented (weather, crashes, road conditions, etc.) datasets spanning 30,000+ rows.
Simulate traffic for the next day, week, or decade
I was part of the BYU Transportation team collaborating on a paper assessing how an LSTM model performs at predicting traffic volumes when trained on datasets of pure traffic volume, time-series traffic volume, and time-series traffic volume augmented with weather data and road conditions. Results compared testing accuracy, performance, and recurrent-prediction performance.
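To make the recurrence concrete, here is a minimal NumPy sketch of a single LSTM cell step applied to a day of multivariate observations. The feature count, hidden size, and random inputs are illustrative assumptions, not the paper's actual architecture or data:

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step: four gates computed from input x and previous state."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b          # stacked pre-activations, shape (4H,)
    i = 1 / (1 + np.exp(-z[:H]))        # input gate
    f = 1 / (1 + np.exp(-z[H:2*H]))     # forget gate
    o = 1 / (1 + np.exp(-z[2*H:3*H]))   # output gate
    g = np.tanh(z[3*H:])                # candidate cell state
    c = f * c_prev + i * g              # new cell state
    h = o * np.tanh(c)                  # new hidden state
    return h, c

rng = np.random.default_rng(0)
n_features, hidden = 5, 8               # e.g. volume + weather/crash/road features
W = rng.normal(0, 0.1, (4 * hidden, n_features))
U = rng.normal(0, 0.1, (4 * hidden, hidden))
b = np.zeros(4 * hidden)

h, c = np.zeros(hidden), np.zeros(hidden)
for x in rng.normal(size=(24, n_features)):  # one day of hourly observations
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)                               # final hidden state, shape (8,)
```

The hidden state carried across time steps is what lets the model condition each prediction on the preceding traffic history.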
98%
Faster training speed: from 180 s/epoch to 3 s/epoch.
40,000+ data points processed in seconds
The initial model took two hours to train, which made research iteration infeasible. I improved the LSTM model's accuracy while reducing training time through multi-GPU training, embedding layers, multi-threaded data loading, batching, and minimized CPU-GPU synchronization.
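Two of those techniques, batching and background-threaded loading, can be sketched with the standard library alone. This is an illustrative toy, not the lab's actual pipeline (which would use a deep learning framework's DataLoader and multi-GPU facilities):

```python
import queue
import threading

def batches(rows, batch_size):
    """Group rows into fixed-size batches so compute sees fewer, larger chunks."""
    for i in range(0, len(rows), batch_size):
        yield rows[i:i + batch_size]

def prefetch(iterable, depth=4):
    """Produce items on a background thread so loading overlaps with training."""
    q = queue.Queue(maxsize=depth)
    DONE = object()

    def worker():
        for item in iterable:
            q.put(item)
        q.put(DONE)

    threading.Thread(target=worker, daemon=True).start()
    while (item := q.get()) is not DONE:
        yield item

rows = list(range(40_000))                    # stand-in for 40,000+ data points
total = sum(sum(b) for b in prefetch(batches(rows, 256)))
print(total == sum(rows))                     # True: every row seen exactly once
```

The bounded queue is the key design choice: the loader stays a few batches ahead of the consumer without buffering the whole dataset in memory.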
A powerful model with an industry-level benchmark
Constructed linear models as a performance baseline, reaching an MSE as low as 60 when trained on 30,000 batches of time-augmented traffic data.
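A linear baseline of this kind can be fit with ordinary least squares. The synthetic features and weights below are placeholders for the time-augmented traffic data, which is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(30_000, 4))             # hypothetical time-augmented features
true_w = np.array([3.0, -1.5, 0.5, 2.0])     # hypothetical ground-truth weights
y = X @ true_w + rng.normal(scale=0.5, size=30_000)  # noisy "traffic volumes"

# Ordinary least squares with an intercept column
A = np.c_[X, np.ones(len(X))]
w, *_ = np.linalg.lstsq(A, y, rcond=None)
mse = np.mean((y - A @ w) ** 2)
print(mse)                                   # close to the noise variance, 0.25
```

On data a linear model can actually fit, the MSE bottoms out near the irreducible noise; the gap between a baseline like this and the LSTM is what justifies the recurrent model.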
Ditch the poor performance
A major pain point was the poor performance experienced by users in East Asia and Africa, Udemy's top regions for user engagement. To address it, the team decided to migrate web pages from ReactJS to NextJS.
Things I did
Framework migration; unit, integration, and render testing; A/B testing; and a CI/CD-verified release into production.
40%
Increase in site performance
16%
Increase in conversion rate
2,000,000+ Better Experiences
Migrated high-traffic web pages with 2M+ daily visits from pure client-side rendering to isomorphic rendering (hybrid CSR and SSR) using NextJS in TypeScript.
A true reminder that we learn from nature
Bees are a marvelous species with a complex communication system. The research was modeled after how bees communicate while hunting for a new nesting site. The hive splits into roles, such as explorers and observers, that are dynamically reassigned according to supply and demand. When a subset of bees returns to the hive after exploring a site, they dance: the more vigorous the dance, the higher the quality of the site, and the more bees follow to assess it. This continues back and forth until the bees form a "pipeline" through which they begin migrating.

A communication model inspired by bees, ants, or termites can be applied to military operations, inter-drone communication, and search-and-rescue missions.
Conducted Human-Swarm interaction experiments
Conducted user studies to analyze communication among the swarm and deduce the most effective level of human control. The results and analysis were later presented at the Student Research Conference at Brigham Young University.
50,000 bee agents simulated at 30 frames per second
Enhanced the simulation to handle up to 50,000 bee agents that constantly change, send, and receive status according to a state diagram and environmental evaluation, while maintaining 30 FPS rendering performance.
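The role-switching behavior can be sketched as a small per-agent state machine. The states and transition rules below are a simplified illustration, not the lab's actual state diagram or probabilities:

```python
import random

STATES = ("observer", "explorer", "dancer")

def step(state, site_quality, rng):
    """Illustrative transitions: observers sometimes scout; explorers who find
    a good site dance to recruit; dancers return to observing."""
    if state == "observer":
        return "explorer" if rng.random() < 0.1 else "observer"
    if state == "explorer":
        return "dancer" if site_quality > 0.7 else "observer"
    return "observer"  # a dancer finishes its dance

rng = random.Random(0)
swarm = ["observer"] * 50_000
for _ in range(10):                              # 10 simulation ticks
    swarm = [step(s, rng.random(), rng) for s in swarm]
print({s: swarm.count(s) for s in STATES})       # role distribution after 10 ticks
```

Because each agent's update depends only on its own state and a local environmental reading, the update loop parallelizes naturally, which is what makes tens of thousands of agents per frame feasible.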