My Portfolio

I am a versatile self-starter experienced in programming with Python, C++, and R, with a background in machine learning, statistical modeling, and signal processing, and with engineering skills in embedded systems and circuit design.

My role as a graduate student in neuroscience revolved around computational modeling of hearing phenomena: analyzing large volumes of brain signals with parallelized implementations of machine learning methods (such as K-SVD), and statistically modeling human sound localization data with nonlinear mixed-effects (NLME) models. Meanwhile, I enjoyed assisting Prof. Antje Ihlefeld in setting up an entire animal research lab (from designing hardware to writing software) and mentoring undergraduate students towards obtaining research grants.

Education

Ph.D. in Biomedical Engineering
Sep 2015 - Aug 2021
Received the 2020 NCE Outstanding Graduate Student Award
Focus: Auditory neuroscience, data science, signal processing of brain waves, statistical modeling
Dissertation: “Towards Understanding the Role of Central Processing in Release from Masking”
Advisor: Prof. Antje Ihlefeld
Program offered by MIT on the edX platform
Focus: Data analysis, probability, statistics, and machine learning
B.S. in Electrical Engineering
Sep 2009 - Aug 2013
Isfahan, Iran
Summa cum laude: Ranked 1st among all Electrical Engineering undergraduate students
Focus: Microchip design, multi-objective optimization, genetic algorithms, parallelized simulation
Thesis: “Design of a Safe Low-Power Neural Micro-Stimulator with Gaussian Waveform”

Experience

Data Scientist
Nov 2021 - Current
Research & Development, Spotify Inc.
New York, NY
Data Science Intern
Jun 2021 - Aug 2021
Research & Development, Spotify Inc.
New York, NY

Exploratory analysis of multidimensional user data and A/B testing of machine learning models in production.

Graduate Research Assistant
Sep 2015 - Aug 2021

Using nonlinear mixed-effects (NLME) modeling of human sound localization data, we uncovered a level dependence of perceived sound direction and proposed a unified computational model, overhauling the established view in the field. Further mining of the data revealed additional effects of short- and long-term adaptation when localizing with interaural time differences. Analysis in R with the nlme package; a simplified sketch follows below. Link Code
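
To give a flavor of the approach, here is a minimal Python sketch of the underlying idea: fit a saturating lateralization curve to each listener and summarize the per-listener parameters. This is a simplified two-stage stand-in for a full NLME fit, with a hypothetical tanh curve and simulated data; the actual analysis used R's nlme package and a different model.

```python
# Two-stage stand-in for an NLME fit (assumptions: simulated data and a
# tanh lateralization curve; illustrative only, not the published model).
import numpy as np
from scipy.optimize import curve_fit

def lateralization(itd_us, gain, slope):
    # Perceived laterality as a saturating function of ITD (hypothetical form).
    return gain * np.tanh(slope * itd_us)

rng = np.random.default_rng(0)
itd = np.linspace(-375, 375, 11)    # microsecond ITDs, as in the experiments
per_listener = []
for _ in range(12):                 # per-listener fits stand in for random effects
    true_gain = 1.0 + 0.1 * rng.standard_normal()
    y = lateralization(itd, true_gain, 0.004) + 0.05 * rng.standard_normal(itd.size)
    popt, _ = curve_fit(lateralization, itd, y, p0=(1.0, 0.003))
    per_listener.append(popt)

gains, slopes = np.array(per_listener).T
print(f"group-level gain:  {gains.mean():.3f} +/- {gains.std(ddof=1):.3f}")
print(f"group-level slope: {slopes.mean():.5f} +/- {slopes.std(ddof=1):.5f}")
```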

We discovered a reversal in the coding scheme of target sounds in the presence of background interference (the cocktail party problem) at negative signal-to-noise ratios, using a combination of signal processing, pattern recognition, and unsupervised and supervised learning methods to process the electrical activity of neurons. Code

Electrophysiology Auditory Recording System (EARS) project: I wrote an open-source package in Python and GLSL (OpenGL Shading Language) for online processing, visualization, and recording of brain signals, synchronized to within microseconds with the behavior of animals implanted with microelectrodes. Code
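
As an illustration of one building block such a streaming display needs (not EARS code itself), a fixed-size ring buffer can hold the most recent window of multichannel samples for plotting:

```python
# Minimal ring buffer for continuous multichannel signals (illustrative
# sketch; channel count and sampling rate below are placeholder values).
import numpy as np

class RingBuffer:
    """Fixed-size circular buffer holding the newest n_samples per channel."""
    def __init__(self, n_channels, n_samples):
        self.data = np.zeros((n_channels, n_samples), dtype=np.float32)
        self.n = n_samples
        self.head = 0   # index just past the newest sample

    def push(self, chunk):
        """Append a (n_channels, k) chunk, overwriting the oldest samples."""
        k = chunk.shape[1]
        idx = (self.head + np.arange(k)) % self.n
        self.data[:, idx] = chunk
        self.head = (self.head + k) % self.n

    def latest(self, k):
        """Return the most recent k samples in chronological order."""
        idx = (self.head - k + np.arange(k)) % self.n
        return self.data[:, idx]

buf = RingBuffer(n_channels=16, n_samples=30_000)   # ~1 s at 30 kHz
buf.push(np.random.randn(16, 1024).astype(np.float32))
print(buf.latest(256).shape)   # (16, 256)
```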

Mentored a team of six undergraduate students towards obtaining research grants and completing multiple projects, ranging from neural signal processing and animal behavioral training to human psychoacoustics.

Assisted in setting up an animal research lab from the ground up under the supervision of Prof. Antje Ihlefeld.

Adjunct Instructor
May 2020 - Jul 2020

Taught BME 301: Electrical Fundamentals of Biomedical Engineering.

To provide an optimal learning experience during the COVID-19 pandemic, Dr. Ihlefeld and I designed a remote laboratory curriculum that let students experiment with electrical circuits at home using commodity equipment. Link

Graduate Research Assistant
Jan 2016 - Jun 2016

Custom implementation of K-SVD unsupervised dictionary learning (inspired by the image processing literature), combined with matching pursuit and Gabor atom decomposition, for the analysis of brain waves and unsupervised recognition of sparse events. Initially written in MATLAB, then reimplemented in C++ with CUDA GPU acceleration.
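
A minimal NumPy sketch of the core K-SVD loop, assuming 1-sparse matching pursuit for the sparse-coding stage and random data in place of Gabor-decomposed brain waves; the original MATLAB/C++/CUDA implementation was considerably more elaborate:

```python
# Simplified K-SVD: alternate 1-sparse matching pursuit with rank-1 SVD
# atom updates (illustrative sketch on synthetic data).
import numpy as np

def ksvd(Y, n_atoms, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)                  # unit-norm atoms
    for _ in range(n_iter):
        # Sparse coding: pick the best-correlated atom per signal.
        corr = D.T @ Y
        atoms = np.abs(corr).argmax(axis=0)
        X = np.zeros((n_atoms, Y.shape[1]))
        cols = np.arange(Y.shape[1])
        X[atoms, cols] = corr[atoms, cols]
        # Dictionary update: refit each atom against its own residual.
        for k in range(n_atoms):
            used = np.flatnonzero(X[k])
            if used.size == 0:
                continue
            E = Y[:, used] - D @ X[:, used] + np.outer(D[:, k], X[k, used])
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, k] = U[:, 0]
            X[k, used] = s[0] * Vt[0]
    return D, X

Y = np.random.default_rng(1).standard_normal((64, 500))
D, X = ksvd(Y, n_atoms=32)
print(np.linalg.norm(Y - D @ X) / np.linalg.norm(Y))   # relative residual
```

The rank-1 SVD update is what distinguishes K-SVD from plain alternating minimization: each atom and its nonzero coefficients are refit jointly against the residual of only the signals that use that atom.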

Software Engineer
Jul 2013 - May 2014
Research & Development, Novin Medical Engineering Co.
Isfahan, Iran

Firmware development for embedded systems.

Designed a modern ARM-based platform running Windows Embedded, along with a software framework and a touch graphical user interface written in C#, for the next generation of Novin devices.

Wrote low-level drivers in C++ that enable real-time interaction between the software framework and the hardware peripherals of stimulator, interferential, laser, and magnet therapy devices.

Software Engineer Intern
May 2013 - Jun 2013
Research & Development, Novin Medical Engineering Co.
Isfahan, Iran

Implemented high-speed transfer of real-time data from an ARM microcontroller to another microcontroller or a PC via the bulk transfer protocol of Universal Serial Bus (USB), written in C++.
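
For illustration, the host side of such a stream can be drained with PyUSB as sketched below; the original host code was C++, and the vendor/product IDs and endpoint address here are placeholders, not the real device's values:

```python
# Host-side sketch of reading a USB bulk IN endpoint with PyUSB
# (illustrative only; VID/PID and endpoint address are placeholders).
import usb.core

VID, PID = 0x1234, 0x5678   # placeholder vendor/product IDs
EP_IN = 0x81                # placeholder bulk IN endpoint address

dev = usb.core.find(idVendor=VID, idProduct=PID)
if dev is None:
    raise SystemExit("device not found")
dev.set_configuration()

# Bulk transfers trade latency guarantees for reliable, flow-controlled,
# high-throughput delivery -- a good match for streaming measurement data.
for _ in range(10):
    chunk = dev.read(EP_IN, 4096, timeout=1000)
    print(f"received {len(chunk)} bytes")
```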

Cofounder & Lead Engineer
Jul 2012 - Mar 2014
Mahoor Engineering Co. (Startup)
Isfahan Sci. & Tech. Town, Iran

Software development and electronic circuit design.

Proposed and partially implemented a two-armed robotic system to automate the steel processing laboratory of Mobarakeh Steel Company (Isfahan, Iran). Data from a PrimeSense depth camera mounted on the first robotic arm, combined with inverse kinematics, was used to scan and construct a 3D point cloud of steel samples and the lab environment. This information was then used to automate steel sample preparation via rule-based artificial intelligence implemented on the second arm.
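
As a toy illustration of the inverse-kinematics step (the real arms had more degrees of freedom and a full kinematic chain), the closed-form solution for a two-link planar arm looks like this:

```python
# Closed-form inverse kinematics for a two-link planar arm (toy example;
# link lengths and target point below are arbitrary illustration values).
import numpy as np

def ik_two_link(x, y, l1, l2):
    """Joint angles reaching (x, y) for link lengths l1, l2 (elbow-down)."""
    d2 = x * x + y * y
    cos_q2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= cos_q2 <= 1.0:
        raise ValueError("target out of reach")
    q2 = np.arccos(cos_q2)                                   # elbow angle
    q1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(q2),
                                       l1 + l2 * np.cos(q2))  # shoulder angle
    return q1, q2

q1, q2 = ik_two_link(0.5, 0.3, l1=0.4, l2=0.3)
print(np.degrees([q1, q2]))
```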

Publications

Hemodynamic Responses Link Individual Differences in Informational Masking to the Vicinity of Superior Temporal Gyrus

M Zhang, N Alamatsaz, A Ihlefeld
Frontiers in Neuroscience, 2021

Suppressing unwanted background sound is crucial for aural communication. A particularly disruptive type of background sound, informational masking (IM), often interferes in social settings. However, IM mechanisms are incompletely understood. At present, IM is identified operationally: when a target should be audible, based on suprathreshold target/masker energy ratios, yet cannot be heard because target-like background sound interferes. We here confirm that speech identification thresholds differ dramatically between low- vs. high-IM background sound. However, speech detection thresholds are comparable across the two conditions. Moreover, functional near infrared spectroscopy recordings show that task-evoked blood oxygenation changes near the superior temporal gyrus (STG) covary with behavioral speech detection performance for high-IM but not low-IM background sound, suggesting that the STG is part of an IM-dependent network. Moreover, listeners who are more vulnerable to IM show increased hemodynamic recruitment near STG, an effect that cannot be explained based on differences in task difficulty across low- vs. high-IM. In contrast, task-evoked responses near another auditory region of cortex, the caudal inferior frontal sulcus (cIFS), do not predict behavioral sensitivity, suggesting that the cIFS belongs to an IM-independent network. Results are consistent with the idea that cortical gating shapes individual vulnerability to IM.

Teaching Electronic Circuit Fundamentals via Remote Laboratory Curriculum

N Alamatsaz, A Ihlefeld
Biomedical Engineering Education, pp. 1-4, 2020

The course “Electrical Fundamentals” (EF) is a core requirement for all undergraduate students in our biomedical engineering program. The curriculum introduces general principles of device development for electronics-based bioinstrumentation, comprehensively covering foundational acquisition concepts for bioelectric signals. Laboratory-based learning modules provide hands-on experience with circuit fundamentals. We here introduce our six-hour lab curriculum for distance learning, which we developed in response to the COVID-19 outbreak.

Signal-to-Noise Ratio Shapes Dip Listening in Auditory Cortex

N Alamatsaz, A Ihlefeld
The Association for Research in Otolaryngology (ARO), vol. 43, p. 204, 2020

Background: When background noise fluctuates slowly over time, both humans and animals can listen in the dips of the noise envelope to detect target sound. Detection of target sound is facilitated by a central neuronal mechanism called envelope locking suppression. At both positive and negative signal-to-noise ratios (SNRs), the presence of target energy can suppress the strength by which neurons in auditory cortex track background sound, at least in anesthetized animals. However, in humans and animals, most of the perceptual advantage gained by listening in the dips of fluctuating noise emerges when a target is softer than the background sound. This raises the possibility that different mechanisms may underlie suppression of background sound at positive vs negative SNRs, a hypothesis tested here in awake behaving animals.
Methods: Normal-hearing Mongolian gerbils were placed on controlled water access. Using a Go/NoGo appetitive procedure, animals were trained to detect a target tone in the presence of modulated background sound at super-threshold SNRs of -10, 0 and +10 dB. At these SNRs, animals were able to detect the target sound with comparable behavioral sensitivity (d’=2). To record multiunit activity of neurons in core auditory cortex, two trained gerbils were then surgically implanted with chronic microelectrode probes. In a total of 9 recording sessions, neural responses at +10, 0 and -10 dB SNR were collected while animals were actively trying to hear out the target from the acoustic mixture vs listen passively to the same sounds.
Results: A total of 25 single- and 8 multi-units showed a target-evoked response in their firing rate. Pooling across NoGo trials during active and passive listening, these target-responsive units were then further classified as tonic or phasic, based on whether they significantly phase-locked to the modulation rate of the masker. In both types of units, sensory information at the population level could predict behavioral sensitivity. For phasic units at 0 and 10 dB SNR, average firing rates decreased in Go vs NoGo trials, consistent with envelope locking suppression. However, at -10 dB SNR, firing rate increased during Go trials as compared to NoGo, for both tonic and phasic units, a phenomenon not predicted by envelope locking suppression.
Conclusions: Preliminary neurophysiological data hint that the neuronal mechanisms that enable an individual to listen in the temporary dips of background sound may differ across positive vs negative SNRs.

Population Rate-Coding Predicts Correctly that Human Sound Localization Depends on Sound Intensity

A Ihlefeld, N Alamatsaz, RM Shapley
eLife, vol. 8, e47027, 2019

Human sound localization is an important computation performed by the brain. Models of sound localization commonly assume that sound lateralization from interaural time differences is level invariant. Here we observe that two prevalent theories of sound localization make opposing predictions. The labelled-line model encodes location through tuned representations of spatial location and predicts that perceived direction is level invariant. In contrast, the hemispheric-difference model encodes location through spike-rate and predicts that perceived direction becomes medially biased at low sound levels. Here, behavioral experiments find that softer sounds are perceived closer to midline than louder sounds, favoring rate-coding models of human sound localization. Analogously, visual depth perception, which is based on interocular disparity, depends on the contrast of the target. The similar results in hearing and vision suggest that the brain may use a canonical computation of location: encoding perceived location through population spike rate relative to baseline.

Leveraging Adaptation to Study Perceptual Weighting of Interaural Time Differences

N Alamatsaz, A Ihlefeld
International Congress on Acoustics (ICA), vol. 23, no. 1, pp. 8253-8256, 2019
PDF

An important question in auditory cognition is how we perceive the location of an object in space. Converging evidence from animal models and humans suggests that when judging sound direction, the central nervous system weighs the anticipated reliability of binaural cues. Here, we used short-term adaptation to bias normal-hearing listeners towards source direction favoring either the left or the right frontal quadrant. Listeners rated perceived laterality of tokens of band-pass filtered noise (300 Hz-1200 Hz) with interaural time differences that were randomly selected from a uniform distribution spanning either -375 to 0 µs or 0 to 375 µs. Using non-linear mixed effects modeling of behavioral laterality reports, we tested how exposure to source quadrant affects how listeners weigh the reliability of interaural time differences. The cue reliability hypothesis predicts that perceived direction should be skewed, such that unreliable frontal source angles are more affected by short-term adaptation than the more reliable lateral source angles. Alternatively, short-term adaptation may affect all source angles equally, predicting an overall shift in perceived direction. Results show that frontal angles are more strongly affected by short-term adaptation than lateral angles, supporting the cue reliability hypothesis.

Circling Back on Theories of Sound Localization

A Ihlefeld, N Alamatsaz, RM Shapley
The Journal of the Acoustical Society of America (ASA), vol. 145, no. 3, p. 1759, 2019

An important question of human perception is how we localize target objects in space. Through our eyes and skin, activation patterns on the sensory epithelium suffice to cue us about a target’s location. However, for our ears, the brain has to compute where a sound source is located. One important cue for computing sound direction is the time difference in arrival of acoustic energy reaching each ear, the interaural time difference (ITD). With behavioral experiments on sound lateralization as a function of sound intensity, we tested how the computation of sound location with ITDs is done. We tested twelve naïve normal-hearing listeners (ages 18–27, five females). Stimuli consisted of low-frequency noise tokens that were bandlimited from 300 to 1200 Hz, from 5 to 25 dB sensation level. Without response feedback, listeners were initially trained to reliably judge the direction of a sound source and then tested on where they heard the sound. We found that softer sounds tend to be localized closer to midline as compared to louder sounds. This finding raises doubt on one major theory of sound localization, the labeled-line theory, and supports another main contender, population rate-based coding.

Interaural Time Differences: Lateralization Adapts to Stimulus Space

N Alamatsaz, A Ihlefeld
The Association for Research in Otolaryngology (ARO), vol. 42, p. 411, 2019
PDF

A wide range of species relies on sound localization for both navigation and auditory scene analysis. For low-frequency sound, interaural time differences (ITDs) are the dominant cue for determining the direction of a source in the horizontal plane, a phenomenon called sound lateralization. The mechanisms by which the nervous system maps ITD into perceived sound direction are incompletely understood.
In anechoic spaces, a given source angle in the horizontal plane typically gives rise to the same ITD across a wide range of source distances. However, in everyday environments where background sound and reverberant energy are often present, ITDs are much less reliable indicators of source direction. This raises the possibility that when estimating source direction, a listener’s interpretation of ITD changes depending on the context of the listening environment. Indeed, previous work shows plasticity in perceived sound direction across a wide range of conditions, including ear plugging, modifications to the shape of the pinnae, prolonged exposure to constant interaural delays, the presence of preceding distractors, long-term procedural learning, and short-term stimulus history. However, we have an incomplete understanding of short-term adaptation of sound lateralization based on the overall range of ITDs.
Here, we randomly assigned 34 naïve normally-hearing listeners to 4 groups. Using a target-pointer matching task without visual feedback, listeners of three groups were trained on ITD lateralization for 3 sessions. A fourth naïve control group was tested without training. All 4 groups of listeners then performed a lateralization task which asked them to identify and plot the internal image of low-frequency noise tokens. For both training and testing, one trained group (RIGHT HEMI) was only presented with positive ITDs (right hemisphere), the second trained group (LEFT HEMI) only with negative ITDs (left hemisphere), and the third trained group (BOTH HEMIS) with the full range of bilateral ITDs (both hemispheres). The naïve group was tested with positive ITDs only (NAÏVE RIGHT HEMI). Listener responses across different groups and stimulus conditions were analyzed with a Nonlinear Mixed-Effects Model (NLME). Results show robust response expansion in all unilaterally tested listeners towards the contralateral side, with a larger effect for the trained groups versus the naïve. In contrast, bilaterally trained listeners did not display any response bias. Together, these results show that perceived direction adapts rapidly to stimulus space. Results will be discussed in the context of assessing spatial perception in patient groups with impaired binaural cues, including bilateral cochlear implant users and cochlear implant users with single-sided deafness.

The Role of Central Processing in Modulation Masking Release

N Alamatsaz, A Ihlefeld
The Journal of the Acoustical Society of America (ASA), vol. 144, no. 3, p. 1900, 2018

When background sound is present, hearing impaired (HI) individuals and cochlear-implant (CI) listeners typically are worse at hearing out target sound as compared to normal-hearing (NH) listeners. This perceptual deficit occurs both when the background consists of noise that fluctuates over time (“modulated”) and for stationary background noise (“unmodulated”). In addition, the difference in thresholds between tone detection in modulated and unmodulated noise, referred to as modulation masking release (MMR), is much reduced or absent in HI and CI as compared to NH. Both peripheral and central processing mechanisms contribute to MMR. We previously showed that central MMR is reduced in human CI listeners, and that sound deprivation reduces central MMR in Mongolian gerbils. Here, we began to explore the neurobiological basis of central MMR. NH gerbils were trained to hear out target tones (1 kHz) in modulated (10-Hz rectangularly gated) versus unmodulated bandlimited background noise, and chronically implanted with recording electrodes in core auditory cortex. Neural discharge was analyzed as a function of the broadband energy ratio between target and background sound to determine how different types of background sound affect neural information transmission in awake behaving gerbil. Preliminary results will be discussed in the context of how hearing loss may affect central MMR.

Traffic Volume Reduction in Smart Grid Networks by a Cooperative Intelligent Interpolation Technique

A Boustani, N Alamatsaz, N Alamatsaz, A Boustani
15th IEEE Annual Consumer Communications & Networking Conference (CCNC), pp. 1-7, 2018

Leveraging a modern communication network, the power industry is moving towards the next generation power grid, the smart grid. This new communication-based power grid is expected to change the way electricity is generated, distributed, and transmitted to the consumers by enhancing the reliability, efficiency, sustainability, and economics of the grid. However, due to the high volume, high granularity, and frequency of the data generated by smart electricity meters, careful planning and management of this communication network is essential. Given the potential large-scale future deployment of the smart grid, power companies face possible network capacity limitations. Therefore, efficient utilization of the Smart Grid Network (SGN) should be studied. In this paper, we introduce a smart interpolation scheme for reducing the volume of information transmitted in a smart grid backhaul network without any precision reduction or loss of benefit. Utilizing concepts of Spread Spectrum Communications, smart nodes at utility control centers are able to intelligently infer omitted data and interpolate the original message. By means of extensive evaluations, we show that our scheme significantly improves network utilization and decreases volume of the traffic in a smart grid network.

The Effect of Sound Intensity on Lateralization with Interaural Time Differences

N Alamatsaz, RM Shapley, A Ihlefeld
The Journal of the Acoustical Society of America (ASA), vol. 141, no. 5, p. 3639, 2017

Previous studies examining the effect of sound intensity on ITD lateralization disagree on whether ITD lateralization changes with increasing sound level. We tested how sound intensity affects lateralization in three experiments. In all experiments, normal-hearing listeners judged the lateralization of band-limited target noise tokens (300 to 1200 Hz, 1 s duration, 10-ms cos-squared ramp, presented with insert earphones). For each ear and target noise, sensation level (SL) was estimated using two-down one-up adaptive tracking. Each target stimulus contained an ITD of 0, 75, 150, 225, 300, or 375 µs and was presented at 10, 25, or 40 dB SL. In experiment 1, listeners matched the ITD of a variable-ITD pointer (25 dB SL, 300-1200 Hz, 1 s duration, 10-ms cos-squared ramp) to each of the target tokens. In experiment 2, in each two-interval trial of a 2-AFC paradigm, the standard stimulus consisted of the same noise token as in experiment 1 and the signal stimulus had a randomly chosen ITD of +/− 0, 25, 50 or 75 µs relative to the target ITD. Listeners reported whether the sound moved to the left or to the right, and thresholds were estimated at the “50%-right” point. In experiment 3, listeners indicated the perceived laterality by visually pointing on a graphical user interface. Preliminary data suggest that sound level affects lateralization, but that individual differences require testing of a greater number of listeners than have historically been assessed.

Towards Efficient Privacy-Preserving Data Aggregation for Advanced Metering Infrastructure

N Alamatsaz, A Boustani, N Alamatsaz, A Boustani
The International Journal of Computer Networks & Communications (IJCNC), vol. 9, no. 5, 2017

Recent changes to the existing power grid are expected to influence the way energy is provided and consumed by customers. Advanced Metering Infrastructure (AMI) is a tool to incorporate these changes for modernizing the electricity grid. Growing energy needs are forcing government agencies and utility companies to move towards AMI systems as part of larger smart grid initiatives. The smart grid promises to enable a more reliable, sustainable, and efficient power grid by taking advantage of information and communication technologies. However, this information-based power grid can reveal sensitive private information from the user’s perspective due to its ability to gather highly-granular power consumption data. This has resulted in limited consumer acceptance and proliferation of the smart grid. Hence, it is crucial to design a mechanism to prevent the leakage of such sensitive consumer usage information in smart grid. Among different solutions for preserving consumer privacy in Smart Grid Networks (SGN), private data aggregation techniques have received a tremendous focus from security researchers. Existing privacy-preserving aggregation mechanisms in SGNs utilize cryptographic techniques, specifically homomorphic properties of public-key cryptosystems. Such homomorphic approaches are bandwidth-intensive (due to large output blocks they generate), and in most cases, are computationally complex. In this paper, we present a novel and efficient CDMA-based approach to achieve privacy-preserving aggregation in SGNs by utilizing random perturbation of power consumption data and with limited use of traditional cryptography. We evaluate and validate the efficiency and performance of our proposed privacy-preserving data aggregation scheme through extensive statistical analyses and simulations.

Neural Correlates of Modulation Masking Release: The Role of Sound Deprivation

A Ihlefeld, M Ning, SS Chaubal, N Alamatsaz
The Journal of the Acoustical Society of America (ASA), vol. 141, no. 5, p. 3894, 2017

It is well documented that for tone detection in background noise, normally-hearing (NH) listeners have better behavioral thresholds when that noise is temporally modulated as compared to temporally unmodulated, a perceptual phenomenon referred to as Modulation Masking Release (MMR). However, hearing impaired listeners often do not show a dramatic difference in performance across these two tasks. Behavioral evidence from Mongolian gerbils (Meriones unguiculatus) with conductive hearing loss (CHL) supports the idea that sound deprivation alone can reduce MMR. Here, MMR was assessed in core auditory cortex in three NH animals, and one animal with CHL. Trained, awake gerbils listened passively to a target tone (1 kHz) embedded in modulated or unmodulated noise while a 16-channel chronically implanted microelectrode array recorded multi-unit neural spike activity in core auditory cortex. Results reveal that rate code correlates with behavioral thresholds at positive, but not negative Signal-to-Noise ratios. Effect of sound deprivation on MMR will be discussed using a Wilson-Cowan neural network model of cortical function.

Towards Improved Safety, Selectivity and Energy Efficiency of Retinal Stimulation

N Alamatsaz, S Moradi, P Moallem
Iranian Conference on Biomedical Engineering (ICBME), vol. 21, 2014
PDF