
    Individualization of Head Related Impulse Responses Using Division Analysis

    China Communications, 2018, Issue 5

    Wei Chen, Ruimin Hu*, Xiaochen Wang, Cheng Yang, Lian Meng

    1 National Engineering Research Center for Multimedia Software, School of Computer Science, Wuhan University, Wuhan 430072, China

    2 Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan University, Wuhan 430072, China

    3 Collaborative Innovation Center of Geospatial Technology, Wuhan 430079, China

    4 School of Physics and Electronic Science, Guizhou Normal University, Guiyang 550001, China

    I. INTRODUCTION

    The head-related impulse response (HRIR) captures the impulse response relating a sound source location to the ear location; it describes the change in sound pressure from a point in the free field to the listener's eardrum [1]. The corresponding frequency-domain representation of the HRIR is called the head-related transfer function (HRTF).

    A monaural sound can be rendered at an arbitrary source position by convolving it with an HRIR. Hence, HRIRs are generally used to generate virtual three-dimensional (3D) auditory displays. However, HRIRs vary from subject to subject due to differences in anthropometric features [2]. Ideal HRIR information can be obtained by measurement, but, unfortunately, the measurements are time-consuming and require expensive equipment [3]. Virtual 3D auditory display with non-individualized HRIRs may cause errors in sound localization such as front-back reversals, up-down confusions, and inside-the-head localization [2,4-6]. Therefore, research on HRIR customization is of great significance.

    Several methods for HRIR or HRTF individualization have been proposed, such as HRTF database matching [7-9], HRTF frequency scaling [6,10], the boundary element method [11-14], HRTF customization through subjective tuning [15-17], and mathematical models based on regression analysis [18-21].

    One of the most promising approaches to HRTF individualization is based on building structural models [22-25], which aim to establish mapping relations between anthropometric parameters and the HRTF or HRIR. Although the interaural time difference (ITD) and interaural level difference (ILD) are known to be the primary localization cues in the horizontal plane [26-28], there are additional cues arising from the various shadowing and diffraction effects caused by the head, pinna, and torso [29,30].

    Recently, several novel methods aimed at establishing relationships between the HRTF and anthropometric features have been proposed. Iida and Ishii et al. proposed a method for HRTF individualization that estimates the frequencies of the two lowest spectral notches from the anthropometry of the pinna [31]. Bilinski, Ahrens et al. synthesized the HRTF magnitude response from a sparse representation of anthropometric features [32]. Grijalva et al. estimated low-dimensional HRTF representations from anthropometric features and then reconstructed the HRTFs using a neighborhood-based reconstruction approach [33].

    However, the effect of each anthropometric measurement on the HRIR is not yet conclusively established. From a physical point of view, objects of different sizes affect sound wave reflection and diffraction differently. Thus, different anatomical parts may contribute differently to the temporal and spectral features of the HRIR and HRTF.

    In this paper, we study the relationship between anthropometric measurements and different parts of the HRIR by dividing the HRIRs into several segments and performing regression analysis between each segment and the anthropometric measurements. Furthermore, we propose an effective method for generating individual HRIRs from anthropometric measurements.

    In our method, the HRIR data is preprocessed to remove the time delay before the first pulse arrives at the subject's eardrum. We then reserve 128 samples (about 3 ms) from each HRIR for further analysis, which contain the effects of pinna, head, and torso [34]. Each truncated HRIR is divided into 3 segments and regrouped according to direction and segment. The grouped HRIRs are then analyzed using multiple linear regression (MLR) to obtain the mapping relationship between the HRIRs and the anthropometric measurements. Finally, we can estimate an individual HRIR from the subject's anthropometric measurements and the individual coefficients obtained from the mapping relationship.

    The proposed approach is presented in section II, and the objective simulation and subjective test are described in section III. The last section presents the conclusions of our work and remaining issues.


    II. PROPOSED APPROACH

    The outline of the individualization process is shown in figure 1. The rest of this section discusses the key steps of the HRIR individualization process.

    2.1 HRTF Database

    Fig. 1. Process of Individualization.

    The label S0 in figure 1 represents the HRTF database to be analyzed, which is provided by the CIPIC Interface Laboratory of the University of California, Davis [35,36]. Release 1.2 of the CIPIC HRTF database involves 45 subjects: 43 human subjects and 2 KEMAR mannequins, with small pinnae and large pinnae respectively. We chose 37 of the 45 subjects, because some measurements for the remaining 8 subjects are unavailable. Each subject's data contains 1250 HRIRs per ear, and each HRIR is 200 samples long. We therefore extracted a total of 92,500 original HRIRs from the 37 subjects, marked S1 in figure 1. We also extracted the anthropometric measurements of the 37 subjects from the HRTF database; the selection process for the anthropometric measurements is explained in subsection 2.3.

    2.2 HRIR preprocessing

    Although each original HRIR has 200 samples, many of them are approximately zero. Figure 2 shows the left ear's HRIR amplitudes of the KEMAR with small pinnae (subject_165) from CIPIC's database. On the left panel, all the horizontal HRIRs are superimposed in one image; on the right panel, only the left-ear HRIR at azimuth -80° is shown. The zero values before the first pulse relate to the measurement distance and the ITD, and the approximately zero values at the end of the waveform are caused by the attenuation of the signal.

    In our model, we removed the time delay and reserved 128 samples (about 3 ms) of each HRIR, starting from the first pulse's arrival at the eardrum. Our process for determining the main direct pulse is as follows:

    Fig. 2. HRIR amplitudes of KEMAR with small pinnae.

    1) Locate the maximum positive amplitude A_{i,θ,φ} of the i-th subject's HRIR at azimuth θ and elevation φ.

    2) Locate the HRIR's first direct pulse peak. We set a threshold of 0.3 × A_{i,θ,φ} and take the first peak whose amplitude exceeds it as the first direct pulse peak. Although CIPIC provides the exact arrival time of each impulse response in the database, from which the direct pulse peak could be calculated more accurately, we prefer to find the first peak with a generic approach. We examined all the horizontal-plane HRIRs extracted from CIPIC's database and chose 0.3 as a reasonable compromise between accuracy and universality.

    3) Search backwards from the first direct pulse peak until the HRIR amplitude increases; that sample is labelled as the direct pulse's first sampling point.
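    As an illustrative sketch (not the authors' code; the helper name and toy HRIR are ours), the three steps above might be implemented as:

    ```python
    import numpy as np

    def find_direct_pulse_start(hrir, threshold_ratio=0.3):
        """Return the index of the direct pulse's first sampling point.

        1) locate the maximum positive amplitude A,
        2) locate the first local peak whose amplitude exceeds threshold_ratio * A,
        3) walk backwards from that peak while the amplitude keeps decreasing.
        """
        A = hrir.max()                                   # step 1
        thresh = threshold_ratio * A
        peak = None
        for n in range(1, len(hrir) - 1):                # step 2
            if hrir[n] >= thresh and hrir[n - 1] <= hrir[n] >= hrir[n + 1]:
                peak = n
                break
        start = peak                                     # step 3
        while start > 0 and hrir[start - 1] < hrir[start]:
            start -= 1
        return start

    # toy HRIR: leading zeros (delay), then a single direct pulse
    h = np.array([0.0, 0.0, 0.0, 0.1, 0.5, 1.0, 0.4, 0.1, 0.0])
    start = find_direct_pulse_start(h)
    print(start)
    ```

    On this toy signal, the backwards walk stops where the leading zeros end, which is exactly the delay that the preprocessing removes.
    
    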

    In order to confirm the validity of the truncated HRIR, we calculated the energy proportion between the original HRIR and the truncated HRIR. The proportion P is calculated as follows:

    P = ( Σ_{n=1}^{128} ĥ_{i,θ,φ}(n)² / Σ_{n=1}^{200} h_{i,θ,φ}(n)² ) × 100%

    where h_{i,θ,φ}(n) is the n-th sample of the i-th subject's HRIR with azimuth θ and elevation φ, and ĥ_{i,θ,φ}(n) is the corresponding sample of the truncated HRIR.

    We calculated the proportion for all 92,500 HRIRs and obtained an average of 99.74%, with the lowest proportion, 95.49%, appearing at azimuth 40° and elevation -39.375° for subject_135. Thus, truncating the HRIR to a length of 128 should be sufficient for the subsequent processing.
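    The energy-proportion check can be sketched in a few lines of numpy (the decaying toy HRIR below is ours, and for simplicity it truncates from sample 0 rather than from the detected pulse onset):

    ```python
    import numpy as np

    def energy_proportion(original, truncated):
        """Energy of the truncated HRIR as a percentage of the original's energy."""
        return 100.0 * np.sum(truncated ** 2) / np.sum(original ** 2)

    # toy 200-sample HRIR with an exponentially decaying tail
    rng = np.random.default_rng(0)
    h = np.exp(-np.arange(200) / 20.0) * rng.standard_normal(200)
    p = energy_proportion(h, h[:128])
    print(f"{p:.2f}%")
    ```

    Because the tail decays quickly, almost all of the energy survives the 128-sample truncation, mirroring the 99.74% average reported above.
    
    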

    2.3 Selection of anthropometric measurements

    The CIPIC HRTF database [36] provides a set of 27 anthropometric measurements including 17 for the head and 10 for the pinna (Figure 3).

    Correlation analysis has been used by several authors to select appropriate anthropometric measurements [3,4,37,38]. However, the effect of anthropometric measurements on the HRIR is still an open problem. In our model, we focus on relationships between the HRIR segments and the anthropometric measurements, under the assumption that anthropometric measurements have different effects on each HRIR segment.

    Correlation between anthropometric parameters may lead to a flawed regression model. So, we applied correlation analysis to the anthropometric parameters to determine which parameters should be excluded. Due to the obvious scale difference between pinna size and head and torso size, we computed the correlation coefficients of the pinna features and of the head and torso features separately. It should be noted that x15 (height) and x16 (seated height) were excluded beforehand, because these measurements are not available for all subjects.

    Fig. 3. Subject’s anthropometric measurements [36].

    Fig. 4. Correlation coefficients of anthropometric parameters.

    Table I. Selected anthropometric parameters.

    Fig. 5. Left ear’s HRIRs of subject_003 in median plane.

    Suppose the matrix A (N×R) is composed of R column vectors, where each column contains one kind of anthropometric measurement for all subjects; N is the number of samples and R is the number of anthropometric features.

    Let f_{n,r} denote the r-th feature of the n-th sample. Then the r-th feature vector F_r can be written as:

    F_r = [f_{1,r}, f_{2,r}, …, f_{N,r}]′

    The correlation coefficient ρ_{x,y} between the x-th feature and the y-th feature can then be calculated as:

    ρ_{x,y} = Σ_{n=1}^{N} (f_{n,x} − F̄_x)(f_{n,y} − F̄_y) / √( Σ_{n=1}^{N} (f_{n,x} − F̄_x)² · Σ_{n=1}^{N} (f_{n,y} − F̄_y)² )

    where F̄_x denotes the mean of the x-th feature.

    We then sum up each feature's correlation coefficients:

    S_x = Σ_{y≠x} |ρ_{x,y}|

    where S_x is the x-th feature's correlation score. A larger S_x means the x-th anthropometric feature is more strongly correlated with the other features. For the pinna features, we set a threshold of 2.6 and removed the 3 most strongly correlated features: d5, d6, d8. For the head and torso features, we set a threshold of 5.2 and removed the 7 most strongly correlated features: x3, x6, x8, x9, x12, x16, x17. Figure 4 shows the correlation coefficients of the anthropometric parameters.
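    The correlation-score computation can be sketched as follows (the synthetic 37-subject feature matrix is ours; numpy's `corrcoef` computes the Pearson coefficients column-wise):

    ```python
    import numpy as np

    def correlation_scores(A):
        """Correlation score S_x for each feature column of A (N x R):
        the sum of absolute correlations with every other feature."""
        rho = np.corrcoef(A, rowvar=False)        # R x R correlation matrix
        return np.sum(np.abs(rho), axis=1) - 1.0  # subtract the self-correlation

    rng = np.random.default_rng(1)
    base = rng.standard_normal((37, 1))
    # feature 0 and feature 1 are near-duplicates; feature 2 is independent
    A = np.hstack([base,
                   base + 0.01 * rng.standard_normal((37, 1)),
                   rng.standard_normal((37, 1))])
    scores = correlation_scores(A)
    print(scores)
    ```

    The two near-duplicate features receive high scores and would be candidates for removal under a threshold, while the independent feature scores low.
    
    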

    Finally, we selected 15 anthropometric parameters as candidates for our model; see table I for details.

    2.4 HRIR division strategy

    The truncated HRIRs are categorized into 2500 groups according to the measurement directions (2 ears, 1250 directions per ear), and each HRIR is divided into 3 segments of different lengths. Marker S3 in figure 1 shows the result of this step. We now discuss the division strategy.

    The HRIR data in CIPIC's database were obtained by blocked-meatus measurement [36]. The records are limited to a length of 200 sample points (about 4 ms). Figure 5 shows the HRIR image corresponding to azimuth 0° with all elevations for subject_003 in CIPIC's database.

    There has been much research on the composition of the HRIR response in terms of pinna effects, head reflection and diffraction, torso reflection, etc. [30,34,39] It is generally accepted that there are four distinct areas, marked 1, 2, 3, and 4 in Figure 5, corresponding to the effects of different anthropometric features.

    The HRIR data in CIPIC's database were measured with the subject seated at the center of a 1 m radius hoop whose axis was aligned with the subject's interaural axis [35,36]. Thus, the direct sound wave from the loudspeakers reaches the ear canal first, followed by other response components due to pinna effects, torso reflection, and knee reflection [30].

    There is a faint initial pulse before area 1, which is attributable to characteristics of the probe microphone [30,36]. Area 2 (from 0.8 ms to 1.2 ms) reflects the direct sound wave reaching the pinna and the diffraction effects of the head and pinna. Area 3 (from about 1.2 ms to 2 ms) reflects the reflection due to the torso. The faint ridges after 2 ms are caused by knee reflections [36].

    In our model, we discarded the HRIR data before the direct wave's arrival at the pinna and reserved only 128 samples (see subsection 2.2), which cover areas 2, 3, and 4 marked in Figure 5.

    Based on the above discussion, anthropometric features may play different roles across the temporal sequence of the HRIR data. Thus, we divided the 128-length HRIR (about 3 ms) into 3 segments according to the dominant anthropometric effects, and then evaluated the relationship between each segment and the anthropometric features.
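    The division itself is a simple slicing operation. A hedged sketch (the paper states that one segment is 16 samples long but does not list all three lengths, so the 16/48/64 split below is illustrative only):

    ```python
    import numpy as np

    # Hypothetical segment boundaries within the 128-sample truncated HRIR;
    # only the 16-sample first segment is mentioned in the text.
    SEGMENT_BOUNDS = (16, 64)

    def split_hrir(hrir):
        """Divide a 128-sample truncated HRIR into 3 segments of different lengths."""
        a, b = SEGMENT_BOUNDS
        return hrir[:a], hrir[a:b], hrir[b:]

    h = np.arange(128, dtype=float)
    s1, s2, s3 = split_hrir(h)
    print(len(s1), len(s2), len(s3))
    ```

    Concatenating the three segments recovers the original 128-sample HRIR, which is what the later reconstruction step relies on.
    
    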

    2.5 MLR model of HRTF individualization

    At S4 in figure 1, we use multiple linear regression (MLR) to analyze the relationship between anthropometric measurements and HRIR segments. The multiple linear regression model is given by

    S = Xβ + E,

    in which S represents the current HRIR segment, matrix X represents the anthropometric measurements derived from CIPIC's database, matrix β represents the regression coefficients, and E is the estimation error.

    For each HRIR segment of a given direction, we choose relevant anthropometric measurements to build matrix X. For example, X is a 37×13 matrix for the 16-length HRIR segment: the row count corresponds to the sample size (37 HRTF subjects chosen from CIPIC's database; see subsection 2.1) and the column count corresponds to the feature count of the given HRIR segment (see table II). Note that the first column of X is all 1s to account for the intercept term.

    β can then be estimated by least squares:

    β̂ = (X′X)⁻¹X′S,

    in which X′ denotes the transpose of matrix X and (·)⁻¹ the matrix inverse. The MLR model estimate is then given by

    Ŝ = Xβ̂.
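    A minimal numpy sketch of the least-squares fit and reconstruction on synthetic data (the sizes mirror the 37×13 example above; the data itself is random, not CIPIC's):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    N, R, L = 37, 13, 16  # subjects, features (incl. intercept column), segment length

    X = np.hstack([np.ones((N, 1)), rng.standard_normal((N, R - 1))])  # first column all 1s
    beta_true = rng.standard_normal((R, L))
    S = X @ beta_true + 0.001 * rng.standard_normal((N, L))  # segments with small noise

    # beta_hat = (X'X)^{-1} X'S, computed via a linear solve for numerical stability
    beta_hat = np.linalg.solve(X.T @ X, X.T @ S)
    S_hat = X @ beta_hat  # estimated HRIR segments
    print(np.abs(beta_hat - beta_true).max())
    ```

    Solving the normal equations with `np.linalg.solve` avoids forming the explicit inverse, which is the standard numerical practice for this formula.
    
    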

    As discussed in subsection 2.4, we divided the HRIR response into 3 segments, each corresponding to the effects of a set of anthropometric features. Thus, for each segment, we selected the most likely relevant anthropometric features from the candidate list (see table I) according to the corresponding effects.

    Consequently, after the MLR process we obtain the individual coefficients, whose structure is illustrated by S6 in figure 1.

    2.6 Generate Individualized HRIR

    The dotted lines in figure 1 illustrate the process of generating individualized HRIRs. With the individual coefficients obtained from the MLR process, we can compute the subject's individual "HRIR Groups" (S7 in figure 1) in packet format. The segments are then reconstructed into individualized HRIRs according to the location information. The previously removed delay times are re-inserted into the left-ear and right-ear HRIRs respectively, and each HRIR is zero-padded to a length of 200 to match the original HRIRs. Eventually we obtain the individualized HRIRs (S8 in figure 1) from the particular subject's anthropometric measurements (S5 in figure 1).

    Table II. Anthropometric features for each HRIR segment.

    Fig. 6. Reconstructed results of subject 165 (azimuth = -80°; elevation = 0°).

    Fig. 7. Original and estimated HRIRs with all elevations.

    The main processes are as follows:

    a) calculate the subject's individual "HRIR Groups" for the given location.

    b) concatenate the "HRIR Groups" of the given location into a 128-length HRIR (S7 in figure 1).

    c) reconstruct the ITD information by adding delay times to the reconstructed left and right HRIRs according to the given location.

    d) zero-pad each reconstructed HRIR to a length of 200 to match CIPIC's original HRIRs.
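    Steps b) through d) can be sketched as a single assembly routine (the segment lengths and the 20-sample delay below are illustrative, not values from the paper):

    ```python
    import numpy as np

    def assemble_hrir(segments, delay_samples, total_len=200):
        """Steps b)-d): concatenate the segments into a 128-length HRIR,
        re-insert the removed onset delay as leading zeros, and zero-pad
        the result to total_len samples."""
        core = np.concatenate(segments)                       # b)
        out = np.zeros(total_len)
        out[delay_samples:delay_samples + len(core)] = core   # c) + d)
        return out

    # toy segments totalling 128 samples, with a hypothetical 20-sample delay
    segs = [np.ones(16), 0.5 * np.ones(48), 0.1 * np.ones(64)]
    h = assemble_hrir(segs, delay_samples=20)
    print(len(h))
    ```

    Writing the core into a zero array handles both the delay insertion and the zero-padding in one step: everything before the delay and after the 128 samples stays zero.
    
    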

    Figure 6 shows the left ear's HRIR amplitudes of Subject_165 and the results reconstructed from 15 anthropometric measurements. The top panel shows the intermediate estimated HRIRs, in which the delay time has not yet been added and the HRIR length is still 128. The bottom panel shows the final results. The results indicate that the estimated HRIRs are quite similar to the original ones.

    III. EXPERIMENTAL RESULTS AND DISCUSSION

    The CIPIC HRTF database contains 45 subjects' HRIR data, including two KEMAR subjects: Subject_021 is the KEMAR with large pinnae and Subject_165 is the KEMAR with small pinnae. We select Subject_003 as our reference in the objective simulation experiments and Subject_165 as our reference in the subjective sound localization test.

    3.1 Objective Simulation and Results

    Figure 7 shows the estimated HRIR results for Subject_003. The top, middle, and bottom panels compare the original and estimated HRIRs of Subject_003 at azimuth angles -65°, 0°, and 65°, respectively. The estimated HRIRs approximate the corresponding original ones well.

    We also evaluated objective simulation performance by computing the spectral distortion (SD) between the estimated HRIRs and the reference HRIRs. The SD is calculated as follows:

    SD(θ,φ) = √( (1/N) Σ_{n=1}^{N} ( 20 log₁₀ ( |H_{θ,φ}(f_n)| / |Ĥ_{θ,φ}(f_n)| ) )² )

    where H_{θ,φ}(f_n) is the n-th frequency bin of the HRTF computed from the measured HRIR with azimuth θ and elevation φ, and Ĥ_{θ,φ}(f_n) is the corresponding value for the estimated HRIR. A larger SD(θ,φ) means the estimated HRIRs perform worse.
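    The SD metric can be sketched directly from its definition (the toy magnitude response below is ours; a uniform 1 dB gain offset yields an SD of exactly 1 dB by construction):

    ```python
    import numpy as np

    def spectral_distortion(H_meas, H_est):
        """RMS log-spectral distortion in dB over the frequency bins f_n."""
        log_ratio = 20.0 * np.log10(np.abs(H_meas) / np.abs(H_est))
        return np.sqrt(np.mean(log_ratio ** 2))

    f = np.linspace(0.0, 1.0, 64)
    H = 1.0 + 0.5 * np.sin(2.0 * np.pi * 3.0 * f)  # toy magnitude response
    H_off = H * 10.0 ** (1.0 / 20.0)               # uniformly 1 dB louder
    print(spectral_distortion(H, H_off))
    ```

    A perfect estimate gives SD = 0, and the metric grows as the estimated magnitude response deviates from the measured one.
    
    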

    Table III compares several regression approaches [40,41]: Principal Component Analysis (PCA) [18,42], two-dimensional PCA (2DPCA) [18,42], Partial Least Squares Regression (PLSR) [43,44], and Tensor-Singular Value Decomposition (T-SVD) [45]. The results in table III are the left ear's average spectral distortion over all angles and the whole frequency spectrum, in dB. As table III shows, our SD values are 0.88 dB lower on average than those of the other methods.

    Figure 8 compares Subject_003's left-ear estimated and original HRTFs. As shown, the first spectral peak around 4 kHz and the two lowest spectral notches, which are considered key localization cues [31], can be recognized clearly.

    3.2 Subjective localization test and results

    We evaluated the performance of the individualization method in a subjective test. Eight subjects with normal hearing sensitivity and balanced pure-tone hearing thresholds participated in the subjective localization tests. Table IV shows the subjects' basic information.

    To collect the listeners' anthropometric parameters, a digital image acquisition system was designed. The system uses a Nikon D800 camera with an AF-S NIKKOR 24-70mm f/2.8G ED lens as the image capture device. Four pictures are taken of each listener (front, left, and right views of the upper body, and a top view of the head), and the pictures are then analyzed by the system to extract the necessary anthropometric parameters. To improve measurement precision, each listener is measured three times, and the mean value is used as the measurement result.

    Table III. Spectral distortion scores of different methods.

    Fig. 8. Original and estimated HRTFs with all elevations.

    Fig. 9. Extract pinna parameters from pictures.

    Figure 10 shows the GUI of the localization test. We adopted a 250 ms burst of white Gaussian noise as the test sound and selected 10 azimuths approximately uniformly distributed in the horizontal plane. Each subject took two rounds of the subjective test. We used Subject_165's (KEMAR with small pinnae) measured HRIRs as the reference HRIRs and filtered the white noise with the measured HRIRs and the individualized HRIRs respectively to generate a test sequence of 20 listening samples. These test samples were presented to the listeners in random order. At each run, a subject clicked the "PLAY" button to play a stimulus and submitted the perceived direction by clicking the corresponding button on the left panel. There was no restriction on how many times the stimulus could be replayed before making a decision, and subjects could play the original mono white noise at any time by clicking the "Original" button.

    Fig. 11. Result of subjective localization test.

    The results are shown in Figure 11. The line with the positive slope corresponds to perfect responses, and the two negatively sloped lines correspond to front-back confusions (FBCs). The size of each circle is proportional to the number of times subjects indicated that direction, and the counts are printed inside the circles. The average correct rate with Subject_165's HRIRs is 35.63%, while the correct rate with the individualized HRIRs is 58.13%. The count of FBCs is 50 with Subject_165's HRIRs but only 27 with the individualized HRIRs. The detailed results for each subject are listed in table V. All subjects' correct rates improved compared with the KEMAR HRIRs, with an average increase of 22.5 percentage points.

    Overall, the results of the test indicate a statistically significant improvement in localization.

    Table IV. Basic information of the subjects.

    Table V. Detailed results of subjective localization test.

    IV. CONCLUSIONS

    In this paper, we proposed a simple and effective method for modeling the relationships between anthropometric measurements and HRIRs. Because the relationship between anthropometric features and different parts of the HRIR is fairly complicated, we divided the HRIRs into small segments and carried out regression analysis between the anthropometric measurements and each HRIR segment to establish the relationship model. The results of the objective simulation and the subjective test indicate that the model can generate individualized HRIRs from a set of anthropometric measurements, and that with the individualized HRIRs we obtain more accurate sound localization than with non-individualized HRIRs.

    ACKNOWLEDGEMENTS

    This work is supported by the National Key R&D Program of China (No. 2017YFB1002803), the National Natural Science Foundation of China (No. 61671335, No. U1736206, No. 61662010), and the Hubei Province Technological Innovation Major Project (No. 2016AAA015). The authors thank the CIPIC Interface Laboratory of the University of California, Davis for providing the CIPIC HRTF Database.

    [1] A. Kan, C. Jin, and S. A. Van, “A psychophysical evaluation of near-field head-related transfer functions synthesized using a distance variation function,” Journal of the Acoustical Society of America, Vol. 125, no. 4, 2009, pp. 2233-2242.

    [2] E. M. Wenzel, et al., “Localization using nonindividualized head-related transfer functions,”Journal of the Acoustical Society of America, Vol.94, no. 1, 1993, pp. 111-123.

    [3] L. Wang, F. Yin, Z. Chen, “A hybrid compression method for head-related transfer functions,”Applied Acoustics, Vol. 70, no. 9, 2009, pp. 1212-1218.

    [4] B. G. Shinn-Cunningham, S. Santarelli, N. Kopco,“Tori of confusion: Binaural localization cues for sources within reach of a listener,” Journal of the Acoustical Society of America, Vol. 107, no. 3,2000, pp. 1627–1636.

    [5] H. Moller, M. F. Sorensen, et al., “Head-related transfer functions of human subjects,” AES:Journal of the Audio Engineering Society, Vol. 43,1995, pp. 300-321.

    [6] J. C. Middlebrooks, “Virtual localization improved by scaling nonindividualized external-ear transfer functions in frequency,” Journal of the Acoustical Society of America, Vol. 106,no. 1, 1999, pp. 1493-5100.

    [7] X. Y. Zeng, S. G. Wang, L. P. Gao, “A hybrid al-gorithm for selecting head-related transfer function based on similarity of anthropometric structures,” Journal of Sound & Vibration, Vol.329, no. 19, 2010, pp. 4093-4106.

    [8] D. N. Zotkin, J. Hwang, et al., “HRTF personalization using anthropometric measurements,”Proc. Applications of Signal Processing To Audio and Acoustics, 2003, pp. 157-160.

    [9] D. N. Zotkin, R. Duraiswami, et al., “Virtual audio system customization using visual matching of ear parameters,” Proc. International Conference on Pattern Recognition, 2002, pp. 1003-1006.

    [10] J. C. Middlebrooks, “Individual differences in external-ear transfer functions reduced by scaling in frequency,” Journal of the Acoustical Society of America, Vol. 106, no. 3, 1999, pp. 1480-1492.

    [11] B. F. G. Katz, “Measurement and calculation of individual head-related transfer functions using a boundary element model including the measurement and effect of skin and hair impedance,” Dissertation Abstracts International, Vol.59, no. 06, 1998, pp. 2798.

    [12] V. R. Algazi, R. O. Duda, et al., “Approximating the head-related transfer function using simple geometric models of the head and torso”,Journal of the Acoustical Society of America, Vol.112, no. 1, 2002, pp. 2053-2064.

    [13] B. F. Katz, “Boundary element method calculation of individual head-related transfer function. I. Rigid model calculation,” Journal of the Acoustical Society of America, Vol. 110, no. 1,2001, pp. 2440-2448.

    [14] W. Kreuzer, P. Majdak, Z. Chen, “Fast multipole boundary element method to calculate head-related transfer functions for a wide frequency range,” Journal of the Acoustical Society of America, Vol. 126, no. 3, 2009, pp. 1280-1290.

    [15] K. H. Shin, Y. Park, “Enhanced Vertical Perception through Head-Related Impulse Response Customization Based on Pinna Response Tuning in the Median Plane,” Ieice Transactions on Fundamentals of Electronics Communications &Computer Sciences, Vol. 91, no. 1, 2008, pp. 345-356.

    [16] K. J. Fink, L. Ray, “Individualization of head related transfer functions using principal component analysis,” Applied Acoustics, Vol. 87, no. 87,2015, pp. 162-173.

    [17] P. Runkle, A. Yendiki, G. H. Wakefield, “Active Sensory Tuning for Immersive Spatialized Audio,” Proc. ICAD, 2000.

    [18] H. Hu, L. Zhou, et al., “Head Related Transfer Function Personalization Based on Multiple Regression Analysis,” Proc. International Conference on Computational Intelligence and Security,2007, pp. 1829-1832.

    [19] T. Nishino, N. Inoue, et al., “Estimation of HRTFs on the horizontal plane using physical features,”Applied Acoustics, Vol. 68, no. 8, 2007, pp. 897-908.

    [20] Hugeng, et al., “Improved Method for Individualization of Head-Related Transfer Functions on Horizontal Plane Using Reduced Number of Anthropometric Measurements,” Computer Science, Vol. 2, no. 2, 2010, pp. 31-41.

    [21] Hugeng, et al., “Effective Preprocessing in Modeling Head-Related Impulse Responses Based on Principal Components Analysis”, Signal Processing An International Journal, Vol. 4, no. 4,2010, pp. 201-212.

    [22] V. R. Algazi, R. O. Duda, et al., “Approximating the head-related transfer function using simple geometric models of the head and torso,” Journal of the Acoustical Society of America, Vol. 112, no. 1, 2002, pp. 2053-2064.

    [23] C. P. Brown, R. O. Duda, “A structural model for binaural sound synthesis,” Speech & Audio Processing IEEE Transactions on, Vol. 6, no. 5, 1998,pp. 476-488.

    [24] V. C. Raykar, et al., “EXTRACTING SIGNIFICANT FEATURES FROM THE HRTF,” Georgia Institute of Technology, 2003, pp. 115-118.

    [25] R. Bomhardt, J. Fels, “Individualization of head-related transfer functions by the principle component analysis based on anthropometric measurements,” Journal of the Acoustical Society of America, Vol. 140, no. 4, 2016, pp. 3277-3277.

    [26] J. Blauert, “Spatial hearing : The psychophysics of human sound localization,” Physiology, 1983.

    [27] G. F. Kuhn, “Model for interaural time differences in the azimuthal plane,” Journal of the Acoustical Society of America, Vol.62, no. 1, 1977, pp.157-167.

    [28] J. W. Strutt, “On our perception of sound direction,” Philos. Mag., Vol. 13, 1907, pp. 214–232.

    [29] M. R. Bai, K. Y. Ou, “Head-related transfer function (HRTF) synthesis based on a three-dimensional array model and singular value decomposition,” Journal of Sound & Vibration, Vol.281,2005, pp. 1093–1115.

    [30] V. C. Raykar, R. Duraiswami, B. Yegnanarayana,“Extracting the frequencies of the pinna spectral notches in measured head related impulse responses,” Journal of the Acoustical Society of America, Vol. 118, no. 1, 2005, pp. 364-374.

    [31] K. Iida, Y. Ishii, S. Nishioka, “Personalization of head-related transfer functions in the median plane based on the anthropometry of the listener’s pinnae,” Journal of the Acoustical Society of America, Vol. 136, no. 1, 2014, pp. 317-333.

    [32] P. Bilinski, J. Ahrens, et al., “HRTF magnitude synthesis via sparse representation of anthropometric features,” Proc. ICASSP, 2014, pp.4501-4505.

    [33] F. Grijalva, L. Martini, et al., “A manifold learning approach for personalizing HRTFs from anthropometric features,” IEEE/ACM Transactions on Audio Speech & Language Processing, Vol. 24,no. 3, 2016, pp. 559-570.

    [34] S. Hwang, Y. Park, “HRIR Customization in the Median Plane via Principal Components Analysis,” Proc. Audio Engineering Society Conference,International Conference: New Directions in High Resolution Audio, 2007, pp. 638-648.

    [35] “Cipic hrtf database files, realase 1.2,” 2004,http://interface.cipic.ucdavis.edu/.

    [36] V. R. Algazi, R. O. Duda, et al., “The CIPIC HRTF database,” Proc. Applications of Signal Processing to Audio and Acoustics, 2001 IEEE Workshop on the. IEEE, 2001, pp. 99-102.

    [37] Hugeng, et al., “The Effectiveness of Chosen Partial Anthropometric Measurements in Individualizing Head-Related Transfer Functions on Median Plane,” Itb Journal of Information &Communication Technology, Vol. 5, no. 1, 2011,pp. 35-56.

    [38] C. S. Reddy, R. M. Hegde, “A Joint Sparsity and Linear Regression Based Method for Customization of Median Plane HRIR,” Proc. Asilomar Conference on Signals, 2015, pp. 785-789.

    [39] V. R. Algazi, R. O. Duda, et al., “Structural composition and decomposition of HRTFs,” Proc.Applications of Signal Processing to Audio and Acoustics, 2001 IEEE Workshop on the, 2001, pp.103-106.

    [40] M. Rothbucher, M. Durkovic, et al., “HRTF customization using multiway array analysis,” Proc.Signal Processing Conference, 2010, pp. 229-233.

    [41] Alexander, Rothbucher, et al., “HRTF Customization by Regression,” Technische Universität München, 2014.

    [42] T. Nishino, N. Inoue, et al., “Estimation of HRTFs on the horizontal plane using physical features,”Applied Acoustics, Vol. 68, no. 8, 2007, pp. 897-908.

    [43] H. M. Hu, et al., “Head-Related Transfer Function Personalization Based on Partial Least Square Regression,” Journal of Electronics & Information Technology, Vol. 30, no. 1, 2011, pp. 154-158.

    [44] Q. Huang, L. Li, “Modeling individual HRTF tensor using high-order partial least squares,” EURASIP Journal on Advances in Signal Processing,Vol. 1, 2014, pp. 1-14.

    [45] G. Grindlay, M. A. O. Vasilescu, “A Multilinear(Tensor) Framework for HRTF Analysis and Synthesis,” Proc. IEEE International Conference on Acoustics, Speech and Signal Processing, 2007,pp. I-161-I-164.

精品免费久久久久久久清纯| 99国产精品99久久久久| 免费电影在线观看免费观看| 国产精品 欧美亚洲| 精品久久久久久,| 无人区码免费观看不卡| 欧美日韩黄片免| 色精品久久人妻99蜜桃| 国产1区2区3区精品| 午夜福利免费观看在线| 一本综合久久免费| 免费高清在线观看日韩| 天天一区二区日本电影三级| 日韩大码丰满熟妇| 欧美一区二区精品小视频在线| 国产亚洲欧美98| 男女下面进入的视频免费午夜 | 男人舔女人下体高潮全视频| 日韩欧美国产在线观看| 国产一区二区激情短视频| 国产熟女xx| 啪啪无遮挡十八禁网站| 亚洲中文字幕一区二区三区有码在线看 | 亚洲精品美女久久av网站| 精品国产国语对白av| av视频在线观看入口| 国产视频一区二区在线看| 国产真实乱freesex| 最好的美女福利视频网| 国产久久久一区二区三区| 波多野结衣高清作品| 亚洲av片天天在线观看| 欧美zozozo另类| 熟女电影av网| 非洲黑人性xxxx精品又粗又长| www日本黄色视频网| 最新美女视频免费是黄的| 久久久久久国产a免费观看| 亚洲专区字幕在线| 日韩大码丰满熟妇| 嫩草影视91久久| 午夜福利成人在线免费观看| 日韩 欧美 亚洲 中文字幕| 欧美日韩福利视频一区二区| 国产单亲对白刺激| 99久久99久久久精品蜜桃| 老汉色av国产亚洲站长工具| 日韩欧美国产在线观看| 久久亚洲精品不卡| 亚洲成国产人片在线观看| 午夜免费成人在线视频| 精品欧美国产一区二区三| 日韩中文字幕欧美一区二区| 久久久久国内视频| 婷婷精品国产亚洲av| 免费人成视频x8x8入口观看| 国产精品国产高清国产av| а√天堂www在线а√下载| 日韩国内少妇激情av| 国产伦人伦偷精品视频| 中文字幕精品免费在线观看视频| 美女高潮喷水抽搐中文字幕| 婷婷六月久久综合丁香| 真人一进一出gif抽搐免费| 在线观看www视频免费| 国产精品久久视频播放| 麻豆成人av在线观看| 一级片免费观看大全| 757午夜福利合集在线观看| 啦啦啦免费观看视频1| 亚洲欧美激情综合另类| 国产爱豆传媒在线观看 | 久久亚洲真实| 国产黄片美女视频| 免费人成视频x8x8入口观看| 曰老女人黄片| 亚洲黑人精品在线| 婷婷精品国产亚洲av| 一级片免费观看大全| 国产精品99久久99久久久不卡| 日本 欧美在线| 一级a爱视频在线免费观看| 18美女黄网站色大片免费观看| av免费在线观看网站| 1024视频免费在线观看| 在线观看免费视频日本深夜| 日韩高清综合在线| 精品国产超薄肉色丝袜足j| 国产色视频综合| 不卡一级毛片| av福利片在线| 亚洲 国产 在线| 三级毛片av免费| 丁香欧美五月| 免费在线观看日本一区| 最近最新中文字幕大全免费视频| 亚洲国产欧美日韩在线播放| 婷婷丁香在线五月| 变态另类丝袜制服| 曰老女人黄片| 黄色片一级片一级黄色片| 久久精品aⅴ一区二区三区四区| 黄色毛片三级朝国网站| 日韩av在线大香蕉| 男人舔女人的私密视频| 国产久久久一区二区三区| 90打野战视频偷拍视频| 一级黄色大片毛片| 97超级碰碰碰精品色视频在线观看| 亚洲片人在线观看| 热re99久久国产66热| 在线观看午夜福利视频| 2021天堂中文幕一二区在线观 | 人人妻,人人澡人人爽秒播| 伦理电影免费视频| 极品教师在线免费播放| 动漫黄色视频在线观看| 久久草成人影院| 一二三四社区在线视频社区8| 色婷婷久久久亚洲欧美| 日韩有码中文字幕| 美女大奶头视频| 怎么达到女性高潮| 大型av网站在线播放| 天天躁夜夜躁狠狠躁躁| 国产av一区在线观看免费| 淫秽高清视频在线观看| 少妇 在线观看| 夜夜夜夜夜久久久久| 亚洲男人天堂网一区| 欧美精品啪啪一区二区三区| 欧美日韩福利视频一区二区| 黄色女人牲交| 91九色精品人成在线观看|