Face-to-face speech data collection has been next to impossible globally due to COVID-19 restrictions. To address this problem, simultaneous recordings of three repetitions of the cardinal vowels were made using a Zoom H6 Handy Recorder with an external microphone (henceforth H6) and compared with two alternatives accessible to potential participants at home: the Zoom meeting application (henceforth Zoom) and two lossless mobile phone applications (Awesome Voice Recorder and Recorder; henceforth Phone). F0 was tracked accurately by all devices; for formant analysis (F1, F2, F3), however, Phone performed better than Zoom, i.e. more similarly to H6. Zoom recordings also exhibited unexpected drops in intensity. The results suggest that lossless-format phone recordings are a viable option for at least some phonetic studies.
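
For context, F0 tracking of the kind compared here can be sketched with a naive autocorrelation estimator over one analysis frame (illustrative only; this is not the analysis pipeline used in the study, and all parameter values below are assumptions):

```python
import math

def estimate_f0(samples, fs, fmin=80.0, fmax=400.0):
    """Naive autocorrelation pitch estimator over one analysis frame."""
    n = len(samples)
    best_lag, best_corr = None, float("-inf")
    # Search lags corresponding to the plausible pitch range [fmin, fmax].
    for lag in range(int(fs / fmax), min(int(fs / fmin), n - 1) + 1):
        corr = sum(samples[i] * samples[i - lag] for i in range(lag, n))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return fs / best_lag

fs = 8000
tone = [math.sin(2 * math.pi * 200.0 * t / fs) for t in range(1600)]
f0 = estimate_f0(tone, fs)  # 200.0 for this synthetic tone
```

Running the same estimator over frames from each device's recording of the same utterance would reproduce, in miniature, the kind of F0 comparison the study reports.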

### Related Content

The majority of multichannel speech enhancement algorithms are two-step procedures that first apply a linear spatial filter, a so-called beamformer, and combine it with a single-channel approach for postprocessing. However, the serial concatenation of a linear spatial filter and a postfilter is not generally optimal in the minimum mean square error (MMSE) sense for noise distributions other than a Gaussian distribution. Rather, the MMSE optimal filter is a joint spatial and spectral nonlinear function. While estimating the parameters of such a filter with traditional methods is challenging, modern neural networks may provide an efficient way to learn the nonlinear function directly from data. To see if further research in this direction is worthwhile, in this work we examine the potential performance benefit of replacing the common two-step procedure with a joint spatial and spectral nonlinear filter. We analyze three different forms of non-Gaussianity: First, we evaluate on super-Gaussian noise with high kurtosis. Second, we evaluate on inhomogeneous noise fields created by five interfering sources recorded with two microphones. Third, we evaluate on real-world recordings from the CHiME3 database. In all scenarios, considerable improvements may be obtained. Most prominently, our analyses show that a nonlinear spatial filter uses the available spatial information more effectively than a linear spatial filter, as it is capable of suppressing more than $D-1$ directional interfering sources with a $D$-dimensional microphone array without spatial adaptation.
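
To make the $D-1$ limit concrete, here is a toy narrowband sketch (the directions, half-wavelength spacing, and two-microphone setup are illustrative assumptions, not taken from the paper): a linear filter with $D=2$ microphones can place an exact null on one interferer, but a second interferer from another direction still leaks through.

```python
import cmath
import math

def steering(theta):
    """Narrowband steering vector of a 2-mic array at half-wavelength spacing."""
    return [1 + 0j, cmath.exp(-1j * math.pi * math.sin(theta))]

def null_beamformer(a_target, a_interf):
    """Linear weights: unit gain on the target, exact null on ONE interferer."""
    # Solve v . a_target = 1 and v . a_interf = 0 by Cramer's rule (2x2 system).
    det = a_target[0] * a_interf[1] - a_target[1] * a_interf[0]
    return [a_interf[1] / det, -a_interf[0] / det]

def gain(v, a):
    return abs(v[0] * a[0] + v[1] * a[1])

a_t = steering(0.0)            # target at broadside
a_i1 = steering(math.pi / 6)   # first interferer: can be nulled
a_i2 = steering(-math.pi / 4)  # second interferer: cannot also be nulled
v = null_beamformer(a_t, a_i1)

g_target, g_i1, g_i2 = gain(v, a_t), gain(v, a_i1), gain(v, a_i2)
# g_target = 1 (distortionless), g_i1 = 0 (nulled), g_i2 > 0 (leakage):
# with D = 2 microphones a linear filter suppresses only D - 1 = 1 source.
```

A nonlinear spatial filter, per the abstract's claim, is not bound by this counting argument.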

The $\pi$-calculus is used as a model for programming languages. Its contexts exhibit arbitrary concurrency, making them very discriminating. This may prevent validating desirable behavioural equivalences in cases when more disciplined contexts are expected. In this paper we focus on two such common disciplines: sequentiality, meaning that at any time there is a single thread of computation, and well-bracketing, meaning that calls to external services obey a stack-like discipline. We formalise the disciplines by means of type systems. The main focus of the paper is on studying the consequences of the disciplines on behavioural equivalence. We define and study labelled bisimilarities for sequentiality and well-bracketing. These relations are coarser than ordinary bisimilarity. We prove that they are sound for the respective (contextual) barbed equivalence, and also complete under a certain technical condition. We show the usefulness of our techniques on a number of examples, which mainly have to do with the representation of functions and store.

We present a framework facilitating the implementation and comparison of text compression algorithms. We evaluate its features by a case study on two novel compression algorithms based on the Lempel-Ziv compression schemes that perform well on highly repetitive texts.
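
As an illustration of the Lempel-Ziv family that such case studies build on (this greedy tokenizer is a generic sketch, not one of the paper's algorithms; the window size is an arbitrary assumption), back-references shine exactly on the highly repetitive texts the abstract mentions:

```python
def lz77_compress(text, window=64):
    """Greedy LZ77: emit (offset, length) back-references or literal chars."""
    i, tokens = 0, []
    while i < len(text):
        best_len, best_off = 0, 0
        for off in range(1, min(i, window) + 1):
            length = 0
            # Allow overlapping matches: the reference may run into itself.
            while (i + length < len(text)
                   and text[i + length - off] == text[i + length]):
                length += 1
            if length > best_len:
                best_len, best_off = length, off
        if best_len >= 2:
            tokens.append((best_off, best_len))
            i += best_len
        else:
            tokens.append(text[i])
            i += 1
    return tokens

def lz77_decompress(tokens):
    out = []
    for tok in tokens:
        if isinstance(tok, tuple):
            off, length = tok
            for _ in range(length):
                out.append(out[-off])  # char-by-char copy handles overlap
        else:
            out.append(tok)
    return "".join(out)

s = "abcabcabcabcx"
tokens = lz77_compress(s)  # ['a', 'b', 'c', (3, 9), 'x']
```

Thirteen characters compress to five tokens; the `(3, 9)` reference copies nine characters from three positions back, overlapping itself as it goes.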

Automatic Speech Recognition (ASR) systems generalize poorly on accented speech. The phonetic and linguistic variability of accents presents hard challenges for ASR systems today, in both data collection and modeling strategies. The resulting bias in ASR performance across accents comes at a cost to both users and providers of ASR. We present a survey of current promising approaches to accented speech recognition and highlight the key challenges in the space. Approaches mostly focus on single-model generalization and accent feature engineering. Among the challenges, the lack of a standard benchmark makes research and comparison especially difficult.

In March 2020, the UK and Scottish Governments imposed a lockdown restricting everyday life activities to only the most essential. These governmental measures, together with individual choices to refrain from travelling during the COVID-19 pandemic, have had a profound effect on transport-related activity. In the current investigation, an online questionnaire was distributed to 994 Scottish residents in order to identify travel habits, attitudes and preferences during the different phases of the COVID-19 pandemic outbreak and anticipated travel habits after the pandemic. Quota constraints were enforced for age, gender and household income to ensure the sample was representative of the Scottish population as a whole. Perceptions of risk, trust in information sources and compliance with COVID-19 regulations were determined, together with changes in levels of life satisfaction and modal choice following the onset of COVID-19. In addition, survey responses were used to identify anticipated travel mode use in the future. Consideration was also given to the effects of COVID-19 on transport-related lifestyle issues such as working from home, online shopping and expectations of moving residence in the future. As part of the analysis, statistical models were developed to provide insight into both the relationships between levels of non-compliance with COVID-19 regulations and demographic variables, and the respondent attributes which might affect future public transport usage. In general, the study confirmed significant reductions among respondents during the COVID-19 pandemic in travel activity associated with walking, driving a car, and using a bus or train. The respondents also indicated that they anticipated continuing to make less use of buses and trains once the pandemic ends.

Finding communities in networks is a problem that remains difficult, in spite of the amount of attention it has recently received. The Stochastic Block-Model (SBM) is a generative model for graphs with "communities"; because of its simplicity, theoretical understanding of it has advanced quickly in recent years. In particular, there have been various results showing that simple versions of spectral clustering using the Normalized Laplacian of the graph can recover the communities almost perfectly with high probability. Here we show that essentially the same algorithm used for the SBM and for its extension, the Degree-Corrected SBM, works on a wider class of Block-Models, which we call Preference Frame Models, with essentially the same guarantees. Moreover, the parametrization we introduce clearly exhibits the free parameters needed to specify this class of models, and results in bounds that expose with more clarity the parameters that control the recovery error in this model class.
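
As a minimal illustration of the spectral recipe (the toy graph, the power-iteration solver, and all parameters are assumptions for this sketch, not the authors' algorithm): normalize the adjacency matrix, find the second eigenvector, and read off communities from its signs. Two cliques joined by a single edge stand in for an idealized two-block SBM draw.

```python
import math
import random

def normalized_adjacency(adj):
    """D^{-1/2} A D^{-1/2} together with the degree sequence."""
    deg = [sum(row) for row in adj]
    n = len(adj)
    M = [[adj[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
         for i in range(n)]
    return M, deg

def second_eigenvector(M, deg, iters=500):
    """Power iteration after deflating the known leading eigenpair (1, d^{1/2})."""
    n = len(M)
    v1 = [math.sqrt(d) for d in deg]
    nrm = math.sqrt(sum(t * t for t in v1))
    v1 = [t / nrm for t in v1]
    rng = random.Random(0)
    x = [rng.uniform(-1, 1) for _ in range(n)]
    for _ in range(iters):
        y = [sum(M[i][j] * x[j] for j in range(n)) for i in range(n)]
        proj = sum(v1[i] * y[i] for i in range(n))
        y = [y[i] - proj * v1[i] for i in range(n)]  # deflate leading direction
        nrm = math.sqrt(sum(t * t for t in y)) or 1.0
        x = [t / nrm for t in y]
    return x

# Two 4-cliques joined by one bridge edge.
n = 8
adj = [[0] * n for _ in range(n)]
for block in (range(0, 4), range(4, 8)):
    for i in block:
        for j in block:
            if i != j:
                adj[i][j] = 1
adj[0][4] = adj[4][0] = 1

M, deg = normalized_adjacency(adj)
labels = [int(x > 0) for x in second_eigenvector(M, deg)]
```

The sign pattern of the second eigenvector splits the nodes into the two planted blocks (up to a global sign flip).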

Modern neural network training relies heavily on data augmentation for improved generalization. After the initial success of label-preserving augmentations, there has been a recent surge of interest in label-perturbing approaches, which combine features and labels across training samples to smooth the learned decision surface. In this paper, we propose a new augmentation method that leverages the first and second moments extracted and re-injected by feature normalization. We replace the moments of the learned features of one training image by those of another, and also interpolate the target labels. As our approach is fast, operates entirely in feature space, and mixes different signals than prior methods, one can effectively combine it with existing augmentation methods. We demonstrate its efficacy across benchmark data sets in computer vision, speech, and natural language processing, where it consistently improves the generalization performance of highly competitive baseline networks.
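
A minimal sketch of the moment-swapping idea on plain feature vectors (the function name, the mixing weight `lam`, and per-vector moments are illustrative assumptions; the method described above operates on learned features inside a network):

```python
def moment_exchange(feat_a, feat_b, label_a, label_b, lam=0.7):
    """Replace feat_a's first/second moments with feat_b's; interpolate labels."""
    mean = lambda v: sum(v) / len(v)
    ma, mb = mean(feat_a), mean(feat_b)
    sa = (sum((x - ma) ** 2 for x in feat_a) / len(feat_a)) ** 0.5
    sb = (sum((x - mb) ** 2 for x in feat_b) / len(feat_b)) ** 0.5
    # Normalize A's features, then re-inject B's mean and standard deviation.
    mixed_feat = [(x - ma) / (sa or 1.0) * sb + mb for x in feat_a]
    mixed_label = [lam * la + (1 - lam) * lb for la, lb in zip(label_a, label_b)]
    return mixed_feat, mixed_label

feat, label = moment_exchange([1.0, 2.0, 3.0, 4.0], [10.0, 20.0, 30.0, 40.0],
                              [1.0, 0.0], [0.0, 1.0])
# feat now carries the second sample's mean and std: [10.0, 20.0, 30.0, 40.0]
```

The shape (relative structure) of the first sample survives while its moments, and part of its label, come from the second.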

Person identification in the wild is very challenging due to great variation in poses, face quality, clothes, makeup and so on. Traditional research, such as face recognition, person re-identification, and speaker recognition, often focuses on a single modality of information, which is inadequate to handle all the situations in practice. Multi-modal person identification is a more promising way in which we can jointly utilize face, head, body, audio features, and so on. In this paper, we introduce iQIYI-VID, the largest video dataset for multi-modal person identification. It is composed of 600K video clips of 5,000 celebrities. These video clips are extracted from 400K hours of online videos of various types, ranging from movies, variety shows, and TV series to news broadcasting. All video clips pass through a careful human annotation process, and the error rate of labels is lower than 0.2%. We evaluated state-of-the-art models of face recognition, person re-identification, and speaker recognition on the iQIYI-VID dataset. Experimental results show that these models are still far from perfect for the task of person identification in the wild. We further demonstrate that a simple fusion of multi-modal features can improve person identification considerably. We have released the dataset online to promote multi-modal person identification research.
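
One minimal way to sketch such a "simple fusion" (the 2-D embeddings, the concatenation scheme, and the cosine scoring are hypothetical, not the paper's model): concatenate per-modality L2-normalized embeddings and compare clips by cosine similarity.

```python
import math

def l2_normalize(v):
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def fuse(face_emb, audio_emb):
    """Simple fusion: concatenate per-modality L2-normalized embeddings."""
    return l2_normalize(face_emb) + l2_normalize(audio_emb)

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical tiny embeddings: two clips of person A, one clip of person B.
clip_a1 = fuse([1.0, 0.0], [0.9, 0.1])
clip_a2 = fuse([0.9, 0.1], [1.0, 0.0])
clip_b = fuse([0.0, 1.0], [0.1, 0.9])
same_person = cosine(clip_a1, clip_a2)
diff_person = cosine(clip_a1, clip_b)
```

Per-modality normalization keeps one modality from dominating the fused score; same-person clips score higher than cross-person pairs.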

Voice conversion (VC) aims at converting speaker characteristics without altering content. Due to training data limitations and modeling imperfections, it is difficult to achieve believable speaker mimicry without introducing processing artifacts; performance assessment of VC, therefore, usually involves both speaker similarity and quality evaluation by a human panel. As a time-consuming, expensive, and non-reproducible process, it hinders rapid prototyping of new VC technology. We address artifact assessment using an alternative, objective approach leveraging prior work on spoofing countermeasures (CMs) for automatic speaker verification. Therein, CMs are used for rejecting 'fake' inputs such as replayed, synthetic or converted speech, but their potential for automatic speech artifact assessment remains unknown. This study serves to fill that gap. As a supplement to subjective results for the 2018 Voice Conversion Challenge (VCC'18) data, we configure a standard constant-Q cepstral coefficient CM to quantify the extent of processing artifacts. The equal error rate (EER) of the CM, a confusability index of VC samples with real human speech, serves as our artifact measure. Two clusters of VCC'18 entries are identified: low-quality ones with detectable artifacts (low EERs), and higher-quality ones with fewer artifacts. None of the VCC'18 systems, however, is perfect: all EERs are below 30% (the 'ideal' value would be 50%). Our preliminary findings suggest the potential of CMs outside of their original application, as a supplemental optimization and benchmarking tool to enhance VC technology.
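
The EER statistic used as the artifact measure can be sketched as follows (the scores below are made up for illustration; a real CM would score actual genuine and converted utterances):

```python
def equal_error_rate(genuine, impostor):
    """EER: operating point where false-acceptance and false-rejection rates meet."""
    best = (1.0, 0.0)
    for t in sorted(set(genuine + impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)  # false acceptances
        frr = sum(s < t for s in genuine) / len(genuine)     # false rejections
        if abs(far - frr) < abs(best[0] - best[1]):
            best = (far, frr)
    return (best[0] + best[1]) / 2

# Toy detection scores: higher means "more likely genuine human speech".
genuine = [0.9, 0.8, 0.7, 0.4]
converted = [0.5, 0.3, 0.2, 0.1]
eer = equal_error_rate(genuine, converted)  # 0.25 here
```

An EER near 0 means the CM easily detects conversion artifacts; an EER near 0.5 means converted speech is indistinguishable from real speech, the 'ideal' from the VC system's perspective.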

The limited capture range of optimization-based 2D/3D image registration methods, and their requirement for high-quality initialization, can significantly degrade the performance of 3D image reconstruction and motion compensation pipelines. Challenging clinical imaging scenarios that contain significant subject motion, such as fetal in-utero imaging, complicate the 3D image and volume reconstruction process. In this paper we present a learning-based image registration method capable of predicting 3D rigid transformations of arbitrarily oriented 2D image slices with respect to a learned canonical atlas co-ordinate system. Only image slice intensity information is used to perform registration and canonical alignment; no spatial transform initialization is required. To find image transformations, we utilize a Convolutional Neural Network (CNN) architecture to learn the regression function capable of mapping 2D image slices to a 3D canonical atlas space. We extensively evaluate the effectiveness of our approach quantitatively on simulated Magnetic Resonance Imaging (MRI) fetal brain imagery with synthetic motion, and further demonstrate qualitative results on real fetal MRI data, where our method is integrated into a full reconstruction and motion compensation pipeline. Our learning-based registration achieves an average spatial prediction error of 7 mm on simulated data and produces qualitatively improved reconstructions for heavily moving fetuses with gestational ages of approximately 20 weeks. Our model provides a general and computationally efficient solution to the 2D/3D registration initialization problem and is suitable for real-time scenarios.

Kristina Tesch, Timo Gerkmann
0+ reads · Apr 22
Daniel Hirschkoff, Enguerrand Prebet, Davide Sangiorgi
0+ reads · Apr 22
Patrick Dinklage, Johannes Fischer, Dominik Köppl, Marvin Löbel, Kunihiko Sadakane
0+ reads · Apr 22
Arthur Hinsvark, Natalie Delworth, Miguel Del Rio, Quinten McNamara, Joshua Dong, Ryan Westerman, Michelle Huang, Joseph Palakapilly, Jennifer Drexler, Ilya Pirkin, Nishchal Bhandari, Miguel Jette
0+ reads · Apr 21
Lucy Downey, Achille Fonzone, Grigorios Fountas, Torren Semple
0+ reads · Apr 21
Yali Wan, Marina Meila
0+ reads · Apr 21
Boyi Li, Felix Wu, Ser-Nam Lim, Serge Belongie, Kilian Q. Weinberger
10+ reads · Feb 25, 2020
Yuanliu Liu, Peipei Shi, Bo Peng, He Yan, Yong Zhou, Bing Han, Yi Zheng, Chao Lin, Jianbin Jiang, Yin Fan, Tingwei Gao, Ganwen Wang, Jian Liu, Xiangju Lu, Danming Xie
4+ reads · Nov 19, 2018
Tomi Kinnunen, Jaime Lorenzo-Trueba, Junichi Yamagishi, Tomoki Toda, Daisuke Saito, Fernando Villavicencio, Zhenhua Ling
3+ reads · Sep 4, 2018
Benjamin Hou, Bishesh Khanal, Amir Alansary, Steven McDonagh, Alice Davidson, Mary Rutherford, Jo V. Hajnal, Daniel Rueckert, Ben Glocker, Bernhard Kainz
3+ reads · Jan 23, 2018
