<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.2 20120330//EN" "http://jats.nlm.nih.gov/publishing/1.2/JATS-journalpublishing1.dtd">
<!--<?xml-stylesheet type="text/xsl" href="article.xsl"?>-->
<article article-type="research-article" dtd-version="1.2" xml:lang="en" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<front>
<journal-meta>
<journal-id journal-id-type="issn">1868-6354</journal-id>
<journal-title-group>
<journal-title>Laboratory Phonology: Journal of the Association for Laboratory Phonology</journal-title>
</journal-title-group>
<issn pub-type="epub">1868-6354</issn>
<publisher>
<publisher-name>Ubiquity Press</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.5334/labphon.237</article-id>
<article-categories>
<subj-group>
<subject>Journal article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>A review of data collection practices using electromagnetic articulography</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Rebernik</surname>
<given-names>Teja</given-names>
</name>
<email>t.rebernik@rug.nl</email>
<xref ref-type="aff" rid="aff-1">1</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Jacobi</surname>
<given-names>Jidde</given-names>
</name>
<xref ref-type="aff" rid="aff-1">1</xref>
<xref ref-type="aff" rid="aff-2">2</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Jonkers</surname>
<given-names>Roel</given-names>
</name>
<xref ref-type="aff" rid="aff-1">1</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Noiray</surname>
<given-names>Aude</given-names>
</name>
<xref ref-type="aff" rid="aff-3">3</xref>
<xref ref-type="aff" rid="aff-4">4</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Wieling</surname>
<given-names>Martijn</given-names>
</name>
<xref ref-type="aff" rid="aff-1">1</xref>
<xref ref-type="aff" rid="aff-4">4</xref>
</contrib>
</contrib-group>
<aff id="aff-1"><label>1</label>Center for Language and Cognition, University of Groningen, NL</aff>
<aff id="aff-2"><label>2</label>Department of Cognitive Science, Macquarie University, AU</aff>
<aff id="aff-3"><label>3</label>Laboratory for Oral Language Acquisition, Department of Linguistics, University of Potsdam, DE</aff>
<aff id="aff-4"><label>4</label>Haskins Laboratories, New Haven, CT, US</aff>
<pub-date publication-format="electronic" date-type="pub" iso-8601-date="2021-03-01">
<day>01</day>
<month>03</month>
<year>2021</year>
</pub-date>
<pub-date pub-type="collection">
<year>2021</year>
</pub-date>
<volume>12</volume>
<issue>1</issue>
<elocation-id>6</elocation-id>
<history>
<date date-type="received" iso-8601-date="2019-10-15">
<day>15</day>
<month>10</month>
<year>2019</year>
</date>
<date date-type="accepted" iso-8601-date="2020-12-10">
<day>10</day>
<month>12</month>
<year>2020</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright: &#x00A9; 2021 The Author(s)</copyright-statement>
<copyright-year>2021</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0/">
<license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License (CC-BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. See <uri xlink:href="http://creativecommons.org/licenses/by/4.0/">http://creativecommons.org/licenses/by/4.0/</uri>.</license-p>
</license>
</permissions>
<self-uri xlink:href="http://www.journal-labphon.org/articles/10.5334/labphon.237/"/>
<abstract>
<p>This paper reviews data collection practices in electromagnetic articulography (EMA) studies, with a focus on sensor placement. We first introduce electromagnetic articulography as a method. We then focus on existing data collection practices. Our overview is based on a literature review of 905 publications from a large variety of journals and conferences, identified through a systematic keyword search in Google Scholar. The review shows that experimental designs vary greatly, which in turn may limit researchers&#8217; ability to compare results across studies. Finally, we describe an EMA data collection procedure that includes an articulatory-driven strategy for determining where to position sensors on the tongue without causing discomfort to the participant. We also evaluate three approaches for preparing (NDI Wave) EMA sensors reported in the literature with respect to the duration the sensors remain attached to the tongue: 1) attaching out-of-the-box sensors, 2) attaching sensors coated in latex, and 3) attaching sensors coated in latex with an additional latex flap. Results indicate no clear general effect of sensor preparation type on adhesion duration. A subsequent exploratory analysis reveals that sensors with the additional flap tend to adhere for shorter times than the other two types, but that this pattern is inverted for the most posterior tongue sensor.</p>
</abstract>
<kwd-group>
<kwd>Electromagnetic articulography</kwd>
<kwd>articulation</kwd>
<kwd>speech kinematics</kwd>
<kwd>EMA</kwd>
<kwd>NDI WAVE</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec>
<title>1. Introduction</title>
<p>Electromagnetic articulography (EMA) is a popular technique for the study of speech production that enables the tracking of articulatory kinematics using sensors attached primarily to the tongue, lips, and jaw. This paper provides a comprehensive overview of studies that have used EMA to investigate speech-related topics, with the ultimate goal of characterizing various data collection procedures and comparing them to our own practices. In Section 2, we introduce electromagnetic articulography and address some methodological considerations, such as device safety and accuracy, usage, and general sensor placement guidelines. Section 3 continues with a discussion of data collection practices drawn from a systematic literature review of 905 publications from conferences and journals published since 1987. In this contribution, we focus on the 412 journal publications. Sections 4 and 5 are practical in nature: we describe our own data collection procedure in detail, and we evaluate the adhesion duration of three different sensor preparation types in a sensor adhesion experiment. We hope this paper will be of help to those starting out with EMA data collection.</p>
</sec>
<sec sec-type="methods">
<title>2. An Introduction to Electromagnetic Articulography: Methodological Considerations</title>
<p>We first focus on introducing electromagnetic articulography (EMA) as a method. This section addresses some methodological considerations, including the method&#8217;s advantages and limitations, device accuracy and safety, various uses, compatibility with other experimental methods, and participants who are suitable for EMA studies.</p>
<sec>
<title>2.1. Advantages and limitations of EMA</title>
<p>Electromagnetic articulography (EMA)<xref ref-type="fn" rid="n1">1</xref> is a point-tracking method, whereby sensors placed on target articulators (including the tongue, lips, and jaw) are used to track movement in real time in 3D. As with any method, there are both advantages and disadvantages to EMA (<xref ref-type="bibr" rid="B79">Kochetov, 2020</xref>; <xref ref-type="bibr" rid="B30">Earnest &amp; Max, 2003</xref>; <xref ref-type="bibr" rid="B91">Maeda et al., 2006</xref>; <xref ref-type="bibr" rid="B101">Mennen, Scobbie, de Leeuw, Schaeffler, &amp; Schaeffler, 2010</xref>; <xref ref-type="bibr" rid="B146">Stone, 2010</xref>; <xref ref-type="bibr" rid="B168">Whalen et al., 2005</xref>). We first discuss some advantages of EMA. The data collected within the oral cavity have high spatial accuracy and temporal resolution (see Section 2.4 below), yielding relatively precise information on articulatory gestures. Unlike some other methods (such as ultrasound tongue imaging), EMA makes it possible to measure multiple articulators simultaneously, thereby allowing the investigation of inter-articulator interactions. It is one of the few methods that allow researchers to study the movements of articulators directly, as opposed to more indirect acoustic methods. EMA is biologically safe (contrary to some methods used in the past, such as x-ray cineradiography or microbeam) and minimally invasive. Furthermore, the sensors are mostly well tolerated by adult participants and only moderately interfere with speech production (speakers adapt within 10 minutes; <xref ref-type="bibr" rid="B29">Dromey, Hunter, &amp; Nissen, 2018</xref>). Compared to other methods used to track speech articulators, articulographs restrict the participants&#8217; movement less, do not require a line of sight (unlike, e.g., VICON or OptoTrak), and are not restricted to in-plane visualization (unlike, e.g., real-time magnetic resonance imaging or ultrasound tongue imaging).</p>
<p>However, several limitations should be considered when employing EMA for speech-related investigations. For example, the positioning of sensors is limited to the anterior oral tract. It is more problematic to place sensors on the posterior part of the tongue (e.g., the tongue dorsum) than on its anterior part, and it is not possible to track velum movements without discomfort to the participants (see exceptions below). Furthermore, depending on the size and location of the articulator of interest, it is not possible to place many sensors on an articulator at the same time due to mutual electrical interference and increased perturbation of articulation. Additionally, sensors cannot be placed too close to each other without disturbing their measurement accuracy (the Carstens AG500 manual, for example, states that the minimum distance between sensors should be 8 mm), which further limits the number of points that can be tracked on the articulators. Finally, because EMA is a fixed point-tracking technique, it does not capture the global movements of articulators, for instance the full midsagittal tongue shape (as obtained using rtMRI).</p>
<p>Additionally, the equipment is expensive and requires a relatively high level of technical knowledge, prior training, and practice to use successfully. Finally, as sensors are firmly affixed to orofacial structures, they constitute a form of articulatory perturbation. While articulation returns to nearly normal after a while (see below), the acoustics are altered when sensors are attached (<xref ref-type="bibr" rid="B97">Meenakshi, Yarra, Yamini, &amp; Ghosh, 2014</xref>). Nevertheless, some earlier problems (such as restricted head movement, the need for extensive calibration, and data being restricted to the midsagittal plane) have largely been eliminated with the newer devices (see more details below).</p>
</sec>
<sec>
<title>2.2. EMA devices</title>
<p>EMA systems have been used for speech-related research since the 1980s (see Figure <xref ref-type="fig" rid="F1">1</xref> for an overview of EMA market releases). The MIT system articulograph (<xref ref-type="bibr" rid="B119">Perkell et al., 1992</xref>), the Movetrack system (<xref ref-type="bibr" rid="B11">Branderud, 1985</xref>), and the Aurora system (NDI; <xref ref-type="bibr" rid="B82">Kr&#246;ger et al., 2000</xref>) were among the first commercially available articulographs.<xref ref-type="fn" rid="n2">2</xref> For the past two decades and until recently, there were two main manufacturers with a continuing production of EMA devices, namely Carstens Medizinelektronik (Bovenden, Germany) and Northern Digital Inc. (Waterloo, Canada). Carstens Medizinelektronik has manufactured several articulographs since the late 1980s, including the models AG100, AG200, AG500, and the most recent AG501. Northern Digital Inc. (NDI) manufactured the Wave articulograph, which came to the market in 2009 and was discontinued with the arrival of their latest articulograph, the NDI Vox, in early 2020. The NDI Vox has since likewise been discontinued, as NDI decided to reduce their product portfolio (<xref ref-type="bibr" rid="B115">Northern Digital Inc., 2020</xref>). Consequently, at present only Carstens offers a commercial articulograph that has not been discontinued.</p>
<fig id="F1">
<label>Figure 1</label>
<caption>
<p>Timeline of articulographs. Note that the AG200 is not included as it was a combination of the AG500 with the helmet from the AG100. The Aurora system is not included because it was a point-tracking tool but not one meant exclusively for the study of speech production.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="/article/id/6289/file/75942/"/>
</fig>
<p>As articulographs are costly, it is not uncommon for a lab to use an older system despite a new version being available on the market. Regardless, considerable advancements have been made since the first commercial articulograph. Technological advances have made it possible to collect more comprehensive data, going from 2D EMMA (midsagittal) systems to 3D (or rather 5D) systems collecting three Cartesian coordinates and two angular coordinates (<xref ref-type="bibr" rid="B60">Hoole &amp; Zierdt, 2010</xref>). Thus, although early articulographs only measured in one plane (i.e., the midsagittal plane), modern devices track data in three isotropic spatial and two angular dimensions, and sensor orientation is tracked in addition to position. Furthermore, early articulographs required extensive calibration before testing and restricted the participants&#8217; head movement, while modern systems permit free head movement.</p>
</sec>
<sec>
<title>2.3. Uses of EMA</title>
<p>Starting in the 1980s, EMA was designed as a way to track points both inside and outside the vocal tract (<xref ref-type="bibr" rid="B130">Sch&#246;nle et al., 1987</xref>). Early studies evaluated the suitability of EMA for tracking speech movements (e.g., <xref ref-type="bibr" rid="B54">H&#246;hne et al., 1987</xref>; <xref ref-type="bibr" rid="B57">Hoole &amp; Gfoerer, 1990</xref>; <xref ref-type="bibr" rid="B93">Maurer, Gr&#246;ne, Landis, Hoch, &amp; Sch&#246;nle, 1993</xref>) as well as for clinical use (e.g., <xref ref-type="bibr" rid="B131">Sch&#246;nle, M&#252;ller, &amp; Wenig, 1989</xref>; <xref ref-type="bibr" rid="B34">Engelke, Sch&#246;nle, Kring, &amp; Richter, 1989</xref>; <xref ref-type="bibr" rid="B32">Engelke, W., Engelke, D., &amp; Schwetska, 1990</xref>). Nowadays, EMA is predominantly employed for the study of speech motor control&#8212;in individuals with and without speech disorders&#8212;but its uses remain broad. For example, it can be used for the study of orofacial processes in which articulators are actively involved, such as mastication (e.g., <xref ref-type="bibr" rid="B120">Peyron, Mioche, Renon, &amp; Abouelkaram, 1996</xref>; <xref ref-type="bibr" rid="B38">Fuentes et al., 2018</xref>; <xref ref-type="bibr" rid="B55">Hoke et al., 2019</xref>) or swallowing (e.g., <xref ref-type="bibr" rid="B62">Horn, K&#252;hnast, Axmann-Krcmar, &amp; G&#246;z, 2004</xref>; <xref ref-type="bibr" rid="B141">Steele &amp; van Lieshout, 2009</xref>; <xref ref-type="bibr" rid="B1">Alvarez, Dias, Lezcano, Arias, &amp; Fuentes, 2019</xref>; see also <xref ref-type="bibr" rid="B140">Steele, 2015, for a short overview of EMA and other instrumental techniques for the study of swallowing</xref>).</p>
<p>The uses of EMA in the study of speech production are likewise varied. Beyond collecting parallel acoustic data, there has been a continued interest in supplementing articulographic data with other speech data, either by collecting data with two devices simultaneously in the same session (if technically possible) or by collecting data from the same participants in separate sessions and coupling the data afterwards. Some of the methods that have been used to collect data in the same session as EMA include: ultrasound tongue imaging (UTI) (e.g., <xref ref-type="bibr" rid="B2">Aron et al., 2016</xref>; <xref ref-type="bibr" rid="B7">Benu&#353; &amp; Gafos, 2007</xref>), electropalatography (EPG; <xref ref-type="bibr" rid="B166">West, 1999</xref>; <xref ref-type="bibr" rid="B137">Simonsen, Moen, &amp; Cowen, 2008</xref>; <xref ref-type="bibr" rid="B46">Harper, Lee, Goldstein, &amp; Byrd, 2018</xref>), electromyography (EMG; e.g., <xref ref-type="bibr" rid="B125">Rong, Loucks, Kim, &amp; Hasegawa-Johnson, 2012</xref>), and motion capture (e.g., <xref ref-type="bibr" rid="B84">Kroos, Bundgaard-Nielsen, &amp; Best, 2012</xref>; <xref ref-type="bibr" rid="B81">Krivokapi&#263;, Tiede, &amp; Tyrone, 2017</xref>). EMA and UTI, especially, are frequently used together, as EMA sensors can be used to provide a fixed reference for ultrasound recordings (e.g., <xref ref-type="bibr" rid="B155">Tiede, Chen, &amp; Whalen, 2019</xref>). Methods whose data can be coupled with EMA data after recording additionally include real-time magnetic resonance imaging (rtMRI; e.g., <xref ref-type="bibr" rid="B76">Kim, Lammert, Ghosh, &amp; Narayanan, 2014</xref>). Successful attempts have also been made to collect data from two speakers simultaneously using a dual EMA setup (e.g., <xref ref-type="bibr" rid="B40">Geng et al., 2013</xref>; <xref ref-type="bibr" rid="B154">Tiede et al., 2010</xref>).</p>
<p>Some researchers have made their EMA databases publicly available, sometimes concurrently with other kinematic data collection methods (e.g., rtMRI and UTI data collected from the same participants). Notable articulatory corpora include the USC-TIMIT multimodal speech production database (<xref ref-type="bibr" rid="B109">Narayanan et al., 2014</xref>), the MOCHA-TIMIT multi-channel articulatory database (<xref ref-type="bibr" rid="B174">Wrench, 2000</xref>), the TORGO database of acoustic and articulatory speech from dysarthric speakers (<xref ref-type="bibr" rid="B127">Rudzicz et al., 2012</xref>), the EMA-MAE corpus of Mandarin-Accented English (<xref ref-type="bibr" rid="B68">Ji, Berry, &amp; Johnson, 2014</xref>), the mngu0 articulatory corpus (<xref ref-type="bibr" rid="B123">Richmond, Hoole, &amp; King, 2011</xref>), the Haskins rate contrast database (<xref ref-type="bibr" rid="B156">Tiede et al., 2017</xref>), the MSPKA articulatory corpus of Italian (<xref ref-type="bibr" rid="B19">Canevari, Badino, &amp; Fadiga, 2015</xref>), the DKU-JNU-EMA database on Mandarin and Chinese dialects (<xref ref-type="bibr" rid="B18">Cai et al., 2018</xref>), the Mandarin-Tibetan speech corpus (<xref ref-type="bibr" rid="B90">Lobsang et al., 2016</xref>), and the database of Norwegian speech sounds (<xref ref-type="bibr" rid="B103">Moen, Gram Simonsen, &amp; Lindstad, 2004</xref>).</p>
<p>EMA has been used to provide accurate information on movements inside the vocal tract for animating talking heads (e.g., <xref ref-type="bibr" rid="B3">Badin, Tarabalka, Elisei, &amp; Bailly, 2010</xref>; <xref ref-type="bibr" rid="B42">Gilbert, Olsen, Leung, &amp; Stevens, 2015</xref>), synthesizing speech (e.g., <xref ref-type="bibr" rid="B10">Bocquelet, Hueber, Girin, Savariaux, &amp; Yvert, 2016</xref>), performing acoustic-to-articulatory inversion (e.g., <xref ref-type="bibr" rid="B43">Girin, Hueber, &amp; Alameda-Pineda, 2017</xref>; <xref ref-type="bibr" rid="B138">Sivaraman, Espy-Wilson, &amp; Wieling, 2017</xref>), and improving automatic speech recognition (ASR) software (e.g., <xref ref-type="bibr" rid="B27">Demange &amp; Ouni, 2011</xref>; <xref ref-type="bibr" rid="B163">Wang, Samal, Green, &amp; Rudzicz, 2012</xref>; <xref ref-type="bibr" rid="B102">Mitra et al., 2017</xref>). It can additionally be used to provide real-time video feedback of articulatory movements, and it is thus useful both in second language acquisition, to help learners with target pronunciation (<xref ref-type="bibr" rid="B148">Suemitsu, Dang, Ito, &amp; Tiede, 2015</xref>), and in speech therapy as a biofeedback device (<xref ref-type="bibr" rid="B108">Murdoch, 2011</xref>; <xref ref-type="bibr" rid="B160">van Lieshout, 2007</xref>). Katz, Carter, and Levitt (<xref ref-type="bibr" rid="B72">2007</xref>), for example, used EMA for the treatment of buccofacial apraxia, McNeil et al. (<xref ref-type="bibr" rid="B95">2010</xref>) used it to study acquired apraxia of speech, and Yunusova et al. (<xref ref-type="bibr" rid="B176">2017</xref>) used it to provide feedback to patients with Parkinson&#8217;s disease.</p>
</sec>
<sec>
<title>2.4. Accuracy and safety of EMA devices</title>
<p>Since the advent of EMA devices on the market, their sampling rates and numbers of channels have increased, and their accuracy has improved. Regarding the recording capabilities of the most recent articulographs, the NDI Wave and NDI Vox have a maximum sampling rate of 400 samples/s and can track 16 channels simultaneously (i.e., up to 16 sensors can be used). The AG500 can record 200 samples/s in 12 channels, while the AG501 can record 1250 samples/s in up to 24 channels (<xref ref-type="bibr" rid="B136">Sigona et al., 2018</xref>; <xref ref-type="bibr" rid="B128">Savariaux, Badin, Samson, &amp; Gerber, 2017</xref>). The sampling rates of current devices are more than sufficient to capture speech movements of the articulators. For example, Tasko and McClean (<xref ref-type="bibr" rid="B150">2004</xref>) indicated that the maximum speed of the tongue body during connected speech was 200 mm/s, and controlled (non-ballistic) movements are much slower. A sampling rate of 400 Hz thus provides sufficient temporal resolution to track the fastest known articulatory movements.</p>
<p>Several studies have investigated the spatial accuracy of articulographs. Berry (<xref ref-type="bibr" rid="B8">2011</xref>) reported that the Wave system showed &lt; 0.5 mm errors for 95% of position samples recorded during human jaw movement for nine out of ten participants. A study on the Carstens AG500 reported a median error of &lt; 0.5 mm across different types of recordings, including manual movements and various speech tasks, with the error magnitude depending on calibration, on the location of the sensors in the electromagnetic field, and on the proximity between the sensors (<xref ref-type="bibr" rid="B175">Yunusova, Green, &amp; Mefferd, 2009</xref>). In addition, the AG500 was found to display some numerical instabilities and anomalies (<xref ref-type="bibr" rid="B145">Stella, M., Stella, A., Grimaldi, &amp; Fivela, 2012</xref>) which were not predictable (<xref ref-type="bibr" rid="B83">Kroos, 2012</xref>). Finally, a comparison between the Wave and several Carstens systems (namely the AG200, AG500, and AG501) revealed that all four devices showed a local precision of around 1 mm, but a large range of global precision, spanning from 3 mm to 21.8 mm (<xref ref-type="bibr" rid="B128">Savariaux et al., 2017</xref>), with the AG501 being the most accurate device, with a precision of 0.3 mm (RMS; <xref ref-type="bibr" rid="B31">Electromagnetic Articulograph, 2019</xref>). Comparisons of the AG500 and AG501 additionally revealed the AG501 to be more accurate, stable, and user-friendly (<xref ref-type="bibr" rid="B144">Stella et al., 2013</xref>; <xref ref-type="bibr" rid="B136">Sigona et al., 2018</xref>) than the AG500. A recent study on the newest NDI articulograph&#8212;namely, the NDI Vox, which has recently been discontinued&#8212;has shown it to be significantly more accurate than the NDI Wave, with an average sensor-pair tracking error of 0.1 mm, although a direct side-by-side device comparison would be necessary to establish how the Vox compares with the AG501 (<xref ref-type="bibr" rid="B121">Rebernik, Jacobi, Tiede, &amp; Wieling, in revision</xref>).</p>
<p>In general, electromagnetic articulographs are safe to use (<xref ref-type="bibr" rid="B48">Hasegawa-Johnson, 1998</xref>). The AG500, AG501, NDI Wave, and NDI Vox articulographs fulfil the safety requirements for electrical equipment as set by the International Electrotechnical Commission and the American Federal Communications Commission (<xref ref-type="bibr" rid="B21">Carstens AG500 Manual, 2006</xref>; <xref ref-type="bibr" rid="B22">Carstens AG501 Manual, 2014</xref>; Wave User Guide, <xref ref-type="bibr" rid="B113">Northern Digital Inc., 2009, rev. 2016</xref>; Vox User Guide, <xref ref-type="bibr" rid="B114">Northern Digital Inc., 2019</xref>). Note, however, that little research has targeted the specific electromagnetic frequency ranges of EMA systems (<xref ref-type="bibr" rid="B59">Hoole &amp; Nguyen, 1999</xref>; <xref ref-type="bibr" rid="B30">Earnest &amp; Max, 2003</xref>). Furthermore, due to the moderate-strength magnetic field,<xref ref-type="fn" rid="n3">3</xref> a few exclusion criteria that impact participant recruitment must be considered, predominantly the use of implanted devices that might be prone to electromagnetic interference. These include (as discussed in the Wave User Guide, <xref ref-type="bibr" rid="B113">Northern Digital Inc., 2009, rev. 2016</xref>, and <xref ref-type="bibr" rid="B21">Carstens AG500 manual, 2006</xref>):</p>
<list list-type="simple">
<list-item><p>&#8211;	the use of a pacemaker (the magnetic field of the EMA may interfere with pacemaker operation; see <xref ref-type="bibr" rid="B139">Smith &amp; Assen, 1992, for a description of how electromagnetic fields affect cardiac pacemakers</xref>);</p></list-item>
<list-item><p>&#8211;	large metal objects in or around the head (such as a hearing aid or cochlear implant; see <xref ref-type="bibr" rid="B25">Crose, Kuk, &amp; Bindeballe, 2011</xref>, and <xref ref-type="bibr" rid="B157">Tognola, Parazzini, Sibella, Paglialonga, &amp; Ravazzani, 2007, for electromagnetic interference in hearing aids and cochlear implants, respectively</xref>);</p></list-item>
<list-item><p>&#8211;	the use of insulin pumps (see <xref ref-type="bibr" rid="B178">Zhang, Jones, &amp; Jetley, 2010, for a hazard analysis of insulin pumps</xref>).</p></list-item>
</list>
<p>Some studies have tested the potential adverse effects of the EMA magnetic fields on metal objects in the field and, vice versa, the effect of metal objects on the integrity of the collected EMA data. Katz et al. (<xref ref-type="bibr" rid="B71">2003</xref>) tested compatibility of the Clarion 1.2 S-Series cochlear implant with the Carstens AG100 articulograph in order to determine whether EMA affects the functioning of the implant and the participants&#8217; speech perception on the one hand, and whether the implant could potentially affect the accuracy of EMA data on the other hand. They determined that the tested cochlear implant was compatible with the AG100, as no adverse effects could be observed.</p>
<p>Joglar, Nguyen, Garst, and Katz (<xref ref-type="bibr" rid="B69">2009</xref>) tested potential interference between pacemakers/implantable cardioverter-defibrillators with the Carstens AG100. They determined that devices from Medtronic (type D154VRC), St. Jude (types 5172 and V-193), and Guidant (types 1860, T180, 1852 and 1853) were compatible with the Carstens AG100. Finally, M&#252;cke et al. (<xref ref-type="bibr" rid="B107">2018</xref>; see also <xref ref-type="bibr" rid="B50">Hermes, M&#252;cke, Thies, &amp; Barbe, 2019</xref>) tested Essential Tremor patients who had undergone thalamic deep brain stimulation (DBS) surgery. Participants were tested using the Carstens AG501 while the implant was active and inactive, with no reported adverse effects. However, as new articulographs and medical devices are introduced, it is necessary to verify their field strength and electromagnetic frequency before doing any testing on participants. Additionally, some researchers advise against including pregnant women in empirical studies using EMA (<xref ref-type="bibr" rid="B59">Hoole &amp; Nguyen, 1999</xref>; <xref ref-type="bibr" rid="B146">Stone, 2010</xref>) as the effect of the magnetic field is not entirely clear and it is better to err on the side of caution.</p>
</sec>
<sec>
<title>2.5. Participants</title>
<p>Due to the high time demands of the method&#8212;including long participant preparation times as well as data processing and analysis steps&#8212;EMA studies frequently limit their number of participants. Our literature review (see description below) showed that around 75% of studies published in journals included ten participants or fewer; around 46% included five participants or fewer. This is in line with Kochetov (<xref ref-type="bibr" rid="B79">2020</xref>), who reported the median number of participants in an EMA study to be five. Early studies (i.e., those published before roughly 2003) often included only one or two participants, and it was not uncommon for one of the authors to be a participant. With EMA&#8217;s increasing popularity, however, there has also been an increase in the number of studies with more participants, with the largest samples including around 50 participants (e.g., <xref ref-type="bibr" rid="B132">Sch&#246;tz, Frid, &amp; L&#246;fqvist, 2013, N = 50</xref>; <xref ref-type="bibr" rid="B23">Cheng, Murdoch, Gooz&#233;e, &amp; Scott, 2007, N = 48</xref>; <xref ref-type="bibr" rid="B171">Wieling et al., 2016, N = 48</xref>).</p>
<p>In general, most participants tested with EMA are healthy adults (around 80% of the studies). Nevertheless, several studies have tested children from five years of age onwards (e.g., <xref ref-type="bibr" rid="B70">Katz &amp; Bharadwaj, 2001</xref>; <xref ref-type="bibr" rid="B23">Cheng et al., 2007</xref>; <xref ref-type="bibr" rid="B132">Sch&#246;tz et al., 2013</xref>), giving important insights into the development of individual articulators during the process of early speech acquisition. Articulographs have also frequently been used to study disordered speech in individuals with various conditions that can impact speech production and/or speech motor control, ranging from speech disorders such as stuttering and cluttering (<xref ref-type="bibr" rid="B28">Didirkova &amp; Hirsch, 2019</xref>; <xref ref-type="bibr" rid="B94">McClean, Tasko, &amp; Runyan, 2004</xref>; <xref ref-type="bibr" rid="B47">Hartinger &amp; Mooshammer, 2008</xref>) or apraxia of speech (e.g., <xref ref-type="bibr" rid="B6">Bartle-Meyer, Gooz&#233;e, &amp; Murdoch, 2009</xref>; <xref ref-type="bibr" rid="B112">Nijland, Maassen, Hulstijn, &amp; Peters, 2004</xref>), through hypokinetic dysarthria (e.g., <xref ref-type="bibr" rid="B75">Kearney et al., 2018</xref>; <xref ref-type="bibr" rid="B100">Mefferd &amp; Dietrich, 2019</xref>) and Amyotrophic Lateral Sclerosis (e.g., <xref ref-type="bibr" rid="B89">Lee &amp; Bell, 2018</xref>; <xref ref-type="bibr" rid="B134">Shellikeri et al., 2016</xref>), to congenital conditions such as cleft lip (e.g., <xref ref-type="bibr" rid="B161">van Lieshout, Rutjes, &amp; Spauwen, 2002</xref>) or congenital blindness (e.g., <xref ref-type="bibr" rid="B159">Trudeau-Fisette, Tiede, &amp; M&#233;nard, 2017</xref>). Using EMA to study disordered speech (more studies can be found in the Appendix) is important because it provides insight into underlying issues of speech motor control that cannot be detected through acoustics alone. However, EMA can also be fatiguing for participants, and researchers should thus distinguish between what they <italic>can</italic> and what they <italic>should</italic> ask of their participants (<xref ref-type="bibr" rid="B41">Gibbon, 2008</xref>; <xref ref-type="bibr" rid="B160">van Lieshout, 2007</xref>; see below).</p>
</sec>
</sec>
<sec>
<title>3. Literature review</title>
<p>Section 3 of the paper is intended as a review and discussion of the prevalent trends in EMA data collection over the past three decades. To identify these practices and trends, we performed a systematic literature review.<xref ref-type="fn" rid="n4">4</xref> Using Google Scholar, we collected journal publications, conference proceedings papers, and other academic writings published between 1987 and 2019, employing the search terms &#8216;articulography,&#8217; &#8216;articulograph,&#8217; &#8216;articulometry,&#8217; and &#8216;articulometer.&#8217; We excluded publications that were less than four pages long, publications that did not describe participant studies (e.g., because the authors used an existing database, focused on a new analysis procedure, or assessed more technical aspects of EMA such as device accuracy), and publications written in languages other than English.<xref ref-type="fn" rid="n5">5</xref> These search criteria led to 905 identified publications, which likely encompass the large majority of published works utilizing articulographs and should thus provide a representative overview of EMA data collection procedures. The present review considers 412 journal publications, 413 conference papers, and 80 other writings (most frequently doctoral dissertations).</p>
<p>During the reviewing process, we identified the following parameters: type of EMA device used, number of participants, population, total number of sensors, number of tongue sensors, sensor placement, sensor preparation, and adhesive used for sensor placement. Not all publications report all of this information. For example, while most publications mention the device type (especially after several manufacturers started producing articulographs) and the number of sensors, few mention the adhesive used.</p>
<p>In the Appendix, we have provided a table with all identified studies. Please note that for this paper, we have analyzed the trends and practices based on journal publications only (N = 412). This prevents us from counting the same study multiple times, because studies described in journal publications have often already been presented at one or more conferences but are rarely published in more than one journal.</p>
<sec>
<title>3.1. Data collection practices</title>
<p>To draw valid conclusions about speech kinematics and speech motor control based on EMA data, it is necessary to ensure between-subjects and between-studies comparability. On the one hand, it is important to correctly place EMA sensors on the speech articulators depending on the specific goals of the study and to optimize sensor adhesion time to ensure cross-trial comparability (after re-attachment, a sensor might not be in the exact same position as before). On the other hand, it is necessary to make the experimental procedure as comfortable as possible for participants while not impeding scientific accuracy.</p>
<p>In the sections below, we lean on our literature review to report some general information on sensor placement, followed by information on certain anatomical considerations that might result in a different sensor attachment strategy, and finally information on the placement and preparation of specific sensor categories (including reference sensors, jaw movement sensors, tongue sensors, and lip sensors).</p>
<p>At this point, we would like to emphasize that most authors follow a certain template when reporting on their EMA study. Such a template is usually of the form:</p>
<disp-quote>
<p>Articulatory data was collected using [<italic>device name, device manufacturer</italic>] at a sampling rate of [<italic>sampling rate</italic>, often 100, 200 or 400 Hz]. Acoustic data was simultaneously collected using [<italic>microphone device</italic>] at [<italic>sampling frequency</italic>, often 16 kHz]. [<bold><italic>Number</italic></bold>] sensors were attached to the tongue, lips and jaw using the non-toxic adhesive [<bold><italic>name adhesive</italic></bold>]. Specifically, [<bold><italic>number</italic></bold>] sensors were affixed to the tongue: one on the tongue tip, [<bold><italic>location</italic></bold>, often &#8220;about 1 cm from the anatomical tip&#8221;], one on the back of the tongue [<bold><italic>location</italic></bold>, often &#8220;as far back as comfortable&#8221;], and one [<bold><italic>location</italic></bold>, with three sensors often &#8220;midway between the tongue tip and tongue back sensor&#8221;]. One sensor affixed to [<bold><italic>location</italic></bold>, often the lower incisor] tracked jaw movements and two sensors were placed on the vermillion border of the upper and lower lips. [<bold><italic>Number</italic></bold>] reference sensors were additionally placed on [<bold><italic>location</italic></bold>, often the left and right mastoid, nasion and/or upper incisor] to correct for head movement. A recording of the bite plane was made using [<bold><italic>description of the process</italic></bold>] and a palate trace was made [<bold><italic>description of the process</italic></bold>].</p>
</disp-quote>
<p>In the following sections, we discuss the variables that are indicated in this template in bold. Some of the other parts (such as devices and sampling rates) have already been discussed above. Finally, the following sections do not provide information on the EMA data analysis process: The reader is directed to Gafos, Kirov, and Shaw (<xref ref-type="bibr" rid="B39">2010</xref>), who provide guidelines for using mview, the frequently used EMA data analysis programme developed by Mark Tiede at Haskins Laboratories (<xref ref-type="bibr" rid="B153">Tiede, 2005</xref>); Hoole (<xref ref-type="bibr" rid="B56">2012</xref>), who provides a tutorial on his software for processing AG500/AG501 data; and Kolb (<xref ref-type="bibr" rid="B80">2015</xref>), who details some other existing software tools and analysis methods. A tutorial on how to analyze EMA data using non-linear regression techniques is provided by Wieling (<xref ref-type="bibr" rid="B170">2018</xref>).</p>
</sec>
<sec>
<title>3.2. General sensor placement information</title>
<p>Articulographs can be used to study the behaviour of both extraoral (i.e., the lips and the jaw) and intraoral (i.e., the tongue) articulators. The exact choice of sensors depends on several factors, including the studied population (clinical versus healthy, see below; this impacts the number of intraoral sensors) and the sounds to be investigated (e.g., apical versus lateral; this impacts sensor placement). Researcher preference also plays a role: Some prefer to attach the minimum number of sensors (to decrease the time necessary for participant preparation), while others prefer to attach more sensors (to collect additional data that can be used to answer further research questions). With few exceptions, sensors are placed midsagittally.</p>
<p>The number of intraoral sensors is an important consideration in EMA studies. On the one hand, having more sensors on the tongue allows the tracking of more points and thus yields a better picture of tongue movement. On the other hand, when the intraoral jaw movement sensor and the reference sensor are also included on the lower and upper incisors, respectively, speakers frequently have five or more wires in their mouth. This may lead to discomfort and affect participants&#8217; speech. More tongue sensors are especially problematic where sensitive populations are concerned. These individuals may be more prone to fatigue (e.g., <xref ref-type="bibr" rid="B36">Friedman et al., 2007, on fatigue in PD patients</xref>), more likely to drool (<xref ref-type="bibr" rid="B122">Reddihough &amp; Johnson, 1999</xref>), and may find it more difficult to stick out their tongue or open their mouth. Furthermore, their speech is more likely to be impeded by a foreign object in their oral cavity. Children, in turn, have smaller tongues, salivate more, and need more frequent toilet visits, which necessitates shorter experimental procedures, including shorter preparation times. When testing children and patients, researchers therefore often opt for only two tongue sensors (tongue tip and tongue back) in addition to the intraoral jaw movement sensor and the intraoral reference sensor.</p>
<p>While the exact sensor placement depends on the study, there are some typical sensor placements. These are depicted in Figure <xref ref-type="fig" rid="F2">2</xref>, which shows movement sensors used to track the movement of articulators (red dots; including the lips, jaw, and tongue) and reference sensors, placed on orofacial structures that do not move during speech production (green dots; including both mastoids, the nasion, and upper incisor). More details on individual sensor categories are provided below.</p>
<fig id="F2">
<label>Figure 2</label>
<caption>
<p>Common placement of EMA sensors: Red dots mark movement sensors, green dots reference sensors. Original image by Tavin, distributed under the CC Attribution 3.0 Unported license (sensor points were added by the authors).</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="/article/id/6289/file/75943/"/>
</fig>
<p>After all sensors have been placed, a biteplate<xref ref-type="fn" rid="n6">6</xref> recording can be made with a biteplate object that has several sensors attached to it (see Figure <xref ref-type="fig" rid="F9">9</xref> in Section 4.2 for a picture of our lab&#8217;s biteplate with three sensors). The object is placed between the participant&#8217;s teeth and a recording is made to obtain the relative orientation of the sensors on the biteplate compared to the reference sensors. This information is then used to rotate the acquired sensor movement data (of the sensors attached to the articulators) to a comparable occlusal plane per participant (<xref ref-type="bibr" rid="B167">Westbury, 1994</xref>). Finally, palate trace recordings are made, where a sensor is used to trace the palate across the occlusal plane, providing an estimate of the shape of participants&#8217; oral cavity (see <xref ref-type="bibr" rid="B111">Neufeld &amp; van Lieshout, 2014, for a description on how EMA sensors can be used to construct a 3D model of the hard palate</xref>).</p>
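The geometry of the biteplate step can be sketched in a few lines: the three biteplate sensors define a plane, its normal follows from a cross product, and a Rodrigues rotation maps that normal onto the vertical axis, making the occlusal plane horizontal. The plain-Python sketch below uses hypothetical sensor coordinates and is only an illustration of this geometry, not the routine used by any particular EMA processing package.

```python
import math

def sub(a, b): return [a[i] - b[i] for i in range(3)]
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]
def unit(v):
    n = math.sqrt(dot(v, v))
    return [x / n for x in v]

def rotation_to_vertical(normal):
    """Rodrigues rotation matrix mapping the unit plane normal onto [0, 0, 1]."""
    v = cross(normal, [0.0, 0.0, 1.0])   # rotation axis (unnormalized)
    c = normal[2]                        # cos(angle) = normal . z
    if abs(1.0 + c) < 1e-12:             # degenerate case: normal points straight down
        return [[1.0, 0.0, 0.0], [0.0, -1.0, 0.0], [0.0, 0.0, -1.0]]
    K = [[0.0, -v[2], v[1]],
         [v[2], 0.0, -v[0]],
         [-v[1], v[0], 0.0]]
    # R = I + K + K^2 / (1 + c)
    return [[(1.0 if i == j else 0.0) + K[i][j]
             + sum(K[i][m] * K[m][j] for m in range(3)) / (1.0 + c)
             for j in range(3)] for i in range(3)]

def apply(R, p):
    return [dot(row, p) for row in R]

# Hypothetical biteplate sensor positions (front, left, right; in mm):
front, left, right = [0.0, 60.0, -10.0], [-20.0, 30.0, -12.0], [20.0, 30.0, -12.0]
normal = unit(cross(sub(left, front), sub(right, front)))
R = rotation_to_vertical(normal)
# After applying R, all three biteplate sensors share the same vertical
# coordinate, i.e., the occlusal plane is horizontal in the corrected system.
```

Applying the same rotation to all articulator sensors gives each participant's data a comparable occlusal-plane orientation.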
<p>The time it takes for all sensors to be placed varies. Earnest and Max (<xref ref-type="bibr" rid="B30">2003</xref>), for example, state that it can take anywhere between 30 and 60 minutes. This time can be reduced depending on the device, the number of sensors, and their placement. Before starting the experiment, researchers additionally allow some time for the participants to adjust to the sensors. A study by Dromey et al. (<xref ref-type="bibr" rid="B29">2018</xref>), who tested sensor habituation, found that after ten minutes, participants reached a level of habituation to the sensors that did not improve even if the habituation stage lasted longer. In general, if researchers include a sensor habituation stage, it is most often 5&#8211;10 minutes of informal conversation (e.g., <xref ref-type="bibr" rid="B74">Katz, Mehta, &amp; Wood, 2018</xref>; <xref ref-type="bibr" rid="B45">Gooz&#233;e et al., 2007</xref>).</p>
<p>Several brands of adhesive can be used to adhere the sensors. The Carstens website recommends Epiglu (Meyer Haake GmbH), whereas NDI does not give any adhesive recommendations on their website. Other popular adhesives include PeriAcryl&#174;90HV (Glustitch), Isodent cyanoacrylate adhesive (Ellman International), Cyano Veneer Fast (Scheu Dental Technology), Cyanodent (Ellman International), Histoacryl (B. Braun), and Aron Alpha (Toagosei). Note that Isodent and Cyanodent appear to have been discontinued<xref ref-type="fn" rid="n7">7</xref>, and Cyano Veneer Fast has not renewed its medical certification, while the intraoral use of Histoacryl may be problematic due to potential cytotoxic effects (<xref ref-type="bibr" rid="B129">Schneider &amp; Otto, 2012</xref>). PeriAcryl&#174;90HV has been used most often in recent years.</p>
<p>What these adhesives (except for Histoacryl; <xref ref-type="bibr" rid="B129">Schneider &amp; Otto, 2012</xref>) have in common is that they are intended for oral tissue (e.g., for use in dental or oral surgery), are biologically safe, and are relatively viscous. Dental cements, including Ketac&#8482;, Durelon, and Fuji, have also been used by several labs to attach tongue sensors (e.g., <xref ref-type="bibr" rid="B104">Mooshammer, Hoole, &amp; Geumann, 2006</xref>; <xref ref-type="bibr" rid="B149">Tabain, 2003</xref>; <xref ref-type="bibr" rid="B142">Steele &amp; van Lieshout, 2004</xref>), but they are more invasive, as they involve covering the tongue dorsum with a hard substance. Dental cement also causes faster deterioration of the sensors and leads to participant discomfort. However, it does have the benefit of keeping sensors attached to the tongue for a longer period of time (e.g., <xref ref-type="bibr" rid="B5">Ball, Gracco, &amp; Stone, 2001, state that the sensors remain firmly attached to the tongue surface for over 90 minutes</xref>).</p>
<p>Before discussing frequent sensor placements, it is also necessary to mention some more unusual sensor placements. In the past, sensors have been adhered to the velum using different means, from glue to atraumatic sutures (e.g., <xref ref-type="bibr" rid="B33">Engelke, Hoch, Bruns, &amp; Striebeck, 1996, number of participants N = 1</xref>; <xref ref-type="bibr" rid="B116">Okadome &amp; Honda, 2001, N = 3</xref>; <xref ref-type="bibr" rid="B66">Jaeger &amp; Hoole, 2011, N = 4</xref>). Other orofacial structures to which sensors have been adhered include the uvula (e.g., <xref ref-type="bibr" rid="B53">Hoenig &amp; Schoener, 1992, N = 30</xref>), thyroid cartilage/skin above the larynx (e.g., <xref ref-type="bibr" rid="B1">Alvarez et al., 2019</xref>, N = 14; <xref ref-type="bibr" rid="B135">Shosted, Carignan, &amp; Rong, 2011, N = 4</xref>; <xref ref-type="bibr" rid="B16">B&#252;ckins, Greisbach, &amp; Hermes, 2018, N = 4</xref>), and sublaminally on the underside of the tongue (e.g., <xref ref-type="bibr" rid="B124">Rochon &amp; Pompino-Marschall, 1999, N = 4</xref>).</p>
</sec>
<sec>
<title>3.3. Anatomical considerations</title>
<sec>
<title>3.3.1. Tongue anatomy</title>
<p>The tongue is a highly mobile and muscular articulator, responsible for speech, mastication, and deglutition. For the purposes of speech production, there are two ways of defining parts of the tongue: the anatomical perspective (see note<xref ref-type="fn" rid="n8">8</xref> for details) and the functional perspective, which defines the tongue in terms of the functions that its different parts serve in speech motor control and is thus directly relevant to EMA data collection. Following Ladefoged and Maddieson (<xref ref-type="bibr" rid="B87">1996, Ch. 2</xref>), the tongue consists of the tongue tip (Figure <xref ref-type="fig" rid="F3">3&#8211;1</xref>), tongue blade (just behind the tip), tongue body (Figure <xref ref-type="fig" rid="F3">3&#8211;2</xref>), and tongue root (Figure <xref ref-type="fig" rid="F3">3&#8211;3</xref>). The tip of the tongue starts parallel to the surfaces of the incisors and, with the tongue at rest, covers a small area about 2 mm wide on its upper surface. The blade of the tongue starts behind the tongue tip and extends to 2 mm behind the point of the tongue located below the center of the alveolar ridge (i.e., the point of maximum slope). Sounds made with the tongue tip are said to be apical, while those made with the tongue blade are said to be laminal. When discussing sensor placement, we refer to the sensor adhered to this most anterior part of the tongue (encompassing both the tip and the blade) as the &#8216;tongue tip&#8217; sensor (Figure <xref ref-type="fig" rid="F3">3&#8211;1</xref>).</p>
<fig id="F3">
<label>Figure 3</label>
<caption>
<p>Tongue anatomy: tongue tip (1), tongue body (2), and tongue root (3). Original image by Jonas T&#246;le, distributed under the CC0 1.0 Universal Public Domain Dedication license.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="/article/id/6289/file/75944/"/>
</fig>
<p>The tongue body (Figure <xref ref-type="fig" rid="F3">3&#8211;2</xref>) is the mass of tongue behind the blade and can roughly be divided into tongue body front (below the hard palate) and tongue body back (below the velum). Sounds that are produced with this part of the tongue are dorsal. When discussing sensor placement, we refer to sensors placed on the tongue body as either &#8216;tongue mid&#8217; or &#8216;tongue back,&#8217; depending on how close to the tongue root the sensor is. Unless specified differently, all sensors are placed along the midline of the tongue, i.e., the median sulcus, which divides the tongue into the left and right parts.</p>
<p>Finally, regarding the tongue parts that are not easily accessible for sensor placement and EMA measurements: the tongue root is found behind the tongue body (Figure <xref ref-type="fig" rid="F3">3&#8211;3</xref>), in the oropharynx, together with the epiglottis. Tracking tongue root movements with an EMA sensor is generally not feasible due to the gag reflex.</p>
<p>Depending on the target sounds and/or phenomena being studied, different sensors are used (see Table <xref ref-type="table" rid="T1">1</xref> for some common sounds and corresponding sensors). In all cases, it is presumed that reference sensors (most frequently on the nasion, upper incisor, and both mastoids) are additionally used. Note that the table only shows a limited subset of the sounds that have been studied with EMA. Importantly, Yunusova, Rosenthal, Rudy, Baljko, and Daskalogiannakis (<xref ref-type="bibr" rid="B177">2012</xref>) describe which lingual sounds can be distinguished using articulography and state that consonants cannot be distinguished on the basis of only one characteristic, such as the tongue position measured with a single sensor; more dimensions are needed (e.g., also lip sensors).</p>
<table-wrap id="T1">
<label>Table 1</label>
<caption>
<p>Sounds studied with EMA sensors. Other sensors are needed in order to determine how sensor location relates to other orofacial structures and articulators. Example studies are included.</p>
</caption>
<table>
<tr>
<th align="left" valign="top">Target sound</th>
<th align="left" valign="top">Articulator sensor placement</th>
<th align="left" valign="top">Example study</th>
</tr>
<tr>
<td colspan="3"><hr/></td>
</tr>
<tr>
<td align="left" valign="top">bilabial stops (<italic>/p, b/</italic>)</td>
<td align="left" valign="top">vermillion border of upper and lower lips</td>
<td align="left" valign="top"><xref ref-type="bibr" rid="B158">Tong and Ng, 2011</xref></td>
</tr>
<tr>
<td align="left" valign="top">velar stops (<italic>/k, g/</italic>)</td>
<td align="left" valign="top">tongue back sensor (close to place of constriction)</td>
<td align="left" valign="top"><xref ref-type="bibr" rid="B13">Brunner, Fuchs, and Perrier, 2011a</xref></td>
</tr>
<tr>
<td align="left" valign="top">alveolar stops (<italic>/t, d/</italic>)</td>
<td align="left" valign="top">tongue tip</td>
<td align="left" valign="top"><xref ref-type="bibr" rid="B85">K&#252;hnert and Hoole, 2004</xref></td>
</tr>
<tr>
<td align="left" valign="top">liquids (/<italic>l, r</italic>/)</td>
<td align="left" valign="top">tongue sensors placed laterally and midsagittally</td>
<td align="left" valign="top"><xref ref-type="bibr" rid="B63">Howson and Kochetov, 2015</xref></td>
</tr>
<tr>
<td align="left" valign="top">sibilants (/<italic>s, z, &#643;, &#658;</italic>/)</td>
<td align="left" valign="top">tongue tip</td>
<td align="left" valign="top"><xref ref-type="bibr" rid="B17">Bukmaier and Harrington, 2016</xref></td>
</tr>
<tr>
<td align="left" valign="top">(labio)dental fricatives (<italic>/f, v, &#952;, &#240;/</italic>)</td>
<td align="left" valign="top">three tongue sensors</td>
<td align="left" valign="top"><xref ref-type="bibr" rid="B172">Wieling, Veenstra, Adank, and Tiede, 2017</xref></td>
</tr>
<tr>
<td align="left" valign="top">trills</td>
<td align="left" valign="top">tongue sensors placed laterally and midsagittally</td>
<td align="left" valign="top"><xref ref-type="bibr" rid="B64">Howson, Kochetov, and van Lieshout, 2015</xref></td>
</tr>
<tr>
<td align="left" valign="top">vowels</td>
<td align="left" valign="top">three or more tongue sensors</td>
<td align="left" valign="top"><xref ref-type="bibr" rid="B58">Hoole, Mooshammer, and Tillmann, 1994</xref></td>
</tr>
<tr>
<td align="left" valign="top">nasal vowels</td>
<td align="left" valign="top">three tongue sensors</td>
<td align="left" valign="top"><xref ref-type="bibr" rid="B20">Carignan, Shosted, Shih, and Rong, 2011</xref></td>
</tr>
</table>
</table-wrap>
<p>Tongue shapes vary vastly from one individual to the next (<xref ref-type="bibr" rid="B77">King &amp; Parent, 2001</xref>; <xref ref-type="bibr" rid="B86">Kullaa-Mikkonen, Mikkonen, &amp; Kotilainen, 1982</xref>). For example, some individuals may have a more fissured tongue with more grooving than others, which makes sensor adhesion directly to the median sulcus more difficult. Regarding tongue anatomy, several factors should be considered, including age (namely, adults have a longer tongue than children; <xref ref-type="bibr" rid="B162">Vorperian et al., 2005</xref>), body weight (namely, tongue muscle volume positively correlates with body weight; <xref ref-type="bibr" rid="B147">Stone et al., 2018</xref>), and gender. The effects of the latter are less clear, as some studies have shown that men have significantly larger tongue breadth and volume (<xref ref-type="bibr" rid="B117">Oliver &amp; Evans, 1986</xref>; <xref ref-type="bibr" rid="B92">Mahne et al., 2007</xref>), while others failed to find such an effect, even though men do usually have a larger bony structure (<xref ref-type="bibr" rid="B61">Hopkin, 1967</xref>). Additionally, tongue rhythm and velocity correlate with age (movements are slower and more irregular in the elderly; <xref ref-type="bibr" rid="B52">Hirai et al., 1989</xref>). Finally, different types of tongue movements exist, from hollowing and grooving to pulling back, tipping, heaping, and bunching (<xref ref-type="bibr" rid="B51">Hiiemae &amp; Palmer, 2003</xref>), which impacts the production of different sounds.</p>
</sec>
<sec>
<title>3.3.2. Hard palate, salivary flow rates, and gingival tissue</title>
<p>Aside from considerations related to the tongue itself, restrictions posed by the rest of the oral cavity have to be taken into account when placing intraoral sensors. Particularly relevant in this regard are the hard palate, gingival tissue, and salivary flow rates. Differences between speakers occur in the height, length, slope, width, and curvature of the hard palate (e.g., <xref ref-type="bibr" rid="B12">Brunner, Fuchs, &amp; Perrier, 2009</xref>; <xref ref-type="bibr" rid="B126">Rudy &amp; Yunusova, 2013</xref>; <xref ref-type="bibr" rid="B88">Lammert, Proctor, &amp; Narayanan, 2018</xref>). These differences in palate shape are also responsible for variability in speech production. When comparing the speech produced by individuals with flat, domed, or regular palates, it has been hypothesized that speakers with flat palates have more precise articulations because that is the only way to maintain acoustic consistency (<xref ref-type="bibr" rid="B4">Bakst &amp; Johnson, 2018</xref>; <xref ref-type="bibr" rid="B12">Brunner et al., 2009</xref>). Furthermore, palatal morphology can also account for some variability in tongue positioning (<xref ref-type="bibr" rid="B126">Rudy &amp; Yunusova, 2013</xref>).</p>
<p>Other anatomical considerations include the production of saliva and gingival tissue. Salivary flow rates (i.e., the quantity of saliva) differ greatly across healthy individuals (<xref ref-type="bibr" rid="B169">Whelton, 2012</xref>). This may substantially influence how well intraoral sensors adhere to the tongue and incisors, as the usual cyanoacrylate adhesives (see the description of adhesives above) polymerize after coming into contact with saliva. Moreover, the production of saliva is heavily influenced by external factors, such as degree of hydration or circadian rhythm, but also by minor factors including gender, age, and body weight (<xref ref-type="bibr" rid="B169">Whelton, 2012</xref>). Specifically, men salivate more than women (<xref ref-type="bibr" rid="B65">Inoue et al., 2006</xref>), elderly adults salivate less than middle-aged adults (<xref ref-type="bibr" rid="B110">Navazesh, Mulligan, Kipnis, Denny, P. A., &amp; Denny, P. C., 1992</xref>), and individuals with a higher body mass index have a lower salivary flow rate (<xref ref-type="bibr" rid="B35">Flink, Bergdahl, Tegelberg, Rosenblad, &amp; Lagerl&#246;f, 2008</xref>).</p>
<p>Finally, especially relevant for the attachment of the intraoral jaw-movement and reference sensors, which are usually positioned on or close to the lower and upper incisors, is the amount of gingival tissue above and below the incisors. These two (lower and upper incisor) sensors can be more easily placed when the speaker has a larger gingival surface above and below the incisors. For speakers with a small gingival surface, or for speakers who have a prominent labial frenulum, an alternative sensor placement plan may be considered (e.g., on the chin&#8212;which is non-ideal due to skin movement&#8212;or directly on the incisors as opposed to the gingival tissue).</p>
</sec>
</sec>
<sec>
<title>3.4. Reference sensors</title>
<sec>
<title>3.4.1. Use and positioning</title>
<p>During the post-processing stage of EMA data, positional data from the reference sensors is used to correct for deviations in head position relative to a consistent reference position, usually the occlusal plane. The reference sensors are usually placed as far apart as possible (to minimize the effect of noise on the position estimation of individual sensors) on bony structures with the least skin movement, including the nasion (N), the mastoid processes (i.e., on the bone behind both ears; ML and MR), and the gingival tissue of the upper central or lateral incisors (UI). Our literature review shows that older studies predominantly included two reference sensors placed in the midsagittal plane (i.e., on the nasion and upper incisor), while newer studies often include more.</p>
<p>While reference sensors are usually similar in architecture to movement sensors (i.e., capturing five degrees of freedom, hereinafter 5DOF), NDI has additionally developed a (two-channel) 6DOF sensor in which two 5DOF sensors are integrated at a fixed distance and relative orientation. If a 6DOF sensor is used, it is usually attached to the forehead, and the data of the other sensors are automatically corrected for head movements (measured via the 6DOF sensor). While it is convenient to use only one reference sensor, the potential for noise (induced by skin movement) is greater than with the more commonly used three-sensor setup discussed above.</p>
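The role of the reference sensors in head-movement correction can be illustrated with a minimal sketch: at each sample, an orthonormal head-fixed frame is built from the nasion and the two mastoid sensors, and every movement sensor is re-expressed in that frame, which removes rigid head translation and rotation. The axis conventions and coordinates below are our own assumptions for illustration; actual EMA software additionally rotates the data to the occlusal plane.

```python
import math

def sub(a, b): return [a[i] - b[i] for i in range(3)]
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]
def unit(v):
    n = math.sqrt(dot(v, v))
    return [x / n for x in v]

def head_frame(nasion, mastoid_l, mastoid_r):
    """Head-fixed frame (origin plus three orthonormal axes) built from the
    three reference-sensor positions of a single sample."""
    origin = [(mastoid_l[i] + mastoid_r[i]) / 2 for i in range(3)]  # inter-mastoid midpoint
    x = unit(sub(mastoid_r, mastoid_l))                    # left-to-right axis
    t = sub(nasion, origin)                                # direction toward the nasion
    y = unit([t[i] - dot(t, x) * x[i] for i in range(3)])  # forward axis, orthogonal to x
    z = cross(x, y)                                        # upward axis
    return origin, (x, y, z)

def to_head_coords(p, frame):
    """Re-express a movement-sensor position in the head-fixed frame,
    removing rigid head translation and rotation."""
    origin, axes = frame
    d = sub(p, origin)
    return [dot(d, axis) for axis in axes]
```

Because the frame moves rigidly with the head, a tongue-sensor trajectory expressed this way reflects articulatory movement only.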
</sec>
<sec>
<title>3.4.2. Preparation and adhesion</title>
<p>Reference sensors are prepared differently depending on where they are being placed. Those placed on extraoral structures (i.e., the nasion and mastoid sensors) are generally taped using medical tape. They need to be taped firmly to prevent movement; a small drop of adhesive can additionally be added to achieve this. They can also be coated in latex to make disinfection after the experimental session easier and to prolong sensor longevity. The intraoral reference sensor is usually placed on the gingiva above the upper central or lateral incisors. Section 3.6.2. provides more information on preparing the intraoral incisor reference sensor.</p>
<p>The reference sensors can alternatively be prepared and placed on a pair of goggles, on the frame of a pair of plastic glasses, or on a headband (e.g., <xref ref-type="bibr" rid="B67">Ji, Berry, &amp; Johnson, 2013</xref>; <xref ref-type="bibr" rid="B99">Mefferd, 2019</xref>; <xref ref-type="bibr" rid="B152">Thompson &amp; Kim, 2019</xref>; <xref ref-type="bibr" rid="B75">Kearney et al., 2018</xref>). The Appendix shows additional information regarding individual researchers&#8217; strategies to place reference sensors.</p>
</sec>
</sec>
<sec>
<title>3.5. Tongue sensors</title>
<sec>
<title>3.5.1. Use and positioning</title>
<p>Tongue sensors are used to track tongue movements and investigate the production of a wide range of sounds, from alveolar stops (with a tongue tip sensor) to velars (with a tongue back sensor). Sensors are placed midsagittally unless the researcher specifically wishes to study lateral sounds, in which case one or two sensors may be added on the lateral parts of the tongue.</p>
<p>Concerning tongue sensors, 375 journal studies (out of 412 in total) explicitly mention the number and/or positioning of tongue sensors (as opposed to, e.g., only generally mentioning that tongue sensors were used). Of these 375 studies, 41 (11%) use one tongue sensor, 90 (24%) use two, 165 (44%) use three, 70 (19%) use four, and nine (2%) use five or more. Either two or three sensors on the tongue are thus the most frequent choice, bringing the total number of intraoral sensors to four or five (including the reference sensor on the upper incisors and a jaw-movement sensor on the lower incisors).</p>
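As a sanity check, the percentages above can be recomputed directly from the reported counts; the short sketch below does so (the labels and rounding convention are ours):

```python
# Number of journal studies by number of tongue sensors used, as reported
# in the review (375 studies report this detail).
counts = {"one": 41, "two": 90, "three": 165, "four": 70, "five or more": 9}
total = sum(counts.values())
shares = {label: round(100 * c / total) for label, c in counts.items()}
```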
<p>If three sensors are used, they are usually placed on the tongue tip (TT), tongue middle (TM), and tongue back (TB) along the tongue&#8217;s median sulcus. When three sensors are used, there are two main approaches to dividing the tongue dorsum: either by placing TT and TB according to a predetermined measurement strategy or by spacing the sensors equidistantly (see below and also Table <xref ref-type="table" rid="T2">2</xref>).</p>
<table-wrap id="T2">
<label>Table 2</label>
<caption>
<p>Tongue sensor placement strategies. Percentages are calculated based on the number of studies that use the sensor in question (as defined under the sensor type). The dominant strategy is in bold.</p>
</caption>
<table>
<tr>
<th align="left" valign="top">Sensor</th>
<th align="left" valign="top">Methods of placement</th>
<th align="center" valign="top">Studies (%)</th>
</tr>
<tr>
<td colspan="3"><hr/></td>
</tr>
<tr>
<td align="left" valign="top" rowspan="2">Tongue Tip (TT)</td>
<td align="left" valign="top"><underline>from anatomical tongue tip</underline></td>
<td align="right" valign="top"></td>
</tr>
<tr>
<td align="left" valign="top">&#8804;1 cm (often 0.5 cm)</td>
<td align="right" valign="top">30 (11%)</td>
</tr>
<tr>
<td align="left" valign="top">263 studies (96%) out of 273</td>
<td align="left" valign="top"><bold>1 cm</bold></td>
<td align="right" valign="top"><bold>164 (62%)</bold></td>
</tr>
<tr>
<td align="left" valign="top">use a TT sensor</td>
<td align="left" valign="top">1.1&#8211;2 cm</td>
<td align="right" valign="top">16 (6%)</td>
</tr>
<tr>
<td align="left" valign="top"></td>
<td align="left" valign="top">just behind the tongue tip</td>
<td align="right" valign="top">18 (7%)</td>
</tr>
<tr>
<td align="left" valign="top"></td>
<td align="left" valign="top">other (including not defined)</td>
<td align="right" valign="top">35 (13%)</td>
</tr>
<tr>
<td colspan="3"><hr/></td>
</tr>
<tr>
<td align="left" valign="top">Tongue Back (TB)</td>
<td align="left" valign="top"><bold>as far back (as feasible; as comfortable)</bold></td>
<td align="right" valign="top"><bold>50 (23%)</bold></td>
</tr>
<tr>
<td align="left" valign="top" rowspan="12">216 studies (79%) out of 273 use a TB sensor</td>
<td align="left" valign="top"><underline>behind anatomical tongue tip</underline></td>
<td align="right" valign="top"></td>
</tr>
<tr>
<td align="left" valign="top">&lt;3.5 cm</td>
<td align="right" valign="top">9 (4%)</td>
</tr>
<tr>
<td align="left" valign="top">4&#8211;4.5 cm</td>
<td align="right" valign="top">13 (6%)</td>
</tr>
<tr>
<td align="left" valign="top">5&#8211;5.5 cm</td>
<td align="right" valign="top">10 (5%)</td>
</tr>
<tr>
<td align="left" valign="top">&gt;6 cm</td>
<td align="right" valign="top">2 (1%)</td>
</tr>
<tr>
<td align="left" valign="top"><underline>behind TT sensor</underline></td>
<td align="right" valign="top"></td>
</tr>
<tr>
<td align="left" valign="top">&lt;3 cm</td>
<td align="right" valign="top">5 (2%)</td>
</tr>
<tr>
<td align="left" valign="top">4&#8211;5 cm</td>
<td align="right" valign="top">5 (2%)</td>
</tr>
<tr>
<td align="left" valign="top"><underline>behind TM1 or TM2 sensor</underline></td>
<td align="right" valign="top"></td>
</tr>
<tr>
<td align="left" valign="top">1&#8211;2 cm</td>
<td align="right" valign="top">32 (15%)</td>
</tr>
<tr>
<td align="left" valign="top">other</td>
<td align="right" valign="top">17 (8%)</td>
</tr>
<tr>
<td align="left" valign="top">not defined</td>
<td align="right" valign="top">42 (19%)</td>
</tr>
<tr>
<td colspan="3"><hr/></td>
</tr>
<tr>
<td align="left" valign="top">Tongue Mid (TM)</td>
<td align="left" valign="top"><underline>With 2 or 3 sensors (TT, TM, TB):</underline></td>
<td align="right" valign="top"></td>
</tr>
<tr>
<td align="left" valign="top" rowspan="9">207 studies (76%) out of 273 use one or two TM sensors</td>
<td align="left" valign="top"><bold>midpoint between TT and TB</bold></td>
<td align="right" valign="top"><bold>40 (19%)</bold></td>
</tr>
<tr>
<td align="left" valign="top">1&#8211;2 cm behind TT sensor</td>
<td align="right" valign="top">29 (14%)</td>
</tr>
<tr>
<td align="left" valign="top">3&#8211;3.5 cm behind TT sensor</td>
<td align="right" valign="top">20 (10%)</td>
</tr>
<tr>
<td align="left" valign="top">1&#8211;2 cm behind anatomical tip</td>
<td align="right" valign="top">18 (9%)</td>
</tr>
<tr>
<td align="left" valign="top">3&#8211;3.5 cm behind anatomical tip</td>
<td align="right" valign="top">17 (8%)</td>
</tr>
<tr>
<td align="left" valign="top">4&#8211;5 cm behind anatomical tip</td>
<td align="right" valign="top">15 (7%)</td>
</tr>
<tr>
<td align="left" valign="top"><underline>With 4 or more sensors (TT, TM1, TM2, TB):</underline></td>
<td align="right" valign="top"></td>
</tr>
<tr>
<td align="left" valign="top">midpoint between TT and TB, equal-spaced</td>
<td align="right" valign="top">13 (6%)</td>
</tr>
<tr>
<td align="left" valign="top">other (including not defined)</td>
<td align="right" valign="top">43 (21%)</td>
</tr>
</table>
</table-wrap>
<p>In their placement of the TT sensor, most researchers provide a measurement, with &#8216;approximately 1 cm&#8217; from the anatomical tongue tip as the most popular choice (note that the sensor cannot be placed directly on the tip, where it would interfere significantly with speech production and quickly fall off). Keeping in mind the functional perspective on tongue anatomy, this means that the &#8216;tongue tip&#8217; sensor is in fact placed on the tongue blade rather than the tongue tip. The exact method of measurement (i.e., by ruler, calliper, or simply &#8216;eyeballing&#8217;) is mostly left unspecified. Furthermore, with a few exceptions, studies do not indicate whether the measurements were performed with the tongue comfortably extended, stretched out, or at rest inside the mouth.</p>
<p>Strategies for placing the TB and TM sensors vary more than those for the TT sensor. Some researchers decide on a specific measurement, e.g., placing the TB and TM sensors with 2 cm between each sensor, or placing the TB sensor 4&#8211;5 cm from the TT sensor with the TM sensor in between the two. Others place the TB sensor &#8216;as far back as possible&#8217; and the TM sensor in between. If two TM sensors are used, they are most often defined as being placed equidistantly between the TT and TB sensors.</p>
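The equidistant strategy described above can be made concrete with a small sketch (Python; the distances in centimetres along the tongue midline are hypothetical examples, not prescriptions):

```python
def equidistant_positions(tt_pos_cm, tb_pos_cm, n_mid):
    """Positions (along the tongue midline, in cm from the anatomical tip)
    of n_mid tongue-mid sensors spaced equally between the TT and TB
    sensors, with TT and TB themselves excluded."""
    step = (tb_pos_cm - tt_pos_cm) / (n_mid + 1)
    return [tt_pos_cm + step * (i + 1) for i in range(n_mid)]

# Hypothetical example: TT at 1 cm and TB at 5 cm from the anatomical tip.
one_tm = equidistant_positions(1.0, 5.0, 1)   # single TM sensor: the midpoint
two_tm = equidistant_positions(1.0, 5.0, 2)   # TM1 and TM2, equally spaced
```

With one TM sensor this reduces to the midpoint between TT and TB, which matches the dominant TM placement strategy in Table 2.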
<p>Few studies use lateral sensors (some exceptions include e.g., <xref ref-type="bibr" rid="B64">Howson et al., 2015</xref>; <xref ref-type="bibr" rid="B73">Katz, Mehta, &amp; Wood, 2017</xref>; <xref ref-type="bibr" rid="B151">Thibeault, M&#233;nard, Baum, Richard, &amp; McFarland, 2011</xref>; see the Appendix for a full list of studies using tongue lateral sensors). If lateral sensors are used, they are most often placed to the side of the TM sensor, about 1 cm from the tongue edge.</p>
<p>Table <xref ref-type="table" rid="T2">2</xref> provides an overview of the most common strategies for tongue sensor placement as well as their usage frequency in our literature review. The main strategy for each sensor type is highlighted in bold. In total, 273 out of 375 studies explicitly defined the position of at least one tongue sensor. For more details on which researchers use which strategy, the reader is invited to consult the &#8216;tongue sensors&#8217; tab in the Appendix.</p>
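As an illustration of how the percentages in Table 2 are computed (each count divided by the number of studies using the sensor in question, e.g., 263 for TT), the TT rows can be reproduced with a few lines of Python; the dictionary labels are informal paraphrases of the table rows:

```python
# Studies using a TT sensor (denominator for the TT rows of Table 2).
tt_total = 263

# Raw counts per TT placement strategy, as listed in Table 2.
tt_counts = {
    "<=1 cm from anatomical tip": 30,
    "1 cm from anatomical tip": 164,
    "1.1-2 cm from anatomical tip": 16,
    "just behind the tongue tip": 18,
    "other (including not defined)": 35,
}

# Percentages rounded to whole numbers, as reported in the table.
tt_percent = {k: round(100 * v / tt_total) for k, v in tt_counts.items()}
# The dominant strategy, '1 cm', accounts for 62% of TT-sensor studies.
```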
<p>While not strictly in the purview of this literature review, we would like to mention two recent publications that proposed more data-driven approaches to sensor placement. First, Patem, Illa, Afshan, and Ghosh (<xref ref-type="bibr" rid="B118">2018</xref>) used dynamic programming to determine optimal sensor placement for the sounds of American English based on rtMRI video frames of the vocal tract. Based on data from four participants (two male, two female), they determined that the optimal placement for three tongue sensors is 19.93 &#177; 11.45 mm from the tongue base<xref ref-type="fn" rid="n9">9</xref> for the tongue tip sensor, 38.2 &#177; 11.52 mm from the tongue tip sensor for the tongue middle sensor, and 80.51 &#177; 13.51 mm from the tongue tip sensor for the tongue back sensor.</p>
<p>These measurements are informative for the four participants examined; in practice, however, it would be difficult to measure a participant&#8217;s tongue in such detail and difficult to find participants for whom such measurements are suitable (e.g., placing a tongue back sensor 8 cm from the tongue tip sensor is often not practically possible due to limited tongue length; Patem and colleagues themselves state that they did not consider the level of discomfort in determining optimal sensor locations). Furthermore, it is not possible to accurately determine the tongue base without access to MRI, and the confidence intervals of the reported optimal placements are rather large.</p>
<p>Second, Wang, Samal, Rong, and Green (<xref ref-type="bibr" rid="B164">2016</xref>) used machine learning to determine an optimal set of points needed for classifying speech movements. They determined that for classifying most sounds (including both vowels and consonants), a set of four sensors (tongue tip, tongue back, upper lip, and lower lip) suffices. This is especially informative when studying the speech of clinical populations, since in those circumstances it is often desirable to use the minimal number of sensors to limit the burden on the participants.</p>
</sec>
<sec>
<title>3.5.2. Preparation and adhesion</title>
<p>Few studies mention the preparation of tongue sensors prior to placement. However, no conclusions can be drawn from this, as some researchers might simply not mention the specifics of sensor preparation due to manuscript length limitations or a perceived lack of interest from the readers. We could nonetheless identify some tongue sensor preparation options. Note that the tongue itself is also often &#8216;prepared,&#8217; as it is dried to improve sensor adhesion (see also Section 4 for our drying procedure). First, some researchers adhere the sensors to the tongue without any preparation (i.e., using bare or out-of-the-box sensors).</p>
<p>Another option is to coat the sensors in latex before adhesion, a frequently used approach (<xref ref-type="bibr" rid="B30">Earnest &amp; Max, 2003</xref>). This method is suggested on the website of the Carstens articulograph (<xref ref-type="bibr" rid="B31">Electromagnetic Articulograph, 2019</xref>), which indicates that Plasty late latex milk (Glorex GmbH) is a suitable product for coating the sensors. The latex coating, they report, keeps the sensors clean and free of glue residue. The Carstens AG500 manual (<xref ref-type="bibr" rid="B21">2006</xref>) additionally states, in the &#8216;Cleaning and disinfection of sensors&#8217; section, that coating the sensors in latex is recommended, as the latex can simply be peeled off after testing. According to Carstens, sensors can (and, if possible, should) be coated in latex for use on other facial surfaces, not just lingual ones, as this increases sterility and sensor longevity. Latex coating should also increase the longevity of (reusable) NDI Vox sensors (NDI, personal communication).</p>
<p>The third approach for preparing tongue sensors consists of increasing the sensor size to increase the adhesion surface and thereby potentially increasing the sensor adhesion duration. This can be done, for example, by placing small pieces of silk between the sensor and lingual surfaces (e.g., <xref ref-type="bibr" rid="B67">Ji et al., 2013</xref>; <xref ref-type="bibr" rid="B44">Gooz&#233;e, Murdoch, Theodoros, &amp; Stokes, 2000</xref>; <xref ref-type="bibr" rid="B37">Fuchs, 2005</xref>), gluing a small transparent layer of plastic to the bottom of the sensors (e.g., <xref ref-type="bibr" rid="B173">Wieling, Veenstra, Adank, Weber, &amp; Tiede, 2015</xref>), or covering the head of the sensors with a small, thin flap of latex (our approach; see Section 4).</p>
<p>We carried out an experiment comparing these three approaches to tongue sensor adhesion; it is reported in Section 5.</p>
</sec>
</sec>
<sec>
<title>3.6. Jaw-movement sensors</title>
<sec>
<title>3.6.1. Use and positioning</title>
<p>Jaw movements can be tracked with either an intraoral sensor adhered to the lower incisors or an extraoral sensor adhered to the chin. The former is preferred, as the position of a chin sensor is also affected by skin movement during speaking. Of the 286 studies that use a sensor to track jaw movement, 214 (75%) use a sensor on (or near) the lower incisors, compared to 72 (25%) that use a sensor on the chin. Note, however, that the placement of incisor sensors also differs: While most researchers refer to placement on the &#8216;incisors,&#8217; only a few place the sensor on the incisors themselves (i.e., on the teeth); most place it on the gingival tissue below the incisors.</p>
<p>Most studies use only one jaw movement sensor. However, some have used several (e.g., <xref ref-type="bibr" rid="B164">Wang et al., 2016</xref>, who placed three sensors on the jaw; <xref ref-type="bibr" rid="B106">Mooshammer, Tiede, Shattuck-Hufnagel, &amp; Goldstein, 2019</xref>, who placed two sensors on the lower gumline, one below the front incisors and one below the left premolar; <xref ref-type="bibr" rid="B98">Mefferd, 2017</xref>, who placed three sensors on the lower gumline; or <xref ref-type="bibr" rid="B105">Mooshammer, Hoole, &amp; Geumann, 2007</xref>, who placed two sensors on the outer and inner surface of the lower gumline and one sensor on the chin). Note that even with a single sensor, jaw movements can easily be tracked, but they are often hard to decouple from tongue and lower lip movement (e.g., <xref ref-type="bibr" rid="B49">Henriques &amp; van Lieshout, 2013</xref>), as components of jaw movement are also present in tongue and lip movements. Furthermore, as the jaw is a rigid body, at least two 5DOF sensors are necessary to correctly track its orientation relative to the head.</p>
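The rigid-body point can be illustrated with a minimal sketch (Python, with hypothetical coordinates rather than real NDI output): a single sensor yields only a position, whereas the line through two jaw sensors defines a direction whose angle relative to a head-frame axis can be followed over time.

```python
import numpy as np

def jaw_axis_angle(p1, p2, ref_axis=(1.0, 0.0, 0.0)):
    """Angle (degrees) between the line through two jaw-mounted sensors
    and a reference axis in head coordinates. One sensor gives only a
    position; two sensors define a direction, the minimum needed to
    track the jaw's orientation."""
    v = np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)
    v = v / np.linalg.norm(v)                  # unit vector through the sensors
    u = np.asarray(ref_axis, dtype=float)
    u = u / np.linalg.norm(u)                  # unit reference axis
    cos_angle = np.clip(np.dot(v, u), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_angle)))
```

Tracking this angle frame by frame gives an orientation trace that a single-sensor recording cannot provide.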
</sec>
<sec>
<title>3.6.2. Preparation and adhesion</title>
<p>If the jaw-movement sensor is placed extraorally, most frequently on the chin, no special preparation is mentioned in the reviewed studies (although the sensors can be coated in latex to increase sterility and longevity). In contrast, our literature review revealed several methods of preparing an intraoral jaw sensor (and the intraoral reference sensor). These methods include using the same dental adhesive as on the tongue, creating a custom dental mould of the incisor to which the sensor is adhered (e.g., <xref ref-type="bibr" rid="B142">Steele &amp; van Lieshout, 2004</xref>; <xref ref-type="bibr" rid="B143">Steele, van Lieshout, &amp; Pelletier, 2012</xref>), or adhering the sensor to a piece of Stomahesive wafer (e.g., <xref ref-type="bibr" rid="B98">Mefferd, 2017</xref>; <xref ref-type="bibr" rid="B9">Berry, Kolb, Schroeder, &amp; Johnson, 2017</xref>; <xref ref-type="bibr" rid="B29">Dromey et al., 2018</xref>). The latter approach&#8212;using Stomahesive&#8212;increases the surface of the sensor as well as its adhesion to the participant&#8217;s gingival tissue due to the nature of the material. As this is the method used in our lab, there are further details on the preparation of Stomahesive-covered sensors in Section 4.</p>
</sec>
</sec>
<sec>
<title>3.7. Lip sensors</title>
<sec>
<title>3.7.1. Use and positioning</title>
<p>Lip sensors are generally placed on the vermilion border of the upper and lower lips. Data obtained from these sensor positions make it possible to estimate phonetically relevant variation in lip aperture or lip protrusion (e.g., between bilabial stops and fricatives, or between rounded and unrounded vowels). In some cases, such as when a study focuses on lip movements specifically, additional lip sensors are attached at the right and/or left lip corners (e.g., <xref ref-type="bibr" rid="B96">Meenakshi &amp; Ghosh, 2018</xref>; <xref ref-type="bibr" rid="B125">Rong et al., 2012</xref>; <xref ref-type="bibr" rid="B24">Cler, Lee, Mittelman, Stepp, &amp; Bohland, 2017</xref>).</p>
</sec>
<sec>
<title>3.7.2. Preparation and adhesion</title>
<p>Lip sensors can be bare or coated with latex (to increase hygiene and longevity, as these sensors come in contact with saliva). If more than two lip sensors are used, latex-coated sensors are likely to affect articulation due to their larger size. Most often, lip sensors are adhered with a piece of tape. To increase adhesion, a small drop of adhesive can additionally be added, which ensures that the sensors stay firmly attached for the duration of the experiment. This is especially important if the medical tape does not stick adequately (e.g., due to the participant&#8217;s sweat or repeated large labial movements in stimuli targeting plosives).</p>
</sec>
</sec>
</sec>
<sec>
<title>4. EMA data collection in practice: A suggested procedure</title>
<p>In this section, we provide a practical description of the data collection procedure employed in our lab at the University of Groningen. Our approach is only one of many possible strategies available to researchers who collect speech production data with EMA, as illustrated in the previous section. The description includes details that are important but often omitted from publications.</p>
<sec>
<title>4.1. Preparation of the sensors using latex</title>
<p>In the procedure used in our lab, all sensors are prepared at least half a day before the experiment. In this preparation stage, we distinguish between three types of sensors: (1) the extraoral sensors (identified with MR, ML, N, UL, and LL, below) plus the sensors attached to the tongue (TM and TT), except for the most posterior tongue sensor, (2) the most posterior tongue sensor (TB), and (3) the sensors attached close to the incisors on the upper and lower gums (UI and LI). We check the sensors for any visible defects (e.g., broken wire) before using them.</p>
<p>The first group of sensors is prepared by dipping each of them in mask-making latex (RD 407 Mask Making Latex, Monster Makers). The TB sensor is prepared similarly but with an additional latex flap cover (see Section 5.1), which increases the surface of the sensor and may be beneficial for the adhesion duration (see Section 5.5). Finally, the UI and LI sensors are prepared using a Stomahesive wafer (ConvaTec PLC): a small rectangular piece of Stomahesive measuring about 10 mm &#215; 6 mm is cut, the sensor is placed on top of it, and a drop of latex is applied to make it adhere (Figure <xref ref-type="fig" rid="F4">4</xref>, left and right). Early preparation is necessary, as the latex takes several hours to dry completely. However, the sensors should not be prepared too early (e.g., a week in advance), as the latex becomes less flexible with time and more difficult to remove. In case of re-use, we disinfect sensors first using SPORECLEAR Medical Device Disinfectant (Hu-Friedy Mfg. Co., LLC) and then wipe them with an alcohol wipe before storing them.</p>
<fig id="F4">
<label>Figure 4</label>
<caption>
<p>Preparation of the incisor sensors with Stomahesive.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="/article/id/6289/file/75945/"/>
</fig>
</sec>
<sec>
<title>4.2. Preparation and attachment of reference sensors</title>
<p>After checking that participants are not pregnant, do not have a pacemaker, and do not have a latex allergy, our data collection procedure is as follows. All sensors are screwed into the miniature terminal blocks of the NDI Wave (or, in the case of the NDI Vox, plugged into the sensor harness assembly), wiped with an alcohol wipe, and placed on a sterilized tray shortly before the participant&#8217;s arrival. We perform a sensor validation check, verifying that each sensor that is screwed in functions as it should. Once participants arrive, we first ask them to take a disposable toothbrush and scrub their tongue (especially along the midline). They do this in front of a mirror, so that they are aware of how far back they are reaching and do not trigger their gag reflex. Scrubbing removes the coating that covers the tongue (the amount of coating differs per participant<xref ref-type="fn" rid="n10">10</xref>). We subsequently ask the participant to remove jewellery, glasses, and hearing aids, when applicable, as these make sensor placement more difficult and could potentially interfere with the signal (the presence of metal inside the magnetic field has a negative effect on the precision of the recovered sensor positions). Glasses and hearing aids are returned to the participant once sensor placement is complete if they are necessary for successful participation in the experiment.</p>
<p>We additionally ask participants whether they are wearing dentures, as these may move slightly during speaking, which could result in some wire pull for sensors placed on the gingival tissue. Since dentures cannot be removed without impeding articulation, we note their presence but otherwise do not ask the participant to remove them. Additionally, if possible, participants should shave before the experiment and avoid wearing makeup as this makes sensor placement more difficult.</p>
<p>Subsequently, the participant is asked to sit down next to the EMA field generator (we previously used the NDI Wave system but have recently moved to the NDI Vox system). We first place four prepared reference sensors:<xref ref-type="fn" rid="n11">11</xref></p>
<list list-type="simple">
<list-item><p>&#8211; mastoid right (MR)</p></list-item>
<list-item><p>&#8211; mastoid left (ML)</p></list-item>
<list-item><p>&#8211; nasion (N)</p></list-item>
<list-item><p>&#8211; (close to the) upper incisor (UI)</p></list-item>
</list>
<p>All sensors (reference and others) are first held in reverse action tweezers (Hobbycraft), as these make it easier to apply the sensors to the participant. The first three reference sensors are applied after the researcher has sterilized their hands using Sterilium&#174; (Medline). Before placing any intraoral sensors, the researcher puts on (latex) dental gloves and a dental mask.<xref ref-type="fn" rid="n12">12</xref></p>
<p>The mastoid sensors (ML and MR) are placed behind the participant&#8217;s ears on the skin covering the mastoid part of the temporal bone, where there is minimal skin movement (Figure <xref ref-type="fig" rid="F5">5</xref>).</p>
<fig id="F5">
<label>Figure 5</label>
<caption>
<p>Mastoid sensor (placed below the glasses).</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="/article/id/6289/file/75946/"/>
</fig>
<p>The nasion sensor (N; Figure <xref ref-type="fig" rid="F6">6</xref>) is placed where there is the least skin creasing. If the participant is wearing glasses, the sensor is placed right above or below the glasses, depending on the size of the frame. The first three sensors are secured with a drop of glue. We use PeriAcryl&#174;90 HV adhesive (GluStitch Inc), which is kept in the fridge (at ~2&#176;C) until the participant&#8217;s arrival. At that moment, two to three drops of adhesive are added to a small plastic mixing well (Maxill Inc.), after which the adhesive is returned to the fridge. A small disposable plastic pipette is used to transfer the adhesive from the mixing well to the sensor.</p>
<fig id="F6">
<label>Figure 6</label>
<caption>
<p>Nasion sensor.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="/article/id/6289/file/75947/"/>
</fig>
<p>The sensor wires are adhered to the participant using Leukopor or Leukosilk tape (BSN medical GmbH). A piece of tape is additionally placed over the ML and MR sensors to secure them (see tape in Figure <xref ref-type="fig" rid="F5">5</xref>). We add a piece of tape to the N sensor but place it slightly higher on the forehead (see tape in Figure <xref ref-type="fig" rid="F6">6</xref>), as it otherwise disturbs the participant&#8217;s visual field.</p>
<p>The final reference sensor (UI), on top of its piece of Stomahesive, is attached to the gingiva above the left upper incisor. No glue is added to the Stomahesive, as it adheres to tissue by itself. We avoid placing any incisor sensors on the midsagittal line, directly above the central incisors, because of the labial frenulum, which connects the upper lip to the gingival tissue and is quite sensitive. The UI sensor placement relative to the labial frenulum can be seen in Figure <xref ref-type="fig" rid="F7">7</xref>.</p>
<fig id="F7">
<label>Figure 7</label>
<caption>
<p>Upper incisor sensor placement.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="/article/id/6289/file/75948/"/>
</fig>
<p>After the reference sensors have been placed, the palate trace and biteplate recordings follow. These are crucial (particularly the biteplate recording) to ensure the quality of the collected data. For the palate trace, we adhere one spare sensor to the tip of the participant&#8217;s dominant thumb using Leukopor tape (so that the sensor wires lead down the thumb, pointing towards the wrist) and instruct them to trace the palate with the thumb, from the back of the hard palate to their front teeth. The purpose of this procedure as well as the tracing method are explained by means of a mouth puppet (Super Duper&#174; Publications; Figure <xref ref-type="fig" rid="F8">8</xref>), which, due to its cartoonish look, is also useful in decreasing participants&#8217; potential anxiety. The palate trace is performed twice.</p>
<fig id="F8">
<label>Figure 8</label>
<caption>
<p>Mouth puppet with attached sensors is very useful in explaining EMA.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="/article/id/6289/file/75949/"/>
</fig>
<p>For the biteplate recording, we created a (reusable) fixed triangular protractor with three sensors glued to it (Figures <xref ref-type="fig" rid="F9">9</xref> and <xref ref-type="fig" rid="F10">10</xref>). The same protractor is used for all participants; it is wiped with an alcohol wipe <italic>before</italic> every use and disinfected with SPORECLEAR Medical Device Disinfectant (Hu-Friedy Mfg. Co., LLC) <italic>after</italic> every use. The protractor is pushed as far back as comfortable into the corners of the participant&#8217;s mouth. The participant is then asked to hold the protractor firmly between their teeth and sit still for a few seconds while the biteplate recording is made. The protractor must be in contact with the molars in order to obtain a true occlusal reference. We check the biteplate recording directly by comparing the Euclidean distances between all the reference sensors and the three sensors on the biteplate, using MATLAB (MathWorks Inc.). If these distances remain relatively constant over time, the positions of the reference sensors and the biteplate sensors are being tracked correctly.</p>
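Our check is implemented in MATLAB; the same idea can be sketched in Python (the array shapes and the 0.5 mm tolerance are illustrative assumptions, not our lab&#8217;s exact script):

```python
import numpy as np

def check_rigid_distances(ref_xyz, bite_xyz, tol_mm=0.5):
    """Range of each reference-to-biteplate sensor distance over time.

    ref_xyz:  (frames, n_ref, 3) reference sensor positions in mm
    bite_xyz: (frames, n_bite, 3) biteplate sensor positions in mm
    Returns (spread, ok): the per-pair max-min distance over all frames,
    and whether every pair stays within tol_mm, i.e., whether the sensors
    behave as points on rigid bodies that are being tracked consistently.
    """
    # Pairwise Euclidean distances for every frame: (frames, n_ref, n_bite).
    diff = ref_xyz[:, :, None, :] - bite_xyz[:, None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    # If tracking is correct, each pair's distance varies only by noise.
    spread = dist.max(axis=0) - dist.min(axis=0)   # (n_ref, n_bite)
    return spread, bool((spread < tol_mm).all())
```

A large spread for one sensor pair points at a mis-tracked or loose sensor rather than at the biteplate recording as a whole.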
<fig id="F9">
<label>Figure 9</label>
<caption>
<p>Biteplate protractor with three attached sensors.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="/article/id/6289/file/75950/"/>
</fig>
<fig id="F10">
<label>Figure 10</label>
<caption>
<p>Biteplate protractor in use.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="/article/id/6289/file/75951/"/>
</fig>
</sec>
<sec>
<title>4.3. Attachment of movement sensors</title>
<p>After the palate trace and biteplate recordings, we proceed with attaching sensors to the articulators that we wish to capture. Most frequently, these sensors are the following (listed in the order of placement):</p>
<list list-type="simple">
<list-item><p>&#8211; tongue back (TB)</p></list-item>
<list-item><p>&#8211; tongue mid (TM)</p></list-item>
<list-item><p>&#8211; tongue tip (TT)</p></list-item>
<list-item><p>&#8211; lower incisor (LI)</p></list-item>
<list-item><p>&#8211; upper lip (UL)</p></list-item>
<list-item><p>&#8211; lower lip (LL)</p></list-item>
</list>
<p>To determine where to place the tongue back sensor, we use a colour transfer applicator stick (Dr. Thompson&#8217;s, GUNZdental). We ask the participant to drag the stick midsagittally across the midline of their hard palate (as they had done before with the palate trace sensor), pronounce the velar /k/, and then directly stick out their tongue.<xref ref-type="fn" rid="n13">13</xref> They are asked not to swallow while their tongue is being marked. The colour from the applicator is transferred from the palate to the part of the tongue where the back-most (velar) sound is made. We use the same stick to draw a coronal line through this spot. Additionally, we use measuring tape to measure 1 cm from the tongue tip (with the tongue stretched) and draw a coronal line through that point as well. The coronal lines enable us to re-adhere a sensor to approximately the same position if it starts coming loose: The point might become smudged through speaking and swallowing, but the line will remain clearly visible. Figure <xref ref-type="fig" rid="F11">11</xref> below shows the coronal lines on the tongue left by the colour transfer applicator stick, with the median sulcus still clearly visible.</p>
<fig id="F11">
<label>Figure 11</label>
<caption>
<p>Indicatory markings for sensor placement.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="/article/id/6289/file/75952/"/>
</fig>
<p>The participant can now swallow as the coronal lines will remain clear, even when they come in contact with saliva. The participants are asked to stick out their tongue as far as comfortable. We place barber tape (Comair GmbH, folded three times to contain at least eight layers) on the back line marking on participant&#8217;s tongue, dab the tape on the tongue for about 5&#8211;10 seconds, and finally drag the tape across the tongue. This procedure dries the tongue dorsum and is crucial in ensuring that sensors do not fall off easily. We hold each sensor in the tweezers and add a drop of adhesive using a small plastic disposable pipette before placing the sensor on the tongue.</p>
<p>The TB sensor is placed on the crossing between the marked posterior line and the median sulcus, so that the wire of the sensor is pointing downward and towards the lip corner. A disposable wooden tongue depressor (Tegler) is used to press the sensor to the tongue for 10&#8211;20 seconds. The wire is then secured to the cheek using Leukopor tape. It is essential that the wires have enough slack, as large speech gestures may otherwise lead to wire tension, which is uncomfortable for the participant and may cause the sensor to come loose. The process is repeated for the TT sensor, which is placed on the crossing between the marked anterior line and the median sulcus. Note that the TT sensor is positioned in such a way that the wire is pointed towards the side of the tongue, as a wire running over the tongue tip feels uncomfortable for the participant and leads to lisping (<xref ref-type="bibr" rid="B59">Hoole &amp; Nguyen, 1999</xref>).</p>
<p>The tongue mid sensor is placed halfway between the marked lines for the TT and TB sensors on the median sulcus by eyeballing. In line with previous methodological considerations (see Section 3.5.1), we generally do not use the TM sensor when testing clinical populations or children. If we are using lateral sensors, we place these to the right and left side of the TM sensor, 0.5&#8211;1 cm from the edge of the tongue (depending on how wide or narrow the participant&#8217;s tongue is). We only place more than three sensors if that is required for the purposes of the study. The final intraoral sensor (LI) tracks the jaw movement. This sensor, prepared with Stomahesive, is attached to the gingiva below the right lower incisor. No additional glue is needed, as Stomahesive adheres to tissue by itself.</p>
<p>Finally, two lip sensors (UL and LL) are attached at the vermilion border of the upper and lower lip using a drop of dental adhesive. Depending on the amount of facial hair surrounding the upper and lower lip, the removal of the lip sensors can cause mild discomfort.</p>
</sec>
</sec>
<sec>
<title>5. Sensor adhesion experiment</title>
<sec>
<title>5.1. Aim</title>
<p>The present experiment tested how different preparation methods for EMA sensors affect adherence to the tongue. As discussed in Section 3.5.2, several methods for sensor preparation exist. We focus specifically on the tongue sensors, as these are the most likely to come off quickly. The aim of this experiment was therefore to determine which type of sensor preparation (see below) is most beneficial for adhesion, and whether this depends on the position on the tongue. In addition, we evaluated qualitatively whether the participant&#8217;s tongue anatomy influences adhesiveness.</p>
<p>We tested three types of sensor preparation: out-of-the-box (&#8216;bare&#8217;) sensors, latex-coated sensors, and sensors with a latex flap. Out-of-the-box sensors (Figure <xref ref-type="fig" rid="F12">12</xref>, left) are the sensors as provided by NDI for the Wave device (approximate surface: 30 mm<sup>2</sup>); latex-coated sensors (Figure <xref ref-type="fig" rid="F12">12</xref>, center) are dipped in latex (only a slightly larger surface than the out-of-the-box sensors, but with rounder edges); and sensors with a latex flap (Figure <xref ref-type="fig" rid="F12">12</xref>, right) are covered in the same latex, but a brush is used to apply the latex while the sensor head is lying on a flat surface (approximate surface: 70 mm<sup>2</sup>).</p>
<fig id="F12">
<label>Figure 12</label>
<caption>
<p>Sensor preparation types (from left to right: out of the box, latex-coated, latex flap).</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="/article/id/6289/file/75953/"/>
</fig>
</sec>
<sec>
<title>5.2. Participants and experimental procedure</title>
<p>To compare these three types of sensor preparation, we tested 10 female adult participants, each in three separate sessions. All 10 participants were between 20 and 30 years of age. The study was approved by the Faculty of Arts Research Ethics Review Committee of the University of Groningen (approval number 71276154).</p>
<p>For each of the three sessions, we used one type of sensor and followed the same application procedure for each type (as described in Section 4 above). The sessions took place on three different days, thus avoiding glue residue and tongue fatigue, both of which could have influenced the resulting adhesion times. During the first session, we adhered the out-of-the-box sensors; during the second, the latex-coated sensors; and during the final session, the sensors with the latex flap.</p>
<p>During every session, we placed five sensors on the tongue, as this is the maximum number of tongue sensors used by researchers (see Section 3.5.1 and the Appendix). The sensors were placed on the tongue tip (TT; 1 cm from the tip), tongue back (TB; place of /k/ constriction), tongue middle (TM; between TT and TB), and tongue lateral right and left (TLR and TLL, placed to the right and left of the TM sensor, respectively). While few studies investigate lateral sounds (see above), we wished to assess whether the different types of sensor preparation are also suitable for studying lateral tongue movement, as the lateral parts of the tongue move differently and sensors placed there are more prone to interference from the participants&#8217; molars. Figure <xref ref-type="fig" rid="F13">13</xref> below displays sensor placement examples for latex-coated sensors.</p>
<fig id="F13">
<label>Figure 13</label>
<caption>
<p>Sensor placement during adhesion experiment.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="/article/id/6289/file/75954/"/>
</fig>
<p>The sensor placement process took approximately ten minutes. After we placed the five sensors on the tongue, we started displaying the stimuli to the participants using Microsoft PowerPoint on a computer monitor in front of them. The articulograph was not turned on for this experiment, as we were not collecting kinematic data and merely wished to determine how long it took for each sensor to fall off.</p>
</sec>
<sec>
<title>5.3. Stimuli</title>
<p>The experimental procedure consisted of the following tasks and stimuli. First, the participants read the short text <italic>Please call Stella</italic> from the <italic>Speech Accent Archive</italic> (<xref ref-type="bibr" rid="B165">Weinberger, 2015</xref>). This allowed them to get used to speaking with sensors in their mouth (i.e., the sensor habituation stage) and took approximately one minute. We did not include a longer habituation stage, as our goal was not to record the participants&#8217; natural speech. The text was followed by a wordlist of 300 words of varying lengths and from various thematic fields (e.g., vegetables, fruit, school, vocations). Each word appeared on the screen for four seconds, during which the participant read it aloud; this task lasted 20 minutes. Finally, at the end of the wordlist, the participants performed a rapid syllable repetition task, namely the diadochokinesis (DDK) task, at a comfortable but fast speaking pace as defined by the participants themselves. The DDK task involved the repetition of the syllables /pa/, /ta/, /ka/, and /pataka/, and was included because fast repetitive movements may cause the sensors to fall off faster. The experiment was considered complete once all sensors had detached or once the three tasks had been completed twice, which took about 45 minutes. When a sensor fell off (participants were instructed to inform us when they felt a sensor come loose), we removed it and noted the time at which it fell off. We did not re-attach any sensors.</p>
<p>The experimental procedure, including participant preparation (five minutes) and sensor placement (ten minutes), took 60 minutes at most. At that point, we stopped the experiment and removed the remaining sensors. The maximum time a sensor was adhered to a person was therefore 45 minutes. The experimental procedure is schematically presented in Figure <xref ref-type="fig" rid="F14">14</xref>.</p>
<fig id="F14">
<label>Figure 14</label>
<caption>
<p>Experimental procedure.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="/article/id/6289/file/75955/"/>
</fig>
</sec>
<sec>
<title>5.4. Anatomical measurements of the tongue</title>
<p>For all participants, we measured the relative tongue length, tongue width, and maximal mouth opening. All three measurements were taken with a ruler while the participant&#8217;s tongue was comfortably extended. First, we measured the relative tongue length, defining it as the distance between the anatomical tongue tip and the place we had marked as the place of /k/ constriction. Second, we measured the tongue width, defined as the widest part of the tongue, parallel to the molars. Finally, we asked the participants to open their mouth as wide as they comfortably could and measured the vertical distance between the surface of the tongue and the edge of their upper central incisors. We defined this as &#8216;mouth opening,&#8217; which in effect represents the maximum intraoral space that the researcher can work with during the sensor placement procedure.</p>
<p>Due to the lack of suitable equipment, we were not able to measure the participants&#8217; salivary flow rate or take any other anatomical measurements.</p>
</sec>
<sec>
<title>5.5. Statistical analysis and results</title>
<p>To assess the potential effects of sensor preparation method and sensor position on sensor adhesiveness, we used linear mixed-effects regression modelling with participant as a random-effect factor; the optimal random-effects structure (i.e., whether to include random intercepts and slopes) was determined via model comparison. Specifically, we evaluated whether sensor preparation type (OUT-OF-THE-BOX, LATEX-COATED, FLAP) and sensor position (TT, TM, TB, TLL, TLR) affected sensor adhesiveness. As our initial analysis appeared to show a clear distinction between the TB sensor (which adhered for a much shorter duration than the other sensors, whose adhesion times did not differ significantly from each other; see Figure <xref ref-type="fig" rid="F15">15</xref>) and the remaining sensors, we created a new fixed-effect predictor distinguishing the TB sensor from the other sensors.</p>
<fig id="F15">
<label>Figure 15</label>
<caption>
<p>Effect of sensor position on adhesiveness.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="/article/id/6289/file/75956/"/>
</fig>
<p>The best model for our data, determined via model comparison, only warranted the inclusion of the distinction between the TB sensor and the other sensors, in addition to a by-subject random intercept and a by-subject random slope for the contrast between the TB and the other sensors. Specifically, this model showed that the TB sensor adhered for approximately 14 minutes less than the other sensors (<italic>&#946;</italic> = &#8211;14.0, <italic>t</italic> = &#8211;5.0, <italic>p</italic> &lt; 0.001). Sensor preparation type (see Figure <xref ref-type="fig" rid="F16">16</xref>) did not reach significance in the best model, nor did any of the anatomical predictors. Of course, this may be partly due to our limited sample size (N = 10).</p>
<fig id="F16">
<label>Figure 16</label>
<caption>
<p>Effect of sensor preparation type on adhesiveness.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="/article/id/6289/file/75957/"/>
</fig>
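<p>The modelling approach described in this section can be sketched in code. The following is an illustrative sketch only, not the authors&#8217; published analysis: it simulates data in the shape of the design (10 participants, three preparation types, five tongue sensors, a simulated &#8211;14-minute TB effect) and fits a mixed-effects model with a by-participant random intercept and a random slope for the TB/non-TB contrast, here using the Python statsmodels library. All variable names and simulated values are assumptions.</p>

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulate long-format data: 10 participants x 3 preparations x 5 sensors,
# with a by-participant offset and a -14 minute effect for the TB sensor.
rows = []
for p in range(10):
    subj_offset = rng.normal(0, 5)
    for prep in ["bare", "latex", "flap"]:
        for sensor in ["TT", "TM", "TB", "TLL", "TLR"]:
            base = 35 + subj_offset - (14 if sensor == "TB" else 0)
            minutes = min(45.0, max(0.0, base + rng.normal(0, 6)))
            rows.append(dict(participant=p, prep=prep,
                             sensor=sensor, minutes=minutes))
df = pd.DataFrame(rows)

# Binary predictor distinguishing TB from all other sensors,
# mirroring the recoding described in Section 5.5.
df["is_tb"] = (df["sensor"] == "TB").astype(int)

# By-participant random intercept plus random slope for the TB contrast;
# ML fit (reml=False) so nested models could be compared.
model = smf.mixedlm("minutes ~ is_tb", df,
                    groups=df["participant"], re_formula="~is_tb")
result = model.fit(reml=False)
print(result.params["is_tb"])  # fixed-effect estimate for the TB contrast
```

In practice, candidate models (with and without the random slope, or with sensor preparation type as an additional fixed effect) would be compared via likelihood-ratio tests or information criteria, as described above.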
<p>When explicitly focusing on the interaction between sensor preparation type and sensor position (TB versus the other sensors), the flap appeared to be detrimental to the adhesion of the non-TB sensors, reducing the estimated adhesion time by about five minutes compared to the bare sensor and by about three minutes compared to the latex-coated sensor. For the TB sensor, however, the opposite pattern was found: the estimated adhesion time of the sensor with the flap was about five minutes longer than that of the latex-coated sensor and more than nine minutes longer than that of the bare sensor. Figure <xref ref-type="fig" rid="F17">17</xref> shows this interaction; Table <xref ref-type="table" rid="T3">3</xref> shows speaker-specific differences in sensor adhesion times.</p>
<fig id="F17">
<label>Figure 17</label>
<caption>
<p>Visualization of the interaction between sensor position (TB is the tongue back sensor and Non-TB are all other sensors) and sensor preparation type (out-of-the-box, latex, and flap).</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="/article/id/6289/file/75958/"/>
</fig>
<table-wrap id="T3">
<label>Table 3</label>
<caption>
<p>Speaker-specific differences in sensor adhesiveness (distinction between non-TB and TB sensors; adhesion time is reported in minutes). For the Non-TB sensors the values are averaged and the <italic>SD</italic> is shown between parentheses.</p>
</caption>
<table>
<tr>
<th align="left" valign="top" rowspan="3">Participant</th>
<th align="center" valign="top" colspan="2">Bare</th>
<th align="center" valign="top" colspan="2">Latex</th>
<th align="center" valign="top" colspan="2">Latex flap</th>
</tr>
<tr>
<th colspan="6"><hr/></th>
</tr>
<tr>
<th align="center" valign="top">Non-TB</th>
<th align="center" valign="top">TB</th>
<th align="center" valign="top">Non-TB</th>
<th align="center" valign="top">TB</th>
<th align="center" valign="top">Non-TB</th>
<th align="center" valign="top">TB</th>
</tr>
<tr>
<td colspan="7"><hr/></td>
</tr>
<tr>
<td align="left" valign="top">P01</td>
<td align="right" valign="top">24 (13)</td>
<td align="right" valign="top">18</td>
<td align="right" valign="top">29 (11)</td>
<td align="right" valign="top">23</td>
<td align="right" valign="top">39 (12)</td>
<td align="right" valign="top">3</td>
</tr>
<tr>
<td align="left" valign="top">P02</td>
<td align="right" valign="top">14 (21)</td>
<td align="right" valign="top">1</td>
<td align="right" valign="top">13 (11)</td>
<td align="right" valign="top">16</td>
<td align="right" valign="top">24 (8)</td>
<td align="right" valign="top">45</td>
</tr>
<tr>
<td align="left" valign="top">P03</td>
<td align="right" valign="top">45 (0)</td>
<td align="right" valign="top">3</td>
<td align="right" valign="top">45 (0)</td>
<td align="right" valign="top">6</td>
<td align="right" valign="top">45 (0)</td>
<td align="right" valign="top">37</td>
</tr>
<tr>
<td align="left" valign="top">P04</td>
<td align="right" valign="top">42 (3)</td>
<td align="right" valign="top">33</td>
<td align="right" valign="top">30 (16)</td>
<td align="right" valign="top">25</td>
<td align="right" valign="top">19 (3)</td>
<td align="right" valign="top">3</td>
</tr>
<tr>
<td align="left" valign="top">P05</td>
<td align="right" valign="top">40 (11)</td>
<td align="right" valign="top">40</td>
<td align="right" valign="top">44 (3)</td>
<td align="right" valign="top">19</td>
<td align="right" valign="top">43 (5)</td>
<td align="right" valign="top">4</td>
</tr>
<tr>
<td align="left" valign="top">P06</td>
<td align="right" valign="top">29 (11)</td>
<td align="right" valign="top">1</td>
<td align="right" valign="top">12 (14)</td>
<td align="right" valign="top">1</td>
<td align="right" valign="top">10 (11)</td>
<td align="right" valign="top">1</td>
</tr>
<tr>
<td align="left" valign="top">P07</td>
<td align="right" valign="top">27 (19)</td>
<td align="right" valign="top">0</td>
<td align="right" valign="top">19 (15)</td>
<td align="right" valign="top">1</td>
<td align="right" valign="top">12 (11)</td>
<td align="right" valign="top">2</td>
</tr>
<tr>
<td align="left" valign="top">P08</td>
<td align="right" valign="top">45 (0)</td>
<td align="right" valign="top">32</td>
<td align="right" valign="top">45 (0)</td>
<td align="right" valign="top">11</td>
<td align="right" valign="top">28 (20)</td>
<td align="right" valign="top">45</td>
</tr>
<tr>
<td align="left" valign="top">P09</td>
<td align="right" valign="top">42 (6)</td>
<td align="right" valign="top">4</td>
<td align="right" valign="top">45 (0)</td>
<td align="right" valign="top">41</td>
<td align="right" valign="top">41 (9)</td>
<td align="right" valign="top">42</td>
</tr>
<tr>
<td align="left" valign="top">P10</td>
<td align="right" valign="top">38 (10)</td>
<td align="right" valign="top">1</td>
<td align="right" valign="top">45 (0)</td>
<td align="right" valign="top">45</td>
<td align="right" valign="top">38 (16)</td>
<td align="right" valign="top">45</td>
</tr>
</table>
</table-wrap>
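<p>Summaries of the kind shown in Table 3 are straightforward to derive from a raw adhesion log. The sketch below is a minimal, hypothetical illustration (the column names and example values are ours, not the experiment&#8217;s raw data): a pandas groupby collapses the four non-TB sensors into one group and reports the mean (and SD) per participant and preparation type.</p>

```python
import pandas as pd

# Hypothetical adhesion log: one row per tongue sensor for one session.
df = pd.DataFrame({
    "participant": ["P01"] * 5,
    "prep": ["bare"] * 5,
    "sensor": ["TT", "TM", "TB", "TLL", "TLR"],
    "minutes": [18, 30, 18, 25, 20],
})

# Collapse the four non-TB sensors into a single group, as in Table 3.
df["group"] = df["sensor"].where(df["sensor"] == "TB", "Non-TB")

# Mean adhesion time per group; the SD is only meaningful for Non-TB,
# which aggregates four sensors (TB is a single observation).
summary = (df.groupby(["participant", "prep", "group"])["minutes"]
             .agg(["mean", "std"]))
print(summary.round(1))
```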
</sec>
<sec>
<title>5.6. Discussion of experimental investigation</title>
<p>In general, our sensor adhesion experiment demonstrated no clear overall advantage of any particular sensor preparation type. With five sensors on the participant&#8217;s tongue, it was difficult to make all of them adhere for the full 45 minutes (with only two participants as exceptions). The adhesiveness of the TB sensor, which was significantly lower than that of the other sensors, did improve when the sensor was prepared with a latex flap. When attaching intraoral sensors, it is crucial to maintain a sterile environment. As sensors coated in latex (both with and without a latex flap) are more hygienic, easier to clean, and likely deteriorate more slowly, we recommend coating the sensor in latex when possible. Based on our results, we further recommend adding a latex flap for the tongue back sensor.</p>
<p>Additionally, we would like to mention some qualitative observations. Placing a total of five sensors on the tongue is not ideal, particularly keeping in mind that in a regular experiment two additional intraoral sensors would need to be included as well. This difficulty was especially pronounced with the latex flap sensors, as the tongue surface required to attach them was largest. Participants also appeared to take longer to habituate to the sensors with a latex flap. However, their articulation seemed to return to normal within the first ten minutes of the experiment (although we did not quantify this), so this should not be problematic for a regular experimental setup. A practical advantage of the sensors with a latex flap was that once part of the flap detached from the tongue, the participant quickly noticed this, and the issue was easily resolved by adding some glue underneath the flap.</p>
<p>There were several limitations to this study. First, we adhered five sensors to the tongue, which is a larger number than usual. While this was done on purpose, as we wished to assess the adhesion not only of sensors placed midsagittally but also of those placed laterally, it might not adequately reflect how long the sensors would adhere in a normal experimental scenario with only two or three tongue sensors. Second, we did not re-adhere the sensors once they fell off. In a real experimental scenario, one would reglue the sensor to the position from which it fell off. In our experience, a sensor with a flap is easiest to reglue, as its adhesive surface is largest.</p>
<p>Another experiment would need to be conducted to assess how the different sensor types compare when focusing on ease of reattachment. Finally, sensor placement and its effectiveness are strongly impacted by individual factors. While we included certain tongue anatomical measures (none of which turned out to be significant predictors in our best model), others that were not measured and differ between participants&#8212;such as salivary flow rate (<xref ref-type="bibr" rid="B169">Whelton, 2012</xref>) and tongue surface (<xref ref-type="bibr" rid="B86">Kullaa-Mikkonen et al., 1982</xref>)&#8212;likely play an important role as well.<xref ref-type="fn" rid="n14">14</xref></p>
</sec>
</sec>
<sec>
<title>6. Conclusion</title>
<p>The present paper provided an introduction to electromagnetic articulography and an overview of data collection procedures on the basis of reviewing 905 publications employing electromagnetic (midsagittal) articulography since 1987. In addition, we provided a detailed description of the procedure used in our own lab.</p>
<p>EMA data collection and analysis are time-consuming and technically demanding. Consequently, it is difficult to include a large number of participants. Compare, for example, the five participants that seem to be the norm in EMA research (see Section 2.5) with the 50 participants that would be needed for a study with 80% power aimed at identifying effect sizes as low as Cohen&#8217;s <italic>d</italic> = 0.4 (<xref ref-type="bibr" rid="B15">Brysbaert, 2019</xref>). If testing 50 or more participants is not feasible, then the individuals who participate in EMA studies should be carefully selected, and the testing procedure should facilitate between- and within-speaker comparability. Reliable, accurate, and replicable sensor placement should therefore be ensured.</p>
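<p>The sample-size figure cited from Brysbaert (2019) can be approximated with a standard normal-approximation power calculation for a paired (within-subject) comparison. The sketch below, using only the Python standard library, is our illustration rather than Brysbaert&#8217;s computation; the function name is ours.</p>

```python
from math import ceil
from statistics import NormalDist

def participants_needed(d, power=0.80, alpha=0.05):
    """Approximate N for a paired t-test detecting standardized
    effect size d (two-sided test, normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value, two-sided
    z_beta = z.inv_cdf(power)           # value for the desired power
    return ceil(((z_alpha + z_beta) / d) ** 2)

print(participants_needed(0.4))  # -> 50 participants for d = 0.4, 80% power
```

The exact t-based calculation adds one or two participants to this approximation, which is why published recommendations hover around 50.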
<p>As we demonstrated in our review, however, there is currently still a great variety of approaches to EMA sensor preparation and placement. For example, while nearly all studies use a tongue tip sensor, frequently placing it &#8216;1 cm&#8217; behind the anatomical tongue tip, researchers often do not specify how this distance from the tongue tip was measured (e.g., using a ruler as opposed to eyeballing) nor the position that the tongue was in (e.g., at rest inside the mouth, comfortably protruded, completely stretched). This can make a substantial difference, however: based on our experience, a point that is 1 cm from the tip with the tongue at rest can be nearly 1.5 cm from the tip when the tongue is protruded. Another example of varying sensor placement strategies pertains to the &#8216;tongue back&#8217; sensor, which is often placed an arbitrary number of centimetres from the tongue tip or as far back as comfortable and/or possible. Participants, however, are not comparable, as tongue sizes, oral cavities, and comfort levels can differ greatly. One strategy for solving this (also used within our lab) is to place the tongue back sensor where the /k/ (or another sound involving a posterior constriction) is made. In this way, the placement of the sensor makes sense from an articulatory perspective, which is missing from the other (more arbitrary) approaches.<xref ref-type="fn" rid="n15">15</xref> Other conundrums with intraoral sensor placement, unfortunately, are not as easily solvable. These include, for example, situations in which a speaker does not have enough gingival tissue for the placement of an incisor sensor, in which the tongue of the speaker is too small to place the desired number of sensors, or in which a speaker produces too much saliva, causing the sensors to fall off repeatedly.</p>
<p>As point-tracking technology continues to improve, it is necessary to strive for better and more consistent methods of sensor adhesion, preparation, and placement: not to limit the creativity of researchers, but to ensure more comparable results in a field that typically relies on small sample sizes. It is our hope that this paper may serve as a starting point for further debate on the topic.</p>
</sec>
<sec sec-type="supplementary-material">
<title>Additional File</title>
<p>The following additional file for this article can be found as follows.</p>
<supplementary-material id="S1" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://doi.org/10.5334/labphon.237.s1">
<!--[<inline-supplementary-material xlink:title="local_file" xlink:href="labphon-12-237-s1.xlsx">labphon-12-237-s1.xlsx</inline-supplementary-material>]-->
<label>Appendix</label>
<caption>
<p>An .xlsx file, which includes all EMA studies that were collected as part of our literature review. The appendix contains information on the topic, studied population, and sensors in use. It also includes specific information on sensor placement strategies for tongue sensors. DOI: <uri>https://doi.org/10.5334/labphon.237.s1</uri></p>
</caption>
</supplementary-material>
</sec>
</body>
<back>
<fn-group>
<fn id="n1"><p>Electromagnetic Articulography (EMA) used to be known as Electromagnetic Midsagittal Articulography (EMMA). While the &#8216;midsagittal&#8217; part is not applicable anymore as the sensors are tracked in 3D, both spellings remain in use in the literature. Other alternative names include &#8216;(electromagnetic) articulometry&#8217; and &#8216;electromagnetometry.&#8217; The device can be called an EMA, an articulograph, an articulometer, or (especially in the early years) a magnetometer.</p></fn>
<fn id="n2"><p>The predecessor to the articulographs was the x-ray microbeam, which tracked six pellets on the tongue and teeth (<xref ref-type="bibr" rid="B78">Kiritani, Itoh, &amp; Fujimura, 1975</xref>).</p></fn>
<fn id="n3"><p>Please note that the term &#8216;moderate-strength&#8217; is used here as the field is strong enough to cause interference with various devices (to the extent of corrupting the data, not harming the participant), but not nearly as strong as, for example, the field in an MRI chamber.</p></fn>
<fn id="n4"><p>Our literature review underwent three separate stages, going from 247 publications (first draft) to 626 publications (second draft) and finally to 905 publications (final publication). For the first draft of this paper, we collected publications from five international peer-reviewed journals (namely the <italic>Journal of Laboratory Phonology</italic>; <italic>The Journal of the Acoustical Society of America</italic>; the <italic>Journal of Phonetics</italic>; the <italic>Journal of Speech, Language, and Hearing Research</italic>; and <italic>Clinical Linguistics and Phonetics</italic>) as well as conference abstracts from the <italic>International Congress of the Phonetic Sciences</italic>, which led us to identify 247 publications. On the basis of reviewers&#8217; comments, we decided to perform a more extensive literature review for the second draft of the paper. We used the search terms &#8216;electromagnetic articulography&#8217; and &#8216;electromagnetic midsagittal articulography&#8217; on Google Scholar, which led us to identify 626 publications. In the second round of revisions, however, a reviewer (justly) pointed out that &#8216;articulometry&#8217; is a frequent term that should be included. We therefore finally used the search terms described in Section 3 of this paper (namely, &#8216;articulography,&#8217; &#8216;articulograph,&#8217; &#8216;articulometry,&#8217; and &#8216;articulometer&#8217;), excluding the search terms we had looked for previously for the second draft. We did not discard any publications at any stage of the process.</p></fn>
<fn id="n5"><p>As electromagnetic articulography was pioneered in Germany, many early papers are written in German.</p></fn>
<fn id="n6"><p>Researchers refer to both &#8216;biteplate&#8217; and &#8216;biteplane&#8217; recordings.</p></fn>
<fn id="n7"><p>The company Ellman International, Inc., seems to have been acquired by <xref ref-type="bibr" rid="B26">Cynosure, Inc., in 2014</xref> (<xref ref-type="bibr" rid="B26">Cynosure, Inc., 2014, para. 1</xref>) and some products were discontinued.</p></fn>
<fn id="n8"><p>Following Seikel, Drumright, and Hudock (<xref ref-type="bibr" rid="B133">2020, Ch. 6</xref>), the tongue consists of the tongue tip or apex (i.e., the anterior-most portion of the tongue), the tongue body (i.e., the portion of the tongue that is found within the oral cavity and makes up about two thirds of the tongue surface), and the tongue root or base (i.e., the part of the tongue that resides in the oropharynx). The superior surface of the tongue is dorsal (also called the tongue dorsum), and the undersurface is ventral. The median sulcus divides the tongue into left and right sides.</p></fn>
<fn id="n9"><p>Unfortunately, Patem et al. (<xref ref-type="bibr" rid="B118">2018</xref>) do not specify how their manual annotators defined &#8216;tongue base,&#8217; but it is presumed that it refers to the point where the tongue meets the floor of the mouth.</p></fn>
<fn id="n10"><p>Coffee, especially, leaves a brown coating on the tongue, which is not optimal for sensor placement.</p></fn>
<fn id="n11"><p>In principle, three (or even two) reference sensors are enough to correct head movement. However, we (as many other researchers) use one additional sensor as a backup in case one of the reference sensors malfunctions. We do not use the NDI 6DOF sensor (containing two sensors with a specific distance and orientation towards each other) which may be used to automatically correct for head movement, but use separate reference sensors instead, as it is beneficial to maximize the difference between the reference sensors to minimize the influence of noise from the reference sensors on the rotation.</p></fn>
<fn id="n12"><p>We use the dental mask for adults but often avoid it for children, as they do not yet have such a strong &#8216;germ reflex&#8217; and we noticed it makes them feel uncomfortable.</p></fn>
<fn id="n13"><p>This procedure is similar to the procedure used by Brunner, Hoole, and Perrier (<xref ref-type="bibr" rid="B14">2011b</xref>). However, we use the colour transfer applicator to mark the spot where the participant produces their /k/. Brunner et al. (<xref ref-type="bibr" rid="B14">2011b</xref>) used an oral disinfectant with a strong purple colouring agent and asked the participant to close their mouth and push their tongue (neutral position) against the hard palate. The colour mark was thus transferred to the tongue dorsum.</p></fn>
<fn id="n14"><p>During our sensor placement, we did notice that intraoral sensors were more difficult to adhere to those participants who produced more saliva. However, as we could not objectively measure salivary flow rates, we cannot accurately report on the relationship between saliva production and sensor adhesiveness.</p></fn>
<fn id="n15"><p>One centimetre behind the tongue tip is also somewhat arbitrary, however it seems to be a good compromise between seeing (and measuring) tongue tip movements and not overly impeding participants&#8217; speech.</p></fn>
</fn-group>
<ack>
<title>Acknowledgements</title>
<p>We would like to thank the editors and two anonymous reviewers for comments that helped us improve the paper substantially. Importantly, we would like to acknowledge that many parts of the EMA approach used in our lab are based on borrowing successful approaches from other labs. We would particularly like to thank Mark Tiede for demonstrating his procedure at Haskins Laboratories and for commenting on an earlier version of this paper. Furthermore, we would like to thank all other researchers with whom we have discussed issues and exchanged experiences regarding EMA studies, including June Sun, Fabian Tomaschek, Marianne Pouplier, Michael Proctor, and Stefanie Keulen.</p>
<p>We would further like to acknowledge funding from the Dutch Research Organisation (NWO) to Martijn Wieling (grants no. 019.2011.3.110.016, 016.144.049 and PGW.19.034), and the International Macquarie University Research Excellence Scholarship (iMQRES) grant awarded to Jidde Jacobi.</p>
</ack>
<sec>
<title>Competing Interests</title>
<p>The authors have no competing interests to declare.</p>
</sec>
<ref-list>
<ref id="B1"><label>1</label><mixed-citation publication-type="journal"><string-name><surname>Alvarez</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Dias</surname>, <given-names>F. J.</given-names></string-name>, <string-name><surname>Lezcano</surname>, <given-names>M. F.</given-names></string-name>, <string-name><surname>Arias</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Fuentes</surname>, <given-names>R.</given-names></string-name> (<year>2019</year>). <article-title>A Novel Three-Dimensional Analysis of Tongue Movement During Water and Saliva Deglutition: A Preliminary Study on Swallowing Patterns</article-title>. <source>Dysphagia</source>, <volume>34</volume>, <fpage>397</fpage>&#8211;<lpage>406</lpage>. DOI: <pub-id pub-id-type="doi">10.1007/s00455-018-9953-0</pub-id></mixed-citation></ref>
<ref id="B2"><label>2</label><mixed-citation publication-type="journal"><string-name><surname>Aron</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Berger</surname>, <given-names>M.-O.</given-names></string-name>, <string-name><surname>Kerrien</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Wrobel-Dautcourt</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Potard</surname>, <given-names>B.</given-names></string-name>, &amp; <string-name><surname>Laprie</surname>, <given-names>Y.</given-names></string-name> (<year>2016</year>). <article-title>Multimodal acquisition of articulatory data: Geometrical and temporal registration</article-title>. <source>JASA</source>, <volume>139</volume>(<issue>2</issue>), <fpage>636</fpage>&#8211;<lpage>648</lpage>. DOI: <pub-id pub-id-type="doi">10.1121/1.4940666</pub-id></mixed-citation></ref>
<ref id="B3"><label>3</label><mixed-citation publication-type="journal"><string-name><surname>Badin</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Tarabalka</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Elisei</surname>, <given-names>F.</given-names></string-name>, &amp; <string-name><surname>Bailly</surname>, <given-names>G.</given-names></string-name> (<year>2010</year>). <article-title>Can you &#8216;read&#8217; tongue movements? Evaluation of the contribution of tongue display to speech understanding</article-title>. <source>Speech Communication</source>, <volume>52</volume>, <fpage>493</fpage>&#8211;<lpage>503</lpage>. DOI: <pub-id pub-id-type="doi">10.1016/j.specom.2010.03.002</pub-id></mixed-citation></ref>
<ref id="B4"><label>4</label><mixed-citation publication-type="journal"><string-name><surname>Bakst</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>Johnson</surname>, <given-names>K.</given-names></string-name> (<year>2018</year>). <article-title>Modeling the effect of palate shape on the articulatory-acoustics mapping</article-title>. <source>JASA Express Letters</source>, <volume>144</volume>(<issue>1</issue>), <fpage>EL71</fpage>&#8211;<lpage>EL75</lpage>. DOI: <pub-id pub-id-type="doi">10.1121/1.5048043</pub-id></mixed-citation></ref>
<ref id="B5"><label>5</label><mixed-citation publication-type="journal"><string-name><surname>Ball</surname>, <given-names>M. J.</given-names></string-name>, <string-name><surname>Gracco</surname>, <given-names>V.</given-names></string-name>, &amp; <string-name><surname>Stone</surname>, <given-names>M.</given-names></string-name> (<year>2001</year>). <article-title>A Comparison of Imaging Techniques for the Investigation of Normal and Disordered Speech Production</article-title>. <source>Advances in Speech Language Pathology</source>, <volume>3</volume>(<issue>1</issue>), <fpage>13</fpage>&#8211;<lpage>24</lpage>. DOI: <pub-id pub-id-type="doi">10.3109/14417040109003705</pub-id></mixed-citation></ref>
<ref id="B6"><label>6</label><mixed-citation publication-type="journal"><string-name><surname>Bartle-Meyer</surname>, <given-names>C. J.</given-names></string-name>, <string-name><surname>Gooz&#233;e</surname>, <given-names>J. V.</given-names></string-name>, &amp; <string-name><surname>Murdoch</surname>, <given-names>B. E.</given-names></string-name> (<year>2009</year>). <article-title>Kinematic investigation of lingual movement in words of increasing length in acquired apraxia of speech</article-title>. <source>Clinical Linguistics and Phonetics</source>, <volume>23</volume>(<issue>2</issue>), <fpage>93</fpage>&#8211;<lpage>121</lpage>. DOI: <pub-id pub-id-type="doi">10.1080/02699200802564284</pub-id></mixed-citation></ref>
<ref id="B7"><label>7</label><mixed-citation publication-type="journal"><string-name><surname>Benus</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>Gafos</surname>, <given-names>A. I.</given-names></string-name> (<year>2007</year>). <article-title>Articulatory characteristics of Hungarian &#8216;transparent&#8217; vowels</article-title>. <source>Journal of Phonetics</source>, <volume>35</volume>, <fpage>271</fpage>&#8211;<lpage>300</lpage>. DOI: <pub-id pub-id-type="doi">10.1016/j.wocn.2006.11.002</pub-id></mixed-citation></ref>
<ref id="B8"><label>8</label><mixed-citation publication-type="journal"><string-name><surname>Berry</surname>, <given-names>J. J.</given-names></string-name> (<year>2011</year>). <article-title>Accuracy of the NDI Wave Speech Research System</article-title>. <source>JSLHR</source>, <volume>54</volume>, <fpage>1295</fpage>&#8211;<lpage>1301</lpage>. DOI: <pub-id pub-id-type="doi">10.1044/1092-4388(2011/10-0226)</pub-id></mixed-citation></ref>
<ref id="B9"><label>9</label><mixed-citation publication-type="journal"><string-name><surname>Berry</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Kolb</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Schroeder</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name><surname>Johnson</surname>, <given-names>M. T.</given-names></string-name> (<year>2017</year>). <article-title>Jaw Rotation in Dysarthria Measured with a Single Electromagnetic Articulography Sensor</article-title>. <source>American Journal of Speech-Language Pathology</source>, <volume>26</volume>(<issue>2S</issue>), <fpage>596</fpage>&#8211;<lpage>610</lpage>. DOI: <pub-id pub-id-type="doi">10.1044/2017_AJSLP-16-0104</pub-id></mixed-citation></ref>
<ref id="B10"><label>10</label><mixed-citation publication-type="journal"><string-name><surname>Bocquelet</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Hueber</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Girin</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Savariaux</surname>, <given-names>C.</given-names></string-name>, &amp; <string-name><surname>Yvert</surname>, <given-names>B.</given-names></string-name> (<year>2016</year>). <article-title>Real-Time Control of an Articulatory-Based Speech Synthesizer for Brain Computer Interfaces</article-title>. <source>PLoS Computational Biology</source>, <volume>12</volume>(<issue>11</issue>), <elocation-id>e1005119</elocation-id>. DOI: <pub-id pub-id-type="doi">10.1371/journal.pcbi.1005119</pub-id></mixed-citation></ref>
<ref id="B11"><label>11</label><mixed-citation publication-type="confproc"><string-name><surname>Branderud</surname>, <given-names>P.</given-names></string-name> (<year>1985</year>). <article-title>Movetrack &#8211; a movement tracking system</article-title>. <conf-name>Proceedings of the French-Swedish Symposium on Speech</conf-name>, <conf-loc>Grenoble, France</conf-loc>, pp. <fpage>113</fpage>&#8211;<lpage>122</lpage>.</mixed-citation></ref>
<ref id="B12"><label>12</label><mixed-citation publication-type="journal"><string-name><surname>Brunner</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Fuchs</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>Perrier</surname>, <given-names>P.</given-names></string-name> (<year>2009</year>). <article-title>On the relationship between palate shape and articulatory behavior</article-title>. <source>JASA</source>, <volume>125</volume>, <fpage>3936</fpage>&#8211;<lpage>3949</lpage>. DOI: <pub-id pub-id-type="doi">10.1121/1.3125313</pub-id></mixed-citation></ref>
<ref id="B13"><label>13</label><mixed-citation publication-type="journal"><string-name><surname>Brunner</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Fuchs</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>Perrier</surname>, <given-names>P.</given-names></string-name> (<year>2011a</year>). <article-title>Supralaryngeal control in Korean velar stops</article-title>. <source>Journal of Phonetics</source>, <volume>39</volume>, <fpage>178</fpage>&#8211;<lpage>195</lpage>. DOI: <pub-id pub-id-type="doi">10.1016/j.wocn.2011.01.003</pub-id></mixed-citation></ref>
<ref id="B14"><label>14</label><mixed-citation publication-type="journal"><string-name><surname>Brunner</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Hoole</surname>, <given-names>P.</given-names></string-name>, &amp; <string-name><surname>Perrier</surname>, <given-names>P.</given-names></string-name> (<year>2011b</year>). <article-title>Adaptation strategies in perturbed /s/</article-title>. <source>Clinical Linguistics and Phonetics</source>, <volume>25</volume>(<issue>8</issue>), <fpage>705</fpage>&#8211;<lpage>724</lpage>. DOI: <pub-id pub-id-type="doi">10.3109/02699206.2011.553699</pub-id></mixed-citation></ref>
<ref id="B15"><label>15</label><mixed-citation publication-type="journal"><string-name><surname>Brysbaert</surname>, <given-names>M.</given-names></string-name> (<year>2019</year>). <article-title>How Many Participants Do We Have to Include in Properly Powered Experiments? A Tutorial of Power Analysis with Reference Tables</article-title>. <source>Journal of Cognition</source>, <volume>2</volume>(<issue>1</issue>), art. <fpage>16</fpage>. DOI: <pub-id pub-id-type="doi">10.5334/joc.72</pub-id></mixed-citation></ref>
<ref id="B16"><label>16</label><mixed-citation publication-type="book"><string-name><surname>B&#252;ckins</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Greisbach</surname>, <given-names>R.</given-names></string-name>, &amp; <string-name><surname>Hermes</surname>, <given-names>A.</given-names></string-name> (<year>2018</year>). <article-title>Larynx movement in the production of Georgian ejective sounds</article-title>. In <source>Challenges in Analysis and Processing of Spontaneous Speech</source>, <fpage>127</fpage>&#8211;<lpage>138</lpage>. DOI: <pub-id pub-id-type="doi">10.18135/CAPSS.127</pub-id></mixed-citation></ref>
<ref id="B17"><label>17</label><mixed-citation publication-type="journal"><string-name><surname>Bukmaier</surname>, <given-names>V.</given-names></string-name>, &amp; <string-name><surname>Harrington</surname>, <given-names>J.</given-names></string-name> (<year>2016</year>). <article-title>The articulatory and acoustic characteristics of Polish sibilants and their consequences for diachronic change</article-title>. <source>Journal of the International Phonetic Association</source>, <volume>46</volume>(<issue>3</issue>), <fpage>311</fpage>&#8211;<lpage>329</lpage>. DOI: <pub-id pub-id-type="doi">10.1017/S0025100316000062</pub-id></mixed-citation></ref>
<ref id="B18"><label>18</label><mixed-citation publication-type="confproc"><string-name><surname>Cai</surname>, <given-names>Z.</given-names></string-name>, <string-name><surname>Qin</surname>, <given-names>X.</given-names></string-name>, <string-name><surname>Cai</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Li</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Liu</surname>, <given-names>X.</given-names></string-name>, &amp; <string-name><surname>Zhong</surname>, <given-names>H.</given-names></string-name> (<year>2018</year>). <article-title>The DKU-JNU-EMA electromagnetic articulography database on Mandarin and Chinese dialects with tandem feature based acoustic-to-articulatory inversion</article-title>. <source>ISCSLP 2018 &#8211; Proceedings</source>, <fpage>235</fpage>&#8211;<lpage>239</lpage>. DOI: <pub-id pub-id-type="doi">10.1109/ISCSLP.2018.8706629</pub-id></mixed-citation></ref>
<ref id="B19"><label>19</label><mixed-citation publication-type="confproc"><string-name><surname>Canevari</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Badino</surname>, <given-names>L.</given-names></string-name>, &amp; <string-name><surname>Fadiga</surname>, <given-names>L.</given-names></string-name> (<year>2015</year>). <article-title>A new Italian dataset of parallel acoustic and articulatory data</article-title>. <conf-name>INTERSPEECH 2015</conf-name>, <fpage>2152</fpage>&#8211;<lpage>2156</lpage>.</mixed-citation></ref>
<ref id="B20"><label>20</label><mixed-citation publication-type="journal"><string-name><surname>Carignan</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Shosted</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Shih</surname>, <given-names>C.</given-names></string-name>, &amp; <string-name><surname>Rong</surname>, <given-names>P.</given-names></string-name> (<year>2011</year>). <article-title>Articulatory compensation for nasality: An EMA study of lingual position during nasalized vowels</article-title>. <source>Journal of Phonetics</source>, <volume>39</volume>(<issue>4</issue>), <fpage>668</fpage>&#8211;<lpage>682</lpage>. DOI: <pub-id pub-id-type="doi">10.1016/j.wocn.2011.07.005</pub-id></mixed-citation></ref>
<ref id="B21"><label>21</label><mixed-citation publication-type="webpage"><collab>Carstens Medizinelektronik GmbH</collab>. (<year>2006</year>). <source>AG500 Manual</source>. Retrieved from <uri>http://www.ag500.de/</uri></mixed-citation></ref>
<ref id="B22"><label>22</label><mixed-citation publication-type="webpage"><collab>Carstens Medizinelektronik GmbH</collab>. (<year>2014</year>). <source>AG501 Manual</source>. Retrieved from <uri>https://www.ag500.de/</uri></mixed-citation></ref>
<ref id="B23"><label>23</label><mixed-citation publication-type="journal"><string-name><surname>Cheng</surname>, <given-names>H. Y.</given-names></string-name>, <string-name><surname>Murdoch</surname>, <given-names>B. E.</given-names></string-name>, <string-name><surname>Gooz&#233;e</surname>, <given-names>J. V.</given-names></string-name>, &amp; <string-name><surname>Scott</surname>, <given-names>D.</given-names></string-name> (<year>2007</year>). <article-title>Physiologic development of tongue-jaw coordination from childhood to adulthood</article-title>. <source>JSLHR</source>, <volume>50</volume>(<issue>2</issue>), <fpage>352</fpage>&#8211;<lpage>360</lpage>. DOI: <pub-id pub-id-type="doi">10.1044/1092-4388(2007/025)</pub-id></mixed-citation></ref>
<ref id="B24"><label>24</label><mixed-citation publication-type="journal"><string-name><surname>Cler</surname>, <given-names>G. J.</given-names></string-name>, <string-name><surname>Lee</surname>, <given-names>J. C.</given-names></string-name>, <string-name><surname>Mittelman</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Stepp</surname>, <given-names>C. E.</given-names></string-name>, &amp; <string-name><surname>Bohland</surname>, <given-names>J. W.</given-names></string-name> (<year>2017</year>). <article-title>Kinematic analysis of speech sound sequencing errors induced by delayed auditory feedback</article-title>. <source>JSLHR</source>, <volume>60</volume>(<issue>6, special issue</issue>), <fpage>1695</fpage>&#8211;<lpage>1711</lpage>. DOI: <pub-id pub-id-type="doi">10.1044/2017_JSLHR-S-16-0234</pub-id></mixed-citation></ref>
<ref id="B25"><label>25</label><mixed-citation publication-type="journal"><string-name><surname>Crose</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Kuk</surname>, <given-names>F.</given-names></string-name>, &amp; <string-name><surname>Bindeballe</surname>, <given-names>H.</given-names></string-name> (<year>2011</year>). <article-title>Digital Wireless Hearing Aids, Part 4: Interference</article-title>. <source>Hearing Review</source>, <volume>18</volume>(<issue>13</issue>), <fpage>30</fpage>&#8211;<lpage>39</lpage>. Retrieved from <uri>www.hearingreview.com</uri></mixed-citation></ref>
<ref id="B26"><label>26</label><mixed-citation publication-type="webpage"><collab>Cynosure, Inc.</collab> (<year>2014</year>, <month>September</month> <day>8</day>). <source>Cynosure Acquires Assets of RF Medical Device Manufacturer Ellman International, Inc</source> [Press release]. Retrieved from <uri>https://prnewswire.com</uri></mixed-citation></ref>
<ref id="B27"><label>27</label><mixed-citation publication-type="confproc"><string-name><surname>Demange</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>Ouni</surname>, <given-names>S.</given-names></string-name> (<year>2011</year>). <article-title>Continuous Episodic Memory Based Speech Recognition Using Articulatory Dynamics</article-title>. <conf-name>Proceedings of INTERSPEECH 2011</conf-name>, <fpage>2305</fpage>&#8211;<lpage>2308</lpage>.</mixed-citation></ref>
<ref id="B28"><label>28</label><mixed-citation publication-type="journal"><string-name><surname>Didirkov&#225;</surname>, <given-names>I.</given-names></string-name>, &amp; <string-name><surname>Hirsch</surname>, <given-names>F.</given-names></string-name> (<year>2019</year>). <article-title>A two-case study of coarticulation in stuttered speech. An articulatory approach</article-title>. <source>Clinical Linguistics &amp; Phonetics</source>. DOI: <pub-id pub-id-type="doi">10.1080/02699206.2019.1660913</pub-id></mixed-citation></ref>
<ref id="B29"><label>29</label><mixed-citation publication-type="journal"><string-name><surname>Dromey</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Hunter</surname>, <given-names>E.</given-names></string-name>, &amp; <string-name><surname>Nissen</surname>, <given-names>S. L.</given-names></string-name> (<year>2018</year>). <article-title>Speech adaptation to kinematic recording sensors: Perceptual and acoustic findings</article-title>. <source>JSLHR</source>, <volume>61</volume>(<issue>3</issue>), <fpage>593</fpage>&#8211;<lpage>603</lpage>. DOI: <pub-id pub-id-type="doi">10.1044/2017_JSLHR-S-17-0169</pub-id></mixed-citation></ref>
<ref id="B30"><label>30</label><mixed-citation publication-type="journal"><string-name><surname>Earnest</surname>, <given-names>M. M.</given-names></string-name>, &amp; <string-name><surname>Max</surname>, <given-names>L.</given-names></string-name> (<year>2003</year>). <article-title>En Route to the Three-Dimensional Registration and Analysis of Speech Movements: Instrumental Techniques for the Study of Articulatory Kinematics</article-title>. <source>Contemporary Issues in Communication Science and Disorders</source>, <volume>30</volume>, <fpage>5</fpage>&#8211;<lpage>25</lpage>. DOI: <pub-id pub-id-type="doi">10.1044/cicsd_30_S_5</pub-id></mixed-citation></ref>
<ref id="B31"><label>31</label><mixed-citation publication-type="webpage"><collab>Electromagnetic Articulograph</collab>. (<year>2019</year>). <source>Highest-precision Electromagnetic Articulography (EMA): 3D recording of articulatory orofacial movements</source>. Retrieved from <uri>www.articulograph.de</uri></mixed-citation></ref>
<ref id="B32"><label>32</label><mixed-citation publication-type="journal"><string-name><surname>Engelke</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Engelke</surname>, <given-names>D.</given-names></string-name>, &amp; <string-name><surname>Schwetska</surname>, <given-names>R.</given-names></string-name> (<year>1990</year>). <article-title>Clinical and instrumental examination of tongue motor function [in German]</article-title>. <source>Deutsche Zahnarztliche Zeitschrift</source>, <volume>45</volume>(<issue>7</issue>), <fpage>S11</fpage>&#8211;<lpage>6</lpage>.</mixed-citation></ref>
<ref id="B33"><label>33</label><mixed-citation publication-type="journal"><string-name><surname>Engelke</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Hoch</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Bruns</surname>, <given-names>T.</given-names></string-name>, &amp; <string-name><surname>Striebeck</surname>, <given-names>M.</given-names></string-name> (<year>1996</year>). <article-title>Simultaneous Evaluation of Articulatory Velopharyngeal Function under Different Dynamic Conditions with EMA and Videoendoscopy</article-title>. <source>Folia Phoniatrica et Logopaedica</source>, <volume>48</volume>(<issue>2</issue>), <fpage>65</fpage>&#8211;<lpage>77</lpage>. DOI: <pub-id pub-id-type="doi">10.1159/000266387</pub-id></mixed-citation></ref>
<ref id="B34"><label>34</label><mixed-citation publication-type="journal"><string-name><surname>Engelke</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Sch&#246;nle</surname>, <given-names>P. W.</given-names></string-name>, <string-name><surname>Kring</surname>, <given-names>R. A.</given-names></string-name>, &amp; <string-name><surname>Richter</surname>, <given-names>C.</given-names></string-name> (<year>1989</year>). <article-title>Electromagnetic articulography (EMA) studies on orofacial movement functions [in German]</article-title>. <source>Deutsche Zahnarztliche Zeitschrift</source>, <volume>44</volume>(<issue>8</issue>), <fpage>618</fpage>&#8211;<lpage>622</lpage>.</mixed-citation></ref>
<ref id="B35"><label>35</label><mixed-citation publication-type="journal"><string-name><surname>Flink</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Bergdahl</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Tegelberg</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Rosenblad</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Lagerl&#246;f</surname>, <given-names>F.</given-names></string-name> (<year>2008</year>). <article-title>Prevalence of hyposalivation in relation to general health, body mass index and remaining teeth in different age groups of adults</article-title>. <source>Community Dentistry and Oral Epidemiology</source>, <volume>36</volume>(<issue>6</issue>), <fpage>523</fpage>&#8211;<lpage>531</lpage>. DOI: <pub-id pub-id-type="doi">10.1111/j.1600-0528.2008.00432.x</pub-id></mixed-citation></ref>
<ref id="B36"><label>36</label><mixed-citation publication-type="journal"><string-name><surname>Friedman</surname>, <given-names>J. H.</given-names></string-name>, <string-name><surname>Brown</surname>, <given-names>R. G.</given-names></string-name>, <string-name><surname>Comella</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Garber</surname>, <given-names>C. E.</given-names></string-name>, <string-name><surname>Krupp</surname>, <given-names>L. B.</given-names></string-name>, <string-name><surname>Lou</surname>, <given-names>J.-S.</given-names></string-name>, <string-name><surname>Marsh</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Nail</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Shulman</surname>, <given-names>L.</given-names></string-name>, &amp; <string-name><surname>Taylor</surname>, <given-names>C. B.</given-names></string-name> (<year>2007</year>). <article-title>Fatigue in Parkinson&#8217;s disease: A review</article-title>. <source>Movement Disorders</source>, <volume>22</volume>(<issue>3</issue>), <fpage>297</fpage>&#8211;<lpage>308</lpage>. DOI: <pub-id pub-id-type="doi">10.1002/mds.21240</pub-id></mixed-citation></ref>
<ref id="B37"><label>37</label><mixed-citation publication-type="thesis"><string-name><surname>Fuchs</surname>, <given-names>S.</given-names></string-name> (<year>2005</year>). <source>Articulatory correlates of the voicing contrast in alveolar obstruent production in German</source> (Doctoral thesis, Centre for General Linguistics, <publisher-loc>Berlin, Germany</publisher-loc>). <publisher-name>Deutsche National Bibliothek</publisher-name>. <uri>https://d-nb.info/105944173X/34</uri>. DOI: <pub-id pub-id-type="doi">10.21248/zaspil.41.2005.268</pub-id></mixed-citation></ref>
<ref id="B38"><label>38</label><mixed-citation publication-type="journal"><string-name><surname>Fuentes</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Dias</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Alvarez</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Lezcano</surname>, <given-names>M. F.</given-names></string-name>, <string-name><surname>Farfan</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Astete</surname>, <given-names>N.</given-names></string-name>, &amp; <string-name><surname>Arias</surname>, <given-names>A.</given-names></string-name> (<year>2018</year>). <article-title>Application of 3D Electromagnetic Articulography in Dentistry: Mastication and Deglutition Analysis. Protocol Report</article-title>. <source>International Journal of Odontostomatology</source>, <volume>12</volume>(<issue>1</issue>), <fpage>105</fpage>&#8211;<lpage>112</lpage>. DOI: <pub-id pub-id-type="doi">10.4067/S0718-381X2018000100105</pub-id></mixed-citation></ref>
<ref id="B39"><label>39</label><mixed-citation publication-type="webpage"><string-name><surname>Gafos</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Kirov</surname>, <given-names>C.</given-names></string-name>, &amp; <string-name><surname>Shaw</surname>, <given-names>J.</given-names></string-name> (<year>2010</year>). <source>Guidelines for using mview</source>. Retrieved from: <uri>http://www.haskins.yale.edu/staff/gafos_downloads/ArtA3DEMA.pdf</uri></mixed-citation></ref>
<ref id="B40"><label>40</label><mixed-citation publication-type="journal"><string-name><surname>Geng</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Turk</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Scobbie</surname>, <given-names>J. M.</given-names></string-name>, &#8230;, &amp; <string-name><surname>Wiegand</surname>, <given-names>R.</given-names></string-name> (<year>2013</year>). <article-title>Recording speech articulation in dialogue: Evaluating a synchronized double electromagnetic articulography setup</article-title>. <source>Journal of Phonetics</source>, <volume>41</volume>(<issue>6</issue>), <fpage>421</fpage>&#8211;<lpage>431</lpage>. DOI: <pub-id pub-id-type="doi">10.1016/j.wocn.2013.07.002</pub-id></mixed-citation></ref>
<ref id="B41"><label>41</label><mixed-citation publication-type="book"><string-name><surname>Gibbon</surname>, <given-names>F.</given-names></string-name> (<year>2008</year>). <article-title>Instrumental analysis of articulation in speech impairment</article-title>. In <string-name><given-names>M. J.</given-names> <surname>Ball</surname></string-name>, <string-name><given-names>M. R.</given-names> <surname>Perkins</surname></string-name>, <string-name><given-names>N.</given-names> <surname>M&#252;ller</surname></string-name>, &amp; <string-name><given-names>S.</given-names> <surname>Howard</surname></string-name> (Eds.). <source>Handbook of Clinical Phonetics and Linguistics</source> (pp. <fpage>311</fpage>&#8211;<lpage>331</lpage>). DOI: <pub-id pub-id-type="doi">10.1002/9781444301007</pub-id></mixed-citation></ref>
<ref id="B42"><label>42</label><mixed-citation publication-type="journal"><string-name><surname>Gilbert</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Olsen</surname>, <given-names>K. N.</given-names></string-name>, <string-name><surname>Leung</surname>, <given-names>Y.</given-names></string-name>, &amp; <string-name><surname>Stevens</surname>, <given-names>C. J.</given-names></string-name> (<year>2015</year>). <article-title>Transforming an embodied conversational agent into an efficient talking head: From keyframe-based animation to multimodal concatenation synthesis</article-title>. <source>Computational Cognitive Science</source>, <volume>1</volume>(<issue>1</issue>), <fpage>1</fpage>&#8211;<lpage>12</lpage>. DOI: <pub-id pub-id-type="doi">10.1186/s40469-015-0007-8</pub-id></mixed-citation></ref>
<ref id="B43"><label>43</label><mixed-citation publication-type="journal"><string-name><surname>Girin</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Hueber</surname>, <given-names>T.</given-names></string-name>, &amp; <string-name><surname>Alameda-Pineda</surname>, <given-names>X.</given-names></string-name> (<year>2017</year>). <article-title>Extending the Cascaded Gaussian Mixture Regression Framework for Cross-Speaker Acoustic-Articulatory Mapping</article-title>. <source>IEEE/ACM Transactions on Audio, Speech, and Language Processing</source>, <volume>25</volume>(<issue>3</issue>), <fpage>662</fpage>&#8211;<lpage>673</lpage>. DOI: <pub-id pub-id-type="doi">10.1109/TASLP.2017.2651398</pub-id></mixed-citation></ref>
<ref id="B44"><label>44</label><mixed-citation publication-type="journal"><string-name><surname>Gooz&#233;e</surname>, <given-names>J. V.</given-names></string-name>, <string-name><surname>Murdoch</surname>, <given-names>B. E.</given-names></string-name>, <string-name><surname>Theodoros</surname>, <given-names>D. G.</given-names></string-name>, &amp; <string-name><surname>Stokes</surname>, <given-names>P. D.</given-names></string-name> (<year>2000</year>). <article-title>Kinematic analysis of tongue movements in dysarthria following traumatic brain injury using electromagnetic articulography</article-title>. <source>Brain Injury</source>, <volume>14</volume>(<issue>2</issue>), <fpage>153</fpage>&#8211;<lpage>174</lpage>. DOI: <pub-id pub-id-type="doi">10.1080/026990500120817</pub-id></mixed-citation></ref>
<ref id="B45"><label>45</label><mixed-citation publication-type="journal"><string-name><surname>Gooz&#233;e</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Murdoch</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Ozanne</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Cheng</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Hill</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Gibbon</surname>, <given-names>F.</given-names></string-name> (<year>2007</year>). <article-title>Lingual kinematics and coordination in speech-disordered children exhibiting differentiated versus undifferentiated lingual gestures</article-title>. <source>International Journal of Language and Communication Disorders</source>, <volume>42</volume>(<issue>6</issue>), <fpage>703</fpage>&#8211;<lpage>724</lpage>. DOI: <pub-id pub-id-type="doi">10.1080/13682820601104960</pub-id></mixed-citation></ref>
<ref id="B46"><label>46</label><mixed-citation publication-type="journal"><string-name><surname>Harper</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Lee</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Goldstein</surname>, <given-names>L.</given-names></string-name>, &amp; <string-name><surname>Byrd</surname>, <given-names>D.</given-names></string-name> (<year>2018</year>). <article-title>Simultaneous electromagnetic articulography and electroglottography data acquisition of natural speech</article-title>. <source>JASA</source>, <volume>144</volume>(<issue>5</issue>), <fpage>EL380</fpage>&#8211;<lpage>EL385</lpage>. DOI: <pub-id pub-id-type="doi">10.1121/1.5066349</pub-id></mixed-citation></ref>
<ref id="B47"><label>47</label><mixed-citation publication-type="journal"><string-name><surname>Hartinger</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Mooshammer</surname>, <given-names>C.</given-names></string-name> (<year>2008</year>). <article-title>Articulatory variability in cluttering</article-title>. <source>Folia Phoniatrica et Logopaedica</source>, <volume>60</volume>, <fpage>64</fpage>&#8211;<lpage>72</lpage>. DOI: <pub-id pub-id-type="doi">10.1159/000114647</pub-id></mixed-citation></ref>
<ref id="B48"><label>48</label><mixed-citation publication-type="journal"><string-name><surname>Hasegawa-Johnson</surname>, <given-names>M.</given-names></string-name> (<year>1998</year>). <article-title>Electromagnetic exposure safety of the Carstens Articulograph AG100</article-title>. <source>JASA</source>, <volume>104</volume>, <fpage>2529</fpage>&#8211;<lpage>2532</lpage>. DOI: <pub-id pub-id-type="doi">10.1121/1.423775</pub-id></mixed-citation></ref>
<ref id="B49"><label>49</label><mixed-citation publication-type="journal"><string-name><surname>Henriques</surname>, <given-names>R. N.</given-names></string-name>, &amp; <string-name><surname>van Lieshout</surname>, <given-names>P.</given-names></string-name> (<year>2013</year>). <article-title>A Comparison of Methods for Decoupling Tongue and Lower Lip from Jaw Movements in 3D Articulography</article-title>. <source>JSLHR</source>, <volume>56</volume>(<issue>5</issue>), <fpage>1503</fpage>&#8211;<lpage>1516</lpage>. DOI: <pub-id pub-id-type="doi">10.1044/1092-4388(2013/12-0016)</pub-id></mixed-citation></ref>
<ref id="B50"><label>50</label><mixed-citation publication-type="journal"><string-name><surname>Hermes</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>M&#252;cke</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Thies</surname>, <given-names>T.</given-names></string-name>, &amp; <string-name><surname>Barbe</surname>, <given-names>M. T.</given-names></string-name> (<year>2019</year>). <article-title>Coordination patterns in Essential Tremor patients with Deep Brain Stimulation: Syllables with low and high complexity</article-title>. <source>Laboratory Phonology: Journal of the Association for Laboratory Phonology</source>, <volume>10</volume>(<issue>1</issue>). DOI: <pub-id pub-id-type="doi">10.5334/labphon.141</pub-id></mixed-citation></ref>
<ref id="B51"><label>51</label><mixed-citation publication-type="journal"><string-name><surname>Hiiemae</surname>, <given-names>K. M.</given-names></string-name>, &amp; <string-name><surname>Palmer</surname>, <given-names>J. B.</given-names></string-name> (<year>2003</year>). <article-title>Tongue movements in feeding and speech</article-title>. <source>Critical Reviews in Oral Biology &amp; Medicine</source>, <volume>14</volume>(<issue>6</issue>), <fpage>413</fpage>&#8211;<lpage>429</lpage>. DOI: <pub-id pub-id-type="doi">10.1177/154411130301400604</pub-id></mixed-citation></ref>
<ref id="B52"><label>52</label><mixed-citation publication-type="journal"><string-name><surname>Hirai</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Tanaka</surname>, <given-names>O.</given-names></string-name>, <string-name><surname>Koshino</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Takasaki</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Hashikawa</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Yajima</surname>, <given-names>T.</given-names></string-name>, &amp; <string-name><surname>Matai</surname>, <given-names>N.</given-names></string-name> (<year>1989</year>). <article-title>Aging and tongue skill. Ultrasound (motion-mode) evaluation. [in Japanese]</article-title> <source>Nihon Hotetsu Shika Gakkai Zasshi</source>, <volume>33</volume>, <fpage>457</fpage>&#8211;<lpage>465</lpage>. DOI: <pub-id pub-id-type="doi">10.2186/jjps.33.457</pub-id></mixed-citation></ref>
<ref id="B53"><label>53</label><mixed-citation publication-type="journal"><string-name><surname>Hoenig</surname>, <given-names>J. F.</given-names></string-name>, &amp; <string-name><surname>Schoener</surname>, <given-names>W. F.</given-names></string-name> (<year>1992</year>). <article-title>Radiological survey of the cervical spine in cleft lip and palate</article-title>. <source>Dentomaxillofacial Radiology</source>, <volume>21</volume>(<issue>1</issue>), <fpage>36</fpage>&#8211;<lpage>39</lpage>. DOI: <pub-id pub-id-type="doi">10.1259/dmfr.21.1.1397450</pub-id></mixed-citation></ref>
<ref id="B54"><label>54</label><mixed-citation publication-type="journal"><string-name><surname>H&#246;hne</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Sch&#246;nle</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Conrad</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Veldschoten</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Wenig</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Faghouri</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Sandner</surname>, <given-names>N.</given-names></string-name>, &amp; <string-name><surname>Hong</surname>, <given-names>G.</given-names></string-name> (<year>1987</year>). <article-title>Direct measurement of vocal tract shape &#8211; articulography</article-title>. In <source>Proceedings of the European Conference on Speech Technology</source>, <fpage>2230</fpage>&#8211;<lpage>2232</lpage>.</mixed-citation></ref>
<ref id="B55"><label>55</label><mixed-citation publication-type="journal"><string-name><surname>Hoke</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Tiede</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Grender</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Klukowska</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Peters</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name><surname>Carr</surname>, <given-names>G.</given-names></string-name> (<year>2019</year>). <article-title>Using Electromagnetic Articulography to Measure Denture Micromovement during Chewing with and without Denture Adhesive</article-title>. <source>Journal of Prosthodontics</source>, <volume>28</volume>(<issue>1</issue>), <fpage>e252</fpage>&#8211;<lpage>e258</lpage>. DOI: <pub-id pub-id-type="doi">10.1111/jopr.12679</pub-id></mixed-citation></ref>
<ref id="B56"><label>56</label><mixed-citation publication-type="webpage"><string-name><surname>Hoole</surname>, <given-names>P.</given-names></string-name> (<year>2012</year>). <source>Phil Hoole&#8217;s matlab software for EMA processing</source>. Available from: <uri>https://www.phonetik.uni-muenchen.de/~hoole/articmanual/index.html</uri></mixed-citation></ref>
<ref id="B57"><label>57</label><mixed-citation publication-type="journal"><string-name><surname>Hoole</surname>, <given-names>P.</given-names></string-name>, &amp; <string-name><surname>Gfoerer</surname>, <given-names>S.</given-names></string-name> (<year>1990</year>). <article-title>Electromagnetic articulography as a tool in the study of lingual coarticulation</article-title>. <source>JASA, S123</source>. DOI: <pub-id pub-id-type="doi">10.1121/1.2027902</pub-id></mixed-citation></ref>
<ref id="B58"><label>58</label><mixed-citation publication-type="confproc"><string-name><surname>Hoole</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Mooshammer</surname>, <given-names>C.</given-names></string-name>, &amp; <string-name><surname>Tillmann</surname>, <given-names>H. G.</given-names></string-name> (<year>1994</year>). <article-title>Kinematic analysis of vowel production in German</article-title>. In <conf-name>Proceedings of ICSLP94</conf-name>.</mixed-citation></ref>
<ref id="B59"><label>59</label><mixed-citation publication-type="book"><string-name><surname>Hoole</surname>, <given-names>P.</given-names></string-name>, &amp; <string-name><surname>Nguyen</surname>, <given-names>N.</given-names></string-name> (<year>1999</year>). <chapter-title>12 - Electromagnetic Articulography</chapter-title>. In <string-name><given-names>W. J.</given-names> <surname>Harcastle</surname></string-name> (Ed.), <source>Coarticulation: Theory, Data and Techniques</source>. <publisher-loc>Cambridge, UK</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>, pp. <fpage>260</fpage>&#8211;<lpage>269</lpage>. DOI: <pub-id pub-id-type="doi">10.1017/CBO9780511486395.013</pub-id></mixed-citation></ref>
<ref id="B60"><label>60</label><mixed-citation publication-type="book"><string-name><surname>Hoole</surname>, <given-names>P.</given-names></string-name>, &amp; <string-name><surname>Zierdt</surname>, <given-names>A.</given-names></string-name> (<year>2010</year>). <chapter-title>Five-dimensional articulography</chapter-title>. In <string-name><given-names>B.</given-names> <surname>Maassen</surname></string-name> &amp; <string-name><given-names>Pascal H. H. M.</given-names> <surname>van Lieshout</surname></string-name> (Eds.), <source>Speech Motor Control: New Developments in Basic and Applied Research</source> (pp. <fpage>331</fpage>&#8211;<lpage>349</lpage>). <publisher-loc>Oxford, UK</publisher-loc>: <publisher-name>Oxford University Press</publisher-name>. DOI: <pub-id pub-id-type="doi">10.1093/acprof:oso/9780199235797.003.0020</pub-id></mixed-citation></ref>
<ref id="B61"><label>61</label><mixed-citation publication-type="journal"><string-name><surname>Hopkin</surname>, <given-names>G. B.</given-names></string-name> (<year>1967</year>). <article-title>Neonatal and Adult Tongue Dimensions</article-title>. <source>The Angle Orthodontist</source>, <volume>37</volume>(<issue>2</issue>), <fpage>132</fpage>&#8211;<lpage>133</lpage>.</mixed-citation></ref>
<ref id="B62"><label>62</label><mixed-citation publication-type="journal"><string-name><surname>Horn</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>K&#252;hnast</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Axmann-Krcmar</surname>, <given-names>D.</given-names></string-name>, &amp; <string-name><surname>G&#246;z</surname>, <given-names>G.</given-names></string-name> (<year>2004</year>). <article-title>Influence of Orofacial Dysfunctions on Spatial and Temporal Dimensions of Swallowing Movements</article-title>. <source>Journal of Orofacial Orthopedics</source>, <volume>65</volume>(<issue>5</issue>), <fpage>376</fpage>&#8211;<lpage>388</lpage>. DOI: <pub-id pub-id-type="doi">10.1007/s00056-004-0315-1</pub-id></mixed-citation></ref>
<ref id="B63"><label>63</label><mixed-citation publication-type="confproc"><string-name><surname>Howson</surname>, <given-names>P.</given-names></string-name>, &amp; <string-name><surname>Kochetov</surname>, <given-names>A.</given-names></string-name> (<year>2015</year>). <article-title>An EMA examination of liquids in Czech</article-title>. In <conf-name>Proceedings of ICPhS 2015</conf-name>.</mixed-citation></ref>
<ref id="B64"><label>64</label><mixed-citation publication-type="journal"><string-name><surname>Howson</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Kochetov</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>van Lieshout</surname>, <given-names>P.</given-names></string-name> (<year>2015</year>). <article-title>Examination of the grooving patterns of the Czech trill-fricative</article-title>. <source>Journal of Phonetics</source>, <volume>49</volume>, <fpage>117</fpage>&#8211;<lpage>129</lpage>. DOI: <pub-id pub-id-type="doi">10.1016/j.wocn.2015.01.002</pub-id></mixed-citation></ref>
<ref id="B65"><label>65</label><mixed-citation publication-type="journal"><string-name><surname>Inoue</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Ono</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Masuda</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Morimoto</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Tanaka</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Yokota</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Inenaga</surname>, <given-names>K.</given-names></string-name> (<year>2006</year>). <article-title>Gender difference in unstimulated whole saliva flow rate and salivary gland sizes</article-title>. <source>Archives of Oral Biology</source>, <volume>51</volume>, <fpage>1055</fpage>&#8211;<lpage>1060</lpage>. DOI: <pub-id pub-id-type="doi">10.1016/j.archoralbio.2006.06.010</pub-id></mixed-citation></ref>
<ref id="B66"><label>66</label><mixed-citation publication-type="journal"><string-name><surname>Jaeger</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Hoole</surname>, <given-names>P.</given-names></string-name> (<year>2011</year>). <article-title>Articulatory factors influencing regressive place assimilation across word boundaries in German</article-title>. <source>Journal of Phonetics</source>, <volume>39</volume>, <fpage>413</fpage>&#8211;<lpage>428</lpage>. DOI: <pub-id pub-id-type="doi">10.1016/j.wocn.2011.03.002</pub-id></mixed-citation></ref>
<ref id="B67"><label>67</label><mixed-citation publication-type="confproc"><string-name><surname>Ji</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Berry</surname>, <given-names>J. J.</given-names></string-name>, &amp; <string-name><surname>Johnson</surname>, <given-names>M. T.</given-names></string-name> (<year>2013</year>). <article-title>Vowel production in Mandarin accented English and American English: Kinematic and acoustic data from the Marquette University Mandarin accented English corpus</article-title>. In <conf-name>Proceedings of Meetings on Acoustics</conf-name>, <volume>19</volume>(<issue>2013</issue>). DOI: <pub-id pub-id-type="doi">10.1121/1.4800290</pub-id></mixed-citation></ref>
<ref id="B68"><label>68</label><mixed-citation publication-type="confproc"><string-name><surname>Ji</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Berry</surname>, <given-names>J. J.</given-names></string-name>, &amp; <string-name><surname>Johnson</surname>, <given-names>M. T.</given-names></string-name> (<year>2014</year>). <article-title>The electromagnetic articulography Mandarin accented English (EMA-MAE) corpus of acoustic and 3D articulatory kinematic data</article-title>. <conf-name>ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing &#8211; Proceedings</conf-name>, <fpage>7719</fpage>&#8211;<lpage>7723</lpage>. DOI: <pub-id pub-id-type="doi">10.1109/ICASSP.2014.6855102</pub-id></mixed-citation></ref>
<ref id="B69"><label>69</label><mixed-citation publication-type="journal"><string-name><surname>Joglar</surname>, <given-names>J. A.</given-names></string-name>, <string-name><surname>Nguyen</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Garst</surname>, <given-names>D. M.</given-names></string-name>, &amp; <string-name><surname>Katz</surname>, <given-names>W. F.</given-names></string-name> (<year>2009</year>). <article-title>Safety of Electromagnetic Articulography in Patients with Pacemakers and Implantable Cardioverter-Defibrillators</article-title>. <source>JSLHR</source>, <volume>52</volume>(<issue>4</issue>), <fpage>1082</fpage>&#8211;<lpage>1087</lpage>. DOI: <pub-id pub-id-type="doi">10.1044/1092-4388(2009/08-0028)</pub-id></mixed-citation></ref>
<ref id="B70"><label>70</label><mixed-citation publication-type="journal"><string-name><surname>Katz</surname>, <given-names>W. F.</given-names></string-name>, &amp; <string-name><surname>Bharadwaj</surname>, <given-names>S.</given-names></string-name> (<year>2001</year>). <article-title>Coarticulation in fricative-vowel syllables produced by children and adults: A preliminary report</article-title>. <source>Clinical Linguistics and Phonetics</source>, <volume>15</volume>(<issue>1&#8211;2</issue>), <fpage>139</fpage>&#8211;<lpage>143</lpage>. DOI: <pub-id pub-id-type="doi">10.3109/02699200109167646</pub-id></mixed-citation></ref>
<ref id="B71"><label>71</label><mixed-citation publication-type="journal"><string-name><surname>Katz</surname>, <given-names>W. F.</given-names></string-name>, <string-name><surname>Bharadwaj</surname>, <given-names>S. V.</given-names></string-name>, <string-name><surname>Gabbert</surname>, <given-names>G. J.</given-names></string-name>, <string-name><surname>Loizou</surname>, <given-names>P. C.</given-names></string-name>, <string-name><surname>Tobey</surname>, <given-names>E. A.</given-names></string-name>, &amp; <string-name><surname>Poroy</surname>, <given-names>O.</given-names></string-name> (<year>2003</year>). <article-title>EMA compatibility of the Clarion 1.2 cochlear implant system</article-title>. <source>Acoustic Research Letters Online</source>, <volume>4</volume>, <fpage>100</fpage>&#8211;<lpage>105</lpage>. DOI: <pub-id pub-id-type="doi">10.1121/1.1591712</pub-id></mixed-citation></ref>
<ref id="B72"><label>72</label><mixed-citation publication-type="journal"><string-name><surname>Katz</surname>, <given-names>W. F.</given-names></string-name>, <string-name><surname>Carter</surname>, <given-names>G. C.</given-names></string-name>, &amp; <string-name><surname>Levitt</surname>, <given-names>J. S.</given-names></string-name> (<year>2007</year>). <article-title>Treating buccofacial apraxia using augmented kinematic feedback</article-title>. <source>Aphasiology</source>, <volume>21</volume>(<issue>12</issue>), <fpage>1230</fpage>&#8211;<lpage>1247</lpage>. DOI: <pub-id pub-id-type="doi">10.1080/02687030600591161</pub-id></mixed-citation></ref>
<ref id="B73"><label>73</label><mixed-citation publication-type="journal"><string-name><surname>Katz</surname>, <given-names>W. F.</given-names></string-name>, <string-name><surname>Mehta</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>Wood</surname>, <given-names>M.</given-names></string-name> (<year>2017</year>). <article-title>Using electromagnetic articulography with a tongue lateral sensor to discriminate manner of articulation</article-title>. <source>JASA</source>, <volume>141</volume>(<issue>1</issue>), <fpage>EL57</fpage>&#8211;<lpage>EL63</lpage>. DOI: <pub-id pub-id-type="doi">10.1121/1.4973907</pub-id></mixed-citation></ref>
<ref id="B74"><label>74</label><mixed-citation publication-type="journal"><string-name><surname>Katz</surname>, <given-names>W. F.</given-names></string-name>, <string-name><surname>Mehta</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>Wood</surname>, <given-names>M.</given-names></string-name> (<year>2018</year>). <article-title>Effects of syllable position and vowel context on Japanese /r/: Kinematic and perceptual data</article-title>. <source>Acoust. Sci. &amp; Tech</source>., <volume>39</volume>(<issue>2</issue>), <fpage>130</fpage>&#8211;<lpage>137</lpage>. DOI: <pub-id pub-id-type="doi">10.1250/ast.39.130</pub-id></mixed-citation></ref>
<ref id="B75"><label>75</label><mixed-citation publication-type="journal"><string-name><surname>Kearney</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Haworth</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Scholl</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Faloutsos</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Baljko</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Yunusova</surname>, <given-names>Y.</given-names></string-name> (<year>2018</year>). <article-title>Treating speech movement hypokinesia in Parkinson&#8217;s disease: Does movement size matter?</article-title> <source>JSLHR</source>, <volume>61</volume>(<issue>11</issue>), <fpage>2703</fpage>&#8211;<lpage>2721</lpage>. DOI: <pub-id pub-id-type="doi">10.1044/2018_JSLHR-S-17-0439</pub-id></mixed-citation></ref>
<ref id="B76"><label>76</label><mixed-citation publication-type="journal"><string-name><surname>Kim</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Lammert</surname>, <given-names>A. C.</given-names></string-name>, <string-name><surname>Ghosh</surname>, <given-names>P. K.</given-names></string-name>, &amp; <string-name><surname>Narayanan</surname>, <given-names>S. S.</given-names></string-name> (<year>2014</year>). <article-title>Co-registration of speech production datasets from electromagnetic articulography and real-time magnetic resonance imaging</article-title>. <source>JASA</source>, <volume>135</volume>(<issue>2</issue>), <fpage>e115</fpage>&#8211;<lpage>e121</lpage>. DOI: <pub-id pub-id-type="doi">10.1121/1.4862880</pub-id></mixed-citation></ref>
<ref id="B77"><label>77</label><mixed-citation publication-type="journal"><string-name><surname>King</surname>, <given-names>S. A.</given-names></string-name>, &amp; <string-name><surname>Parent</surname>, <given-names>R. E.</given-names></string-name> (<year>2001</year>). <article-title>A 3D parametric tongue model for animated speech</article-title>. <source>J. Visual. Comput. Animat</source>., <volume>12</volume>, <fpage>112</fpage>&#8211;<lpage>115</lpage>. DOI: <pub-id pub-id-type="doi">10.1002/vis.249</pub-id></mixed-citation></ref>
<ref id="B78"><label>78</label><mixed-citation publication-type="journal"><string-name><surname>Kiritani</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Itoh</surname>, <given-names>K.</given-names></string-name>, &amp; <string-name><surname>Fujimura</surname>, <given-names>O.</given-names></string-name> (<year>1975</year>). <article-title>Tongue-pellet tracking by a computer-controlled x-ray microbeam system</article-title>. <source>JASA</source>, <volume>57</volume>(<issue>6</issue>), <fpage>1516</fpage>&#8211;<lpage>1520</lpage>. DOI: <pub-id pub-id-type="doi">10.1121/1.380593</pub-id></mixed-citation></ref>
<ref id="B79"><label>79</label><mixed-citation publication-type="journal"><string-name><surname>Kochetov</surname>, <given-names>A.</given-names></string-name> (<year>2020</year>). <article-title>Research methods in articulatory phonetics I: Introduction and studying oral gestures</article-title>. <source>Language and Linguistics Compass</source>, <volume>2020</volume>, <elocation-id>e12368</elocation-id>. DOI: <pub-id pub-id-type="doi">10.1111/lnc3.12368</pub-id></mixed-citation></ref>
<ref id="B80"><label>80</label><mixed-citation publication-type="thesis"><string-name><surname>Kolb</surname>, <given-names>A.</given-names></string-name> (<year>2015</year>). <source>Software Tools and Analysis Methods for the Use of Electromagnetic Articulography Data in Speech Research</source> (Master thesis, <publisher-loc>Marquette University, Milwaukee, Wisconsin</publisher-loc>). <publisher-name>Marquette University e-publications</publisher-name>. <uri>https://epublications.marquette.edu/theses_open/291/</uri></mixed-citation></ref>
<ref id="B81"><label>81</label><mixed-citation publication-type="journal"><string-name><surname>Krivokapi&#263;</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Tiede</surname>, <given-names>M. K.</given-names></string-name>, &amp; <string-name><surname>Tyrone</surname>, <given-names>M. E.</given-names></string-name> (<year>2017</year>). <article-title>A Kinematic Study of Prosodic Structure in Articulatory and Manual Gestures: Results from a Novel Method of Data Collection</article-title>. <source>Laboratory Phonology: Journal of the Association for Laboratory Phonology</source>, <volume>8</volume>(<issue>1</issue>), <fpage>1</fpage>&#8211;<lpage>26</lpage>. DOI: <pub-id pub-id-type="doi">10.5334/labphon.75</pub-id></mixed-citation></ref>
<ref id="B82"><label>82</label><mixed-citation publication-type="journal"><string-name><surname>Kr&#246;ger</surname>, <given-names>B. J.</given-names></string-name>, <string-name><surname>Pouplier</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Tiede</surname>, <given-names>M. K.</given-names></string-name> (<year>2000</year>). <article-title>An evaluation of the Aurora system as a flesh-point tracking tool for speech production research</article-title>. <source>JSLHR</source>, <volume>51</volume>(<issue>4</issue>), <fpage>914</fpage>&#8211;<lpage>921</lpage>. DOI: <pub-id pub-id-type="doi">10.1044/1092-4388(2008/067)</pub-id></mixed-citation></ref>
<ref id="B83"><label>83</label><mixed-citation publication-type="journal"><string-name><surname>Kroos</surname>, <given-names>C.</given-names></string-name> (<year>2012</year>). <article-title>Evaluation of the measurement precision in three-dimensional Electromagnetic Articulography (Carstens AG500)</article-title>. <source>Journal of Phonetics</source>, <volume>40</volume>, <fpage>453</fpage>&#8211;<lpage>465</lpage>. DOI: <pub-id pub-id-type="doi">10.1016/j.wocn.2012.03.002</pub-id></mixed-citation></ref>
<ref id="B84"><label>84</label><mixed-citation publication-type="confproc"><string-name><surname>Kroos</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Bundgaard-Nielsen</surname>, <given-names>R. L.</given-names></string-name>, &amp; <string-name><surname>Best</surname>, <given-names>C. T.</given-names></string-name> (<year>2012</year>). <article-title>Exploring nonlinear relationships between speech face motion and tongue movements using Mutual information</article-title>. In <conf-name>International Speech Production Seminar 2014</conf-name>, <conf-loc>K&#246;ln, Germany</conf-loc>, 2014, pp. <fpage>237</fpage>&#8211;<lpage>240</lpage>.</mixed-citation></ref>
<ref id="B85"><label>85</label><mixed-citation publication-type="journal"><string-name><surname>K&#252;hnert</surname>, <given-names>B.</given-names></string-name>, &amp; <string-name><surname>Hoole</surname>, <given-names>P.</given-names></string-name> (<year>2004</year>). <article-title>Speaker-specific kinematic properties of alveolar reductions in English and German</article-title>. <source>Clinical Linguistics &amp; Phonetics</source>, <volume>18</volume>(<issue>6</issue>), <fpage>559</fpage>&#8211;<lpage>575</lpage>. DOI: <pub-id pub-id-type="doi">10.1080/02699200420002268853</pub-id></mixed-citation></ref>
<ref id="B86"><label>86</label><mixed-citation publication-type="journal"><string-name><surname>Kullaa-Mikkonen</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Mikkonen</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Kotilainen</surname>, <given-names>R.</given-names></string-name> (<year>1982</year>). <article-title>Prevalence of different morphologic forms of the human tongue in young Finns</article-title>. <source>Oral Surgery, Oral Medicine, Oral Pathology</source>, <volume>53</volume>(<issue>2</issue>), <fpage>152</fpage>&#8211;<lpage>156</lpage>. DOI: <pub-id pub-id-type="doi">10.1016/0030-4220(82)90281-X</pub-id></mixed-citation></ref>
<ref id="B87"><label>87</label><mixed-citation publication-type="book"><string-name><surname>Ladefoged</surname>, <given-names>P.</given-names></string-name>, &amp; <string-name><surname>Maddieson</surname>, <given-names>I.</given-names></string-name> (<year>1996</year>). <source>The Sounds of the World&#8217;s Languages</source>. <publisher-loc>Oxford</publisher-loc>: <publisher-name>Blackwell</publisher-name>.</mixed-citation></ref>
<ref id="B88"><label>88</label><mixed-citation publication-type="journal"><string-name><surname>Lammert</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Proctor</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Narayanan</surname>, <given-names>S.</given-names></string-name> (<year>2018</year>). <article-title>Morphological variation in the adult hard palate and posterior pharyngeal wall</article-title>. <source>JSLHR</source>, <volume>56</volume>, <fpage>521</fpage>&#8211;<lpage>530</lpage>. DOI: <pub-id pub-id-type="doi">10.1044/1092-4388(2012/12-0059)</pub-id></mixed-citation></ref>
<ref id="B89"><label>89</label><mixed-citation publication-type="journal"><string-name><surname>Lee</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name><surname>Bell</surname>, <given-names>M.</given-names></string-name> (<year>2018</year>). <article-title>Articulatory range of movement in individuals with dysarthria secondary to amyotrophic lateral sclerosis</article-title>. <source>American Journal of Speech-Language Pathology</source>, <volume>27</volume>(<issue>3</issue>), <fpage>996</fpage>&#8211;<lpage>1009</lpage>. DOI: <pub-id pub-id-type="doi">10.1044/2018_AJSLP-17-0064</pub-id></mixed-citation></ref>
<ref id="B90"><label>90</label><mixed-citation publication-type="journal"><string-name><surname>Lobsang</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Lu</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Honda</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Wei</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Guan</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Fang</surname>, <given-names>Q.</given-names></string-name>, &amp; <string-name><surname>Dang</surname>, <given-names>J.</given-names></string-name> (<year>2016</year>). <article-title>Tibetan vowel analysis with a multi-modal Mandarin-Tibetan speech corpus</article-title>. <source>APSIPA 2016</source>. DOI: <pub-id pub-id-type="doi">10.1109/APSIPA.2016.7820776</pub-id></mixed-citation></ref>
<ref id="B91"><label>91</label><mixed-citation publication-type="journal"><string-name><surname>Maeda</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Berger</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Engwall</surname>, <given-names>O.</given-names></string-name>, <string-name><surname>Laprie</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Maragos</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Potard</surname>, <given-names>B.</given-names></string-name>, &amp; <string-name><surname>Schoentgen</surname>, <given-names>J.</given-names></string-name> (<year>2006</year>). <source>Acoustic-to-articulatoy inversion: Methods and Acquisition of articulatory data</source> (Report on Special Targeted Research Project).</mixed-citation></ref>
<ref id="B92"><label>92</label><mixed-citation publication-type="journal"><string-name><surname>Mahne</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>El-Haddad</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Alavi</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Houseni</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Moonis</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Mong</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Hernandez-Pampaloni</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Torigian</surname>, <given-names>D. A.</given-names></string-name> (<year>2007</year>). <article-title>Assessment of Age-Related Morphological and Functional Changes of Selected Structures of the Head and Neck by Computed Tomography, Magnetic Resonance Imaging, and Positron Emission Tomography</article-title>. <source>Seminars in Nuclear Medicine</source>, <volume>37</volume>(<issue>2</issue>), <fpage>88</fpage>&#8211;<lpage>102</lpage>. DOI: <pub-id pub-id-type="doi">10.1053/j.semnuclmed.2006.10.003</pub-id></mixed-citation></ref>
<ref id="B93"><label>93</label><mixed-citation publication-type="journal"><string-name><surname>Maurer</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Gr&#246;ne</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Landis</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Hoch</surname>, <given-names>G.</given-names></string-name>, &amp; <string-name><surname>Sch&#246;nle</surname>, <given-names>P. W.</given-names></string-name> (<year>1993</year>). <article-title>Re-examination of the relation between the vocal tract and the vowel sound with electromagnetic articulography (EMA) in vocalizations</article-title>. <source>Clinical Linguistics and Phonetics</source>, <volume>7</volume>(<issue>2</issue>), <fpage>129</fpage>&#8211;<lpage>143</lpage>. DOI: <pub-id pub-id-type="doi">10.3109/02699209308985550</pub-id></mixed-citation></ref>
<ref id="B94"><label>94</label><mixed-citation publication-type="journal"><string-name><surname>McClean</surname>, <given-names>M. D.</given-names></string-name>, <string-name><surname>Tasko</surname>, <given-names>S. M.</given-names></string-name>, &amp; <string-name><surname>Runyan</surname>, <given-names>C. M.</given-names></string-name> (<year>2004</year>). <article-title>Orofacial movements associated with fluent speech in persons who stutter</article-title>. <source>JSLHR</source>, <volume>47</volume>(<issue>2</issue>), <fpage>294</fpage>&#8211;<lpage>303</lpage>. DOI: <pub-id pub-id-type="doi">10.1044/1092-4388(2004/024)</pub-id></mixed-citation></ref>
<ref id="B95"><label>95</label><mixed-citation publication-type="journal"><string-name><surname>McNeil</surname>, <given-names>M. R.</given-names></string-name>, <string-name><surname>Katz</surname>, <given-names>W. F.</given-names></string-name>, <string-name><surname>Fossett</surname>, <given-names>T. R. D.</given-names></string-name>, <string-name><surname>Garst</surname>, <given-names>D. M.</given-names></string-name>, <string-name><surname>Szuminsky</surname>, <given-names>N. J.</given-names></string-name>, <string-name><surname>Carter</surname>, <given-names>G.</given-names></string-name>, &amp; <string-name><surname>Lim</surname>, <given-names>K. Y.</given-names></string-name> (<year>2010</year>). <article-title>Effects of Online Augmented Kinematic and Perceptual Feedback on Treatment of Speech Movements in Apraxia of Speech</article-title>. <source>Folia Phoniatrica et Logopaedica</source>, <volume>62</volume>(<issue>3</issue>), <fpage>127</fpage>&#8211;<lpage>133</lpage>. DOI: <pub-id pub-id-type="doi">10.1159/000287211</pub-id></mixed-citation></ref>
<ref id="B96"><label>96</label><mixed-citation publication-type="journal"><string-name><surname>Meenakshi</surname>, <given-names>N.</given-names></string-name>, &amp; <string-name><surname>Ghosh</surname>, <given-names>P. K.</given-names></string-name> (<year>2018</year>). <article-title>Reconstruction of articulatory movements during neutral speech from those during whispered speech</article-title>. <source>JASA</source>, <volume>143</volume>(<issue>6</issue>), <fpage>3352</fpage>&#8211;<lpage>3364</lpage>. DOI: <pub-id pub-id-type="doi">10.1121/1.5039750</pub-id></mixed-citation></ref>
<ref id="B97"><label>97</label><mixed-citation publication-type="confproc"><string-name><surname>Meenakshi</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Yarra</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Yamini</surname>, <given-names>B. K.</given-names></string-name>, &amp; <string-name><surname>Ghosh</surname>, <given-names>P. K.</given-names></string-name> (<year>2014</year>). <article-title>Comparison of speech quality with and without sensors in electromagnetic articulograph AG 501 recording</article-title>. In <conf-name>Proceedings of INTERSPEECH 2014</conf-name> (pp. <fpage>935</fpage>&#8211;<lpage>939</lpage>).</mixed-citation></ref>
<ref id="B98"><label>98</label><mixed-citation publication-type="journal"><string-name><surname>Mefferd</surname>, <given-names>A. S.</given-names></string-name> (<year>2017</year>). <article-title>Tongue- and jaw-specific contributions to acoustic vowel contrast changes in the diphthong/ai/ in response to slow, loud, and clear speech</article-title>. <source>JSLHR</source>, <volume>60</volume>(<issue>11</issue>), <fpage>3144</fpage>&#8211;<lpage>3158</lpage>. DOI: <pub-id pub-id-type="doi">10.1044/2017_JSLHR-S-17-0114</pub-id></mixed-citation></ref>
<ref id="B99"><label>99</label><mixed-citation publication-type="journal"><string-name><surname>Mefferd</surname>, <given-names>A. S.</given-names></string-name> (<year>2019</year>). <article-title>Effects of speaking rate, loudness, and clarity modifications on kinematic endpoint variability</article-title>. <source>Clinical Linguistics &amp; Phonetics</source>, <volume>33</volume>(<issue>6</issue>), <fpage>570</fpage>&#8211;<lpage>585</lpage>. DOI: <pub-id pub-id-type="doi">10.1080/02699206.2019.1566401</pub-id></mixed-citation></ref>
<ref id="B100"><label>100</label><mixed-citation publication-type="journal"><string-name><surname>Mefferd</surname>, <given-names>A. S.</given-names></string-name>, &amp; <string-name><surname>Dietrich</surname>, <given-names>M. S.</given-names></string-name> (<year>2019</year>). <article-title>Tongue- and Jaw-Specific Articulatory Underpinnings of Reduced and Enhanced Acoustic Vowel Contrast in Talkers with Parkinson&#8217;s Disease</article-title>. <source>JSLHR</source>, <volume>62</volume>(<issue>7</issue>), <fpage>2118</fpage>&#8211;<lpage>2132</lpage>. DOI: <pub-id pub-id-type="doi">10.1044/2019_JSLHR-S-MSC18-18-0192</pub-id></mixed-citation></ref>
<ref id="B101"><label>101</label><mixed-citation publication-type="journal"><string-name><surname>Mennen</surname>, <given-names>I.</given-names></string-name>, <string-name><surname>Scobbie</surname>, <given-names>J. M.</given-names></string-name>, <string-name><surname>de Leeuw</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Schaeffler</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>Schaeffler</surname>, <given-names>F.</given-names></string-name> (<year>2010</year>). <article-title>Measuring language-specific phonetic settings</article-title>. <source>Second Language Research</source>, <volume>26</volume>(<issue>1</issue>), <fpage>13</fpage>&#8211;<lpage>41</lpage>. DOI: <pub-id pub-id-type="doi">10.1177/0267658309337617</pub-id></mixed-citation></ref>
<ref id="B102"><label>102</label><mixed-citation publication-type="journal"><string-name><surname>Mitra</surname>, <given-names>V.</given-names></string-name>, <string-name><surname>Sivaraman</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Nam</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Espy-Wilson</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Saltzman</surname>, <given-names>E.</given-names></string-name>, &amp; <string-name><surname>Tiede</surname>, <given-names>M.</given-names></string-name> (<year>2017</year>). <article-title>Hybrid convolutional neural networks for articulatory and acoustic information based speech recognition</article-title>. <source>Speech Communication</source>, <volume>89</volume>, <fpage>103</fpage>&#8211;<lpage>112</lpage>. DOI: <pub-id pub-id-type="doi">10.1016/j.specom.2017.03.003</pub-id></mixed-citation></ref>
<ref id="B103"><label>103</label><mixed-citation publication-type="journal"><string-name><surname>Moen</surname>, <given-names>I.</given-names></string-name>, <string-name><surname>Gram Simonsen</surname>, <given-names>H.</given-names></string-name>, &amp; <string-name><surname>Lindstad</surname>, <given-names>A. M.</given-names></string-name> (<year>2004</year>). <article-title>An electronic database of Norwegian speech sounds: Clinical aspects</article-title>. <source>Journal of Multilingual Communication Disorders</source>, <volume>2</volume>(<issue>1</issue>), <fpage>43</fpage>&#8211;<lpage>49</lpage>. DOI: <pub-id pub-id-type="doi">10.1080/14769670310001616624</pub-id></mixed-citation></ref>
<ref id="B104"><label>104</label><mixed-citation publication-type="journal"><string-name><surname>Mooshammer</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Hoole</surname>, <given-names>P.</given-names></string-name>, &amp; <string-name><surname>Geumann</surname>, <given-names>A.</given-names></string-name> (<year>2006</year>). <article-title>Interarticulator cohesion within coronal consonant production</article-title>. <source>JASA</source>, <volume>120</volume>(<issue>2</issue>), <fpage>1028</fpage>&#8211;<lpage>1039</lpage>. DOI: <pub-id pub-id-type="doi">10.1121/1.2208430</pub-id></mixed-citation></ref>
<ref id="B105"><label>105</label><mixed-citation publication-type="journal"><string-name><surname>Mooshammer</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Hoole</surname>, <given-names>P.</given-names></string-name>, &amp; <string-name><surname>Geumann</surname>, <given-names>A.</given-names></string-name> (<year>2007</year>). <article-title>Jaw and Order</article-title>. <source>Language and Speech</source>, <volume>50</volume>(<issue>2</issue>), <fpage>145</fpage>&#8211;<lpage>176</lpage>. DOI: <pub-id pub-id-type="doi">10.1177/00238309070500020101</pub-id></mixed-citation></ref>
<ref id="B106"><label>106</label><mixed-citation publication-type="journal"><string-name><surname>Mooshammer</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Tiede</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Shattuck-Hufnagel</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>Goldstein</surname>, <given-names>L.</given-names></string-name> (<year>2019</year>). <article-title>Towards the Quantification of Peggy Babcock: Speech Errors and Their Position within the Word</article-title>. <source>Phonetica</source>, <volume>76</volume>(<issue>5</issue>), <fpage>363</fpage>&#8211;<lpage>396</lpage>. DOI: <pub-id pub-id-type="doi">10.1159/000494140</pub-id></mixed-citation></ref>
<ref id="B107"><label>107</label><mixed-citation publication-type="journal"><string-name><surname>M&#252;cke</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Hermes</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Roettger</surname>, <given-names>T. B.</given-names></string-name>, <string-name><surname>Becker</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Niemann</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Gembek</surname>, <given-names>T. A.</given-names></string-name>, <string-name><surname>Timmermann</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Visser-Vandewalle</surname>, <given-names>V.</given-names></string-name>, <string-name><surname>Fink</surname>, <given-names>G. R.</given-names></string-name>, <string-name><surname>Grice</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Barbe</surname>, <given-names>M. T.</given-names></string-name> (<year>2018</year>). <article-title>The effects of Thalamic Deep Brain Stimulation on speech dynamics in patients with Essential Tremor: an articulographic study</article-title>. <source>PLoS One</source>, <volume>13</volume>(<issue>1</issue>). DOI: <pub-id pub-id-type="doi">10.1371/journal.pone.0191359</pub-id></mixed-citation></ref>
<ref id="B108"><label>108</label><mixed-citation publication-type="journal"><string-name><surname>Murdoch</surname>, <given-names>B. E.</given-names></string-name> (<year>2011</year>). <article-title>Physiological investigation of dysarthria: Recent advances</article-title>. <source>International Journal of Speech-Language Pathology</source>, <volume>13</volume>(<issue>1</issue>), <fpage>28</fpage>&#8211;<lpage>35</lpage>. DOI: <pub-id pub-id-type="doi">10.3109/17549507.2010.487919</pub-id></mixed-citation></ref>
<ref id="B109"><label>109</label><mixed-citation publication-type="journal"><string-name><surname>Narayanan</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Toutios</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Ramanarayanan</surname>, <given-names>V.</given-names></string-name>, &#8230;, &amp; <string-name><surname>Proctor</surname>, <given-names>M.</given-names></string-name> (<year>2014</year>). <article-title>Real-time magnetic resonance imaging and electromagnetic articulography database for speech production research (TC)</article-title>. <source>JASA</source>, <volume>136</volume>(<issue>3</issue>), <fpage>1307</fpage>&#8211;<lpage>1311</lpage>. DOI: <pub-id pub-id-type="doi">10.1121/1.4890284</pub-id></mixed-citation></ref>
<ref id="B110"><label>110</label><mixed-citation publication-type="journal"><string-name><surname>Navazesh</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Mulligan</surname>, <given-names>R. A.</given-names></string-name>, <string-name><surname>Kipnis</surname>, <given-names>V.</given-names></string-name>, <string-name><surname>Denny</surname>, <given-names>P. A.</given-names></string-name>, &amp; <string-name><surname>Denny</surname>, <given-names>P. C.</given-names></string-name> (<year>1992</year>). <article-title>Comparison of Whole Saliva Flow Rates and Mucin Concentrations in Healthy Caucasian Young and Aged Adults</article-title>. <source>Journal of Dental Research</source>, <volume>71</volume>(<issue>6</issue>), <fpage>1275</fpage>&#8211;<lpage>1278</lpage>. DOI: <pub-id pub-id-type="doi">10.1177/00220345920710060201</pub-id></mixed-citation></ref>
<ref id="B111"><label>111</label><mixed-citation publication-type="journal"><string-name><surname>Neufeld</surname>, <given-names>C.</given-names></string-name>, &amp; <string-name><surname>van Lieshout</surname>, <given-names>P.</given-names></string-name> (<year>2014</year>). <article-title>Tongue kinematics in palate relative coordinate spaces for electromagnetic articulography</article-title>. <source>JASA</source>, <volume>135</volume>, <fpage>352</fpage>&#8211;<lpage>361</lpage>. DOI: <pub-id pub-id-type="doi">10.1121/1.4836515</pub-id></mixed-citation></ref>
<ref id="B112"><label>112</label><mixed-citation publication-type="journal"><string-name><surname>Nijland</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Maassen</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Hulstijn</surname>, <given-names>W.</given-names></string-name>, &amp; <string-name><surname>Peters</surname>, <given-names>H.</given-names></string-name> (<year>2004</year>). <article-title>Speech motor coordination in Dutch-speaking children with DAS studied with EMMA</article-title>. <source>Journal of Multilingual Communication Disorders</source>, <volume>2</volume>(<issue>1</issue>), <fpage>50</fpage>&#8211;<lpage>60</lpage>. DOI: <pub-id pub-id-type="doi">10.1080/1476967031000091015</pub-id></mixed-citation></ref>
<ref id="B113"><label>113</label><mixed-citation publication-type="webpage"><collab>Northern Digital Inc</collab>. (<year>2009</year>, rev. 2016). <source>Wave User Guide</source>. Retrieved from <uri>http://support.ndigital.com</uri></mixed-citation></ref>
<ref id="B114"><label>114</label><mixed-citation publication-type="webpage"><collab>Northern Digital Inc</collab>. (<year>2019</year>). <source>Vox-EMA System User Guide</source>. Retrieved from <uri>http://support.ndigital.com</uri></mixed-citation></ref>
<ref id="B115"><label>115</label><mixed-citation publication-type="webpage"><collab>Northern Digital Inc</collab>. (<year>2020</year>, <month>June</month>). <source>NDI Company Update &#8211; June 2020</source>. Retrieved from <uri>https://www.ndigital.com/</uri></mixed-citation></ref>
<ref id="B116"><label>116</label><mixed-citation publication-type="journal"><string-name><surname>Okadome</surname>, <given-names>T.</given-names></string-name>, &amp; <string-name><surname>Honda</surname>, <given-names>M.</given-names></string-name> (<year>2001</year>). <article-title>Generation of articulatory movements by using a kinematic triphone model</article-title>. <source>JASA</source>, <volume>110</volume>(<issue>1</issue>), <fpage>453</fpage>&#8211;<lpage>463</lpage>. DOI: <pub-id pub-id-type="doi">10.1121/1.1377633</pub-id></mixed-citation></ref>
<ref id="B117"><label>117</label><mixed-citation publication-type="journal"><string-name><surname>Oliver</surname>, <given-names>R. G.</given-names></string-name>, &amp; <string-name><surname>Evans</surname>, <given-names>S. P.</given-names></string-name> (<year>1986</year>). <article-title>Tongue size, oral cavity size and speech</article-title>. <source>The Angle Orthodontist</source>, <volume>56</volume>, <fpage>234</fpage>&#8211;<lpage>243</lpage>.</mixed-citation></ref>
<ref id="B118"><label>118</label><mixed-citation publication-type="journal"><string-name><surname>Patem</surname>, <given-names>A. K.</given-names></string-name>, <string-name><surname>Illa</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Afshan</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Ghosh</surname>, <given-names>P. K.</given-names></string-name> (<year>2018</year>). <article-title>Optimal sensor placement in electromagnetic articulography recording for speech production study</article-title>. <source>Computer Speech &amp; Language</source>, <volume>47</volume>, <fpage>157</fpage>&#8211;<lpage>174</lpage>. DOI: <pub-id pub-id-type="doi">10.1016/j.csl.2017.07.008</pub-id></mixed-citation></ref>
<ref id="B119"><label>119</label><mixed-citation publication-type="journal"><string-name><surname>Perkell</surname>, <given-names>J. S.</given-names></string-name>, <string-name><surname>Cohen</surname>, <given-names>M. H.</given-names></string-name>, <string-name><surname>Svirsky</surname>, <given-names>M. A.</given-names></string-name>, <string-name><surname>Matthies</surname>, <given-names>M. L.</given-names></string-name>, <string-name><surname>Garabieta</surname>, <given-names>I.</given-names></string-name>, &amp; <string-name><surname>Jackson</surname>, <given-names>M. T. T.</given-names></string-name> (<year>1992</year>). <article-title>Electromagnetic midsagittal articulometer systems for transducing speech articulatory movements</article-title>. <source>JASA</source>, <volume>92</volume>(<issue>6</issue>), <fpage>3078</fpage>&#8211;<lpage>3096</lpage>. DOI: <pub-id pub-id-type="doi">10.1121/1.404204</pub-id></mixed-citation></ref>
<ref id="B120"><label>120</label><mixed-citation publication-type="journal"><string-name><surname>Peyron</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Mioche</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Renon</surname>, <given-names>P.</given-names></string-name>, &amp; <string-name><surname>Abouelkaram</surname>, <given-names>S.</given-names></string-name> (<year>1996</year>). <article-title>Masticatory jaw movement recordings: A new method to investigate food texture</article-title>. <source>Food Quality and Preference</source>, <volume>7</volume>(<issue>3&#8211;4</issue>), <fpage>229</fpage>&#8211;<lpage>237</lpage>. DOI: <pub-id pub-id-type="doi">10.1016/S0950-3293(96)00014-6</pub-id></mixed-citation></ref>
<ref id="B121"><label>121</label><mixed-citation publication-type="journal"><string-name><surname>Rebernik</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Jacobi</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Tiede</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Wieling</surname>, <given-names>M.</given-names></string-name> (in revision). <article-title>Accuracy assessment of two electromagnetic articulographs: NDI Wave and NDI Vox</article-title>.</mixed-citation></ref>
<ref id="B122"><label>122</label><mixed-citation publication-type="journal"><string-name><surname>Reddihough</surname>, <given-names>D. S.</given-names></string-name>, &amp; <string-name><surname>Johnson</surname>, <given-names>H.</given-names></string-name> (<year>1999</year>). <article-title>Assessment and Management of Saliva Control Problems in Children and Adults with Neurological Impairment</article-title>. <source>Journal of Developmental and Physical Disabilities</source>, <volume>11</volume>, <fpage>17</fpage>&#8211;<lpage>24</lpage>. DOI: <pub-id pub-id-type="doi">10.1023/A:1021804500520</pub-id></mixed-citation></ref>
<ref id="B123"><label>123</label><mixed-citation publication-type="confproc"><string-name><surname>Richmond</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Hoole</surname>, <given-names>P.</given-names></string-name>, &amp; <string-name><surname>King</surname>, <given-names>S.</given-names></string-name> (<year>2011</year>). <article-title>Announcing the Electromagnetic Articulography (Day 1) Subset of the mngu0 Articulatory Corpus</article-title>. In <conf-name>Proceedings of INTERSPEECH 2011</conf-name>, <conf-loc>Florence</conf-loc>, <fpage>1505</fpage>&#8211;<lpage>1508</lpage>.</mixed-citation></ref>
<ref id="B124"><label>124</label><mixed-citation publication-type="confproc"><string-name><surname>Rochon</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Pompino-Marschall</surname>, <given-names>B.</given-names></string-name> (<year>1999</year>). <article-title>The articulation of secondarily palatalized coronals in Polish</article-title>. In <conf-name>Proceedings of ICPhS 1999</conf-name>, <conf-loc>San Francisco</conf-loc>, <fpage>1897</fpage>&#8211;<lpage>1900</lpage>.</mixed-citation></ref>
<ref id="B125"><label>125</label><mixed-citation publication-type="journal"><string-name><surname>Rong</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Loucks</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Kim</surname>, <given-names>H.</given-names></string-name>, &amp; <string-name><surname>Hasegawa-Johnson</surname>, <given-names>M.</given-names></string-name> (<year>2012</year>). <article-title>Relationship between kinematics, F2 slope and speech intelligibility in dysarthria due to cerebral palsy</article-title>. <source>Clinical Linguistics &amp; Phonetics</source>, <volume>26</volume>(<issue>9</issue>). DOI: <pub-id pub-id-type="doi">10.3109/02699206.2012.706686</pub-id></mixed-citation></ref>
<ref id="B126"><label>126</label><mixed-citation publication-type="journal"><string-name><surname>Rudy</surname>, <given-names>K.</given-names></string-name>, &amp; <string-name><surname>Yunusova</surname>, <given-names>Y.</given-names></string-name> (<year>2013</year>). <article-title>The effect of anatomic factors on tongue position variability during consonants</article-title>. <source>JSLHR</source>, <volume>56</volume>(<issue>1</issue>), <fpage>137</fpage>&#8211;<lpage>149</lpage>. DOI: <pub-id pub-id-type="doi">10.1044/1092-4388(2012/11-0218)</pub-id></mixed-citation></ref>
<ref id="B127"><label>127</label><mixed-citation publication-type="journal"><string-name><surname>Rudzicz</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Namasivayam</surname>, <given-names>A. K.</given-names></string-name>, &amp; <string-name><surname>Wolff</surname>, <given-names>T.</given-names></string-name> (<year>2012</year>). <article-title>The TORGO database of acoustic and articulatory speech from speakers with dysarthria</article-title>. <source>Language Resources and Evaluation</source>, <volume>46</volume>(<issue>4</issue>), <fpage>523</fpage>&#8211;<lpage>541</lpage>. DOI: <pub-id pub-id-type="doi">10.1007/s10579-011-9145-0</pub-id></mixed-citation></ref>
<ref id="B128"><label>128</label><mixed-citation publication-type="journal"><string-name><surname>Savariaux</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Badin</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Samson</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Gerber</surname>, <given-names>S.</given-names></string-name> (<year>2017</year>). <article-title>A comparative study of the precision of Carstens and Northern Digital Instruments Electromagnetic Articulographs</article-title>. <source>JSLHR</source>, <volume>60</volume>, <fpage>322</fpage>&#8211;<lpage>340</lpage>. DOI: <pub-id pub-id-type="doi">10.1044/2016_JSLHR-S-15-0223</pub-id></mixed-citation></ref>
<ref id="B129"><label>129</label><mixed-citation publication-type="journal"><string-name><surname>Schneider</surname>, <given-names>G.</given-names></string-name>, &amp; <string-name><surname>Otto</surname>, <given-names>K.</given-names></string-name> (<year>2012</year>). <article-title>In vitro and in vivo studies on the use of Histoacryl&#174; as a soft tissue glue</article-title>. <source>European Archives of Oto-Rhino-Laryngology</source>, <volume>269</volume>, <fpage>1783</fpage>&#8211;<lpage>1789</lpage>. DOI: <pub-id pub-id-type="doi">10.1007/s00405-011-1868-4</pub-id></mixed-citation></ref>
<ref id="B130"><label>130</label><mixed-citation publication-type="journal"><string-name><surname>Sch&#246;nle</surname>, <given-names>P. W.</given-names></string-name>, <string-name><surname>Gr&#228;be</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Wenig</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>H&#246;hne</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Schrader</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name><surname>Conrad</surname>, <given-names>B.</given-names></string-name> (<year>1987</year>). <article-title>Electromagnetic articulography: Use of alternating magnetic fields for tracking movements of multiple points inside and outside the vocal tract</article-title>. <source>Brain and Language</source>, <volume>31</volume>, <fpage>26</fpage>&#8211;<lpage>35</lpage>. DOI: <pub-id pub-id-type="doi">10.1016/0093-934X(87)90058-7</pub-id></mixed-citation></ref>
<ref id="B131"><label>131</label><mixed-citation publication-type="journal"><string-name><surname>Sch&#246;nle</surname>, <given-names>P. W.</given-names></string-name>, <string-name><surname>M&#252;ller</surname>, <given-names>C.</given-names></string-name>, &amp; <string-name><surname>Wenig</surname>, <given-names>P.</given-names></string-name> (<year>1989</year>). <article-title>Real-time analysis of orofacial movements with the aid of electromagnetic articulography [in German]</article-title>. <source>Biomedizinische Technik</source>, <volume>34</volume>(<issue>6</issue>), <fpage>126</fpage>&#8211;<lpage>130</lpage>. DOI: <pub-id pub-id-type="doi">10.1515/bmte.1989.34.6.126</pub-id></mixed-citation></ref>
<ref id="B132"><label>132</label><mixed-citation publication-type="journal"><string-name><surname>Sch&#246;tz</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Frid</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name><surname>L&#246;fqvist</surname>, <given-names>A.</given-names></string-name> (<year>2013</year>). <article-title>Development of speech motor control: Lip movement variability</article-title>. <source>JASA</source>, <volume>133</volume>(<issue>6</issue>), <fpage>4210</fpage>&#8211;<lpage>4217</lpage>. DOI: <pub-id pub-id-type="doi">10.1121/1.4802649</pub-id></mixed-citation></ref>
<ref id="B133"><label>133</label><mixed-citation publication-type="book"><string-name><surname>Seikel</surname>, <given-names>J. A.</given-names></string-name>, <string-name><surname>Drumright</surname>, <given-names>D. G.</given-names></string-name>, &amp; <string-name><surname>Hudock</surname>, <given-names>D. J.</given-names></string-name> (<year>2020</year>). <source>Anatomy &amp; Physiology for Speech, Language, and Hearing</source>. <publisher-loc>San Diego</publisher-loc>: <publisher-name>Plural Publishing</publisher-name>.</mixed-citation></ref>
<ref id="B134"><label>134</label><mixed-citation publication-type="journal"><string-name><surname>Shellikeri</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Green</surname>, <given-names>J. R.</given-names></string-name>, <string-name><surname>Kulkarni</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Rong</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Martino</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Zinman</surname>, <given-names>L.</given-names></string-name>, &amp; <string-name><surname>Yunusova</surname>, <given-names>Y.</given-names></string-name> (<year>2016</year>). <article-title>Speech movement measures as markers of bulbar disease in Amyotrophic Lateral Sclerosis</article-title>. <source>JSLHR</source>, <volume>59</volume>(<issue>5</issue>), <fpage>887</fpage>&#8211;<lpage>899</lpage>. DOI: <pub-id pub-id-type="doi">10.1044/2016_JSLHR-S-15-0238</pub-id></mixed-citation></ref>
<ref id="B135"><label>135</label><mixed-citation publication-type="confproc"><string-name><surname>Shosted</surname>, <given-names>R. K.</given-names></string-name>, <string-name><surname>Carignan</surname>, <given-names>C.</given-names></string-name>, &amp; <string-name><surname>Rong</surname>, <given-names>P.</given-names></string-name> (<year>2011</year>). <article-title>Estimating vertical larynx position using EMA</article-title>. In <conf-name>Proceedings of ISSP 2011</conf-name>, <fpage>139</fpage>&#8211;<lpage>146</lpage>.</mixed-citation></ref>
<ref id="B136"><label>136</label><mixed-citation publication-type="journal"><string-name><surname>Sigona</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Stella</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Stella</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Bernardini</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Fivela</surname>, <given-names>B. G.</given-names></string-name>, &amp; <string-name><surname>Grimaldi</surname>, <given-names>M.</given-names></string-name> (<year>2018</year>). <article-title>Assessing the Position Tracking Reliability of Carstens&#8217; AG500 and AG501 Electromagnetic Articulographs during Constrained Movements and Speech Tasks</article-title>. <source>Speech Communication</source>, <volume>104</volume>, <fpage>73</fpage>&#8211;<lpage>88</lpage>. DOI: <pub-id pub-id-type="doi">10.1016/j.specom.2018.10.001</pub-id></mixed-citation></ref>
<ref id="B137"><label>137</label><mixed-citation publication-type="journal"><string-name><surname>Simonsen</surname>, <given-names>H. G.</given-names></string-name>, <string-name><surname>Moen</surname>, <given-names>I.</given-names></string-name>, &amp; <string-name><surname>Cowen</surname>, <given-names>S.</given-names></string-name> (<year>2008</year>). <article-title>Norwegian retroflex stops in a cross linguistic perspective</article-title>. <source>Journal of Phonetics</source>, <volume>36</volume>(<issue>2</issue>), <fpage>385</fpage>&#8211;<lpage>405</lpage>. DOI: <pub-id pub-id-type="doi">10.1016/j.wocn.2008.01.001</pub-id></mixed-citation></ref>
<ref id="B138"><label>138</label><mixed-citation publication-type="confproc"><string-name><surname>Sivaraman</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Espy-Wilson</surname>, <given-names>C.</given-names></string-name>, &amp; <string-name><surname>Wieling</surname>, <given-names>M.</given-names></string-name> (<year>2017</year>). <article-title>Analysis of Acoustic-to-Articulatory Speech Inversion Across Different Accents and Languages</article-title>. In <conf-name>Proceedings of INTERSPEECH 2017</conf-name>, <fpage>974</fpage>&#8211;<lpage>978</lpage>. DOI: <pub-id pub-id-type="doi">10.21437/Interspeech.2017-260</pub-id></mixed-citation></ref>
<ref id="B139"><label>139</label><mixed-citation publication-type="journal"><string-name><surname>Smith</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>Aasen</surname>, <given-names>R.</given-names></string-name> (<year>1992</year>). <article-title>The effects of electromagnetic fields on cardiac pacemakers</article-title>. <source>IEEE Transactions on Broadcasting</source>, <volume>38</volume>(<issue>2</issue>), <fpage>136</fpage>&#8211;<lpage>139</lpage>. DOI: <pub-id pub-id-type="doi">10.1109/11.142666</pub-id></mixed-citation></ref>
<ref id="B140"><label>140</label><mixed-citation publication-type="journal"><string-name><surname>Steele</surname>, <given-names>C. M.</given-names></string-name> (<year>2015</year>). <article-title>The blind scientists and the elephant of swallowing: A review of instrumental perspectives on swallowing physiology</article-title>. <source>Journal of Texture Studies</source>, <volume>46</volume>(<issue>3</issue>), <fpage>122</fpage>&#8211;<lpage>137</lpage>. DOI: <pub-id pub-id-type="doi">10.1111/jtxs.12101</pub-id></mixed-citation></ref>
<ref id="B141"><label>141</label><mixed-citation publication-type="journal"><string-name><surname>Steele</surname>, <given-names>C. M.</given-names></string-name>, &amp; <string-name><surname>van Lieshout</surname>, <given-names>P.</given-names></string-name> (<year>2009</year>). <article-title>Tongue Movements During Water Swallowing in Healthy Young and Older Adults</article-title>. <source>JSLHR</source>, <volume>52</volume>(<issue>5</issue>), <fpage>1255</fpage>&#8211;<lpage>1267</lpage>. DOI: <pub-id pub-id-type="doi">10.1044/1092-4388(2009/08-0131)</pub-id></mixed-citation></ref>
<ref id="B142"><label>142</label><mixed-citation publication-type="journal"><string-name><surname>Steele</surname>, <given-names>C. M.</given-names></string-name>, &amp; <string-name><surname>van Lieshout</surname>, <given-names>P. H. H. M.</given-names></string-name> (<year>2004</year>). <article-title>Use of Electromagnetic Midsagittal Articulography in the Study of Swallowing</article-title>. <source>JSLHR</source>, <volume>47</volume>(<issue>2</issue>), <fpage>342</fpage>&#8211;<lpage>352</lpage>. DOI: <pub-id pub-id-type="doi">10.1044/1092-4388(2004/027)</pub-id></mixed-citation></ref>
<ref id="B143"><label>143</label><mixed-citation publication-type="journal"><string-name><surname>Steele</surname>, <given-names>C. M.</given-names></string-name>, <string-name><surname>van Lieshout</surname>, <given-names>P. H. H. M.</given-names></string-name>, &amp; <string-name><surname>Pelletier</surname>, <given-names>C. A.</given-names></string-name> (<year>2012</year>). <article-title>The Influence of Stimulus Taste and Chemesthesis on Tongue Movement Timing in Swallowing</article-title>. <source>JSLHR</source>, <volume>55</volume>, <fpage>262</fpage>&#8211;<lpage>275</lpage>. DOI: <pub-id pub-id-type="doi">10.1044/1092-4388(2011/11-0012)</pub-id></mixed-citation></ref>
<ref id="B144"><label>144</label><mixed-citation publication-type="confproc"><string-name><surname>Stella</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Stella</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Sigona</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Bernardini</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Grimaldi</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Fivela</surname>, <given-names>B. G.</given-names></string-name> (<year>2013</year>). <article-title>Electromagnetic articulography with AG500 and AG501</article-title>. In <conf-name>Proceedings of INTERSPEECH 2013</conf-name>, <conf-loc>Lyon, France</conf-loc>, pp. <fpage>1316</fpage>&#8211;<lpage>1320</lpage>.</mixed-citation></ref>
<ref id="B145"><label>145</label><mixed-citation publication-type="journal"><string-name><surname>Stella</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Stella</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Grimaldi</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Fivela</surname>, <given-names>B. G.</given-names></string-name> (<year>2012</year>). <article-title>Numerical instabilities and three-dimensional electromagnetic articulography</article-title>. <source>JASA</source>, <volume>132</volume>(<issue>6</issue>), <fpage>3941</fpage>&#8211;<lpage>3949</lpage>. DOI: <pub-id pub-id-type="doi">10.1121/1.4763549</pub-id></mixed-citation></ref>
<ref id="B146"><label>146</label><mixed-citation publication-type="journal"><string-name><surname>Stone</surname>, <given-names>M.</given-names></string-name> (<year>2010</year>). <article-title>Laboratory Techniques for Investigating Speech Articulation</article-title>. In <string-name><given-names>W. J.</given-names> <surname>Hardcastle</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Laver</surname></string-name> &amp; <string-name><given-names>F. E.</given-names> <surname>Gibbon</surname></string-name> (Eds.), <source>The Handbook of Phonetic Sciences</source> (<edition>second</edition> edition, pp. <fpage>7</fpage>&#8211;<lpage>38</lpage>). DOI: <pub-id pub-id-type="doi">10.1002/9781444317251.ch1</pub-id></mixed-citation></ref>
<ref id="B147"><label>147</label><mixed-citation publication-type="journal"><string-name><surname>Stone</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Woo</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Lee</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Poole</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Seagraves</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Chung</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Kim</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Murano</surname>, <given-names>E. Z.</given-names></string-name>, <string-name><surname>Prince</surname>, <given-names>J. L.</given-names></string-name>, &amp; <string-name><surname>Blemker</surname>, <given-names>S. S.</given-names></string-name> (<year>2018</year>). <article-title>Structure and variability in human tongue muscle anatomy</article-title>. <source>Comput Methods Biomech Biomed Eng Imaging Vis</source>, <volume>6</volume>(<issue>5</issue>), <fpage>599</fpage>&#8211;<lpage>507</lpage>. DOI: <pub-id pub-id-type="doi">10.1080/21681163.2016.1162752</pub-id></mixed-citation></ref>
<ref id="B148"><label>148</label><mixed-citation publication-type="journal"><string-name><surname>Suemitsu</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Dang</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Ito</surname>, <given-names>T.</given-names></string-name>, &amp; <string-name><surname>Tiede</surname>, <given-names>M.</given-names></string-name> (<year>2015</year>). <article-title>A real-time articulatory visual feedback approach with target presentation for second language pronunciation learning</article-title>. <source>JASA</source>, <volume>138</volume>(<issue>4</issue>), <fpage>e382</fpage>&#8211;<lpage>e387</lpage>. DOI: <pub-id pub-id-type="doi">10.1121/1.4931827</pub-id></mixed-citation></ref>
<ref id="B149"><label>149</label><mixed-citation publication-type="journal"><string-name><surname>Tabain</surname>, <given-names>M.</given-names></string-name> (<year>2003</year>). <article-title>Effects of prosodic boundary on /aC/ sequences: Articulatory results</article-title>. <source>JASA</source>, <volume>113</volume>(<issue>5</issue>), <fpage>2834</fpage>&#8211;<lpage>2849</lpage>. DOI: <pub-id pub-id-type="doi">10.1121/1.1564013</pub-id></mixed-citation></ref>
<ref id="B150"><label>150</label><mixed-citation publication-type="journal"><string-name><surname>Tasko</surname>, <given-names>S. M.</given-names></string-name>, &amp; <string-name><surname>McClean</surname>, <given-names>M. D.</given-names></string-name> (<year>2004</year>). <article-title>Variations in Articulatory Movement with Changes in Speech Task</article-title>. <source>JSLHR</source>, <volume>47</volume>, <fpage>85</fpage>&#8211;<lpage>100</lpage>. DOI: <pub-id pub-id-type="doi">10.1044/1092-4388(2004/008)</pub-id></mixed-citation></ref>
<ref id="B151"><label>151</label><mixed-citation publication-type="journal"><string-name><surname>Thibeault</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>M&#233;nard</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Baum</surname>, <given-names>S. R.</given-names></string-name>, <string-name><surname>Richard</surname>, <given-names>G.</given-names></string-name>, &amp; <string-name><surname>McFarland</surname>, <given-names>D. H.</given-names></string-name> (<year>2011</year>). <article-title>Articulatory and acoustic adaptation to palatal perturbation</article-title>. <source>JASA</source>, <volume>192</volume>, <fpage>2112</fpage>&#8211;<lpage>2120</lpage>. DOI: <pub-id pub-id-type="doi">10.1121/1.3557030</pub-id></mixed-citation></ref>
<ref id="B152"><label>152</label><mixed-citation publication-type="journal"><string-name><surname>Thompson</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Kim</surname>, <given-names>Y.</given-names></string-name> (<year>2019</year>). <article-title>Relation of second formant trajectories to tongue kinematics</article-title>. <source>JASA</source>, <volume>145</volume>(<issue>4</issue>), <fpage>e323</fpage>&#8211;<lpage>e328</lpage>. DOI: <pub-id pub-id-type="doi">10.1121/1.5099163</pub-id></mixed-citation></ref>
<ref id="B153"><label>153</label><mixed-citation publication-type="book"><string-name><surname>Tiede</surname>, <given-names>M.</given-names></string-name> (<year>2005</year>). <source>MVIEW: Software for visualization and analysis of concurrently recorded movement data</source>. <publisher-loc>New Haven, CT</publisher-loc>: <publisher-name>Haskins Laboratories</publisher-name>.</mixed-citation></ref>
<ref id="B154"><label>154</label><mixed-citation publication-type="confproc"><string-name><surname>Tiede</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Bundgaard-Nielsen</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Kroos</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Gibert</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Attina</surname>, <given-names>V.</given-names></string-name>, <string-name><surname>Kasisopa</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Vatikiotis-Bateson</surname>, <given-names>E.</given-names></string-name>, &amp; <string-name><surname>Best</surname>, <given-names>C.</given-names></string-name> (<year>2010</year>). <article-title>Speech articulator movements recorded from facing talkers using two electromagnetic articulometer systems simultaneously</article-title>. In <conf-name>Proceedings of Meetings on Acoustics 11</conf-name>. DOI: <pub-id pub-id-type="doi">10.1121/1.3508805</pub-id></mixed-citation></ref>
<ref id="B155"><label>155</label><mixed-citation publication-type="confproc"><string-name><surname>Tiede</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Chen</surname>, <given-names>W.</given-names></string-name>, &amp; <string-name><surname>Whalen</surname>, <given-names>D. H.</given-names></string-name> (<year>2019</year>). <article-title>Taiwanese Mandarin sibilant contrasts investigated using coregistered EMA and ultrasound</article-title>. In <conf-name>Proceedings of ICPhS 2019</conf-name>.</mixed-citation></ref>
<ref id="B156"><label>156</label><mixed-citation publication-type="journal"><string-name><surname>Tiede</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Espy-Wilson</surname>, <given-names>C. Y.</given-names></string-name>, <string-name><surname>Goldenberg</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Mitra</surname>, <given-names>V.</given-names></string-name>, <string-name><surname>Nam</surname>, <given-names>H.</given-names></string-name>, &amp; <string-name><surname>Sivaraman</surname>, <given-names>G.</given-names></string-name> (<year>2017</year>). <article-title>Quantifying kinematic aspects of reduction in a contrasting rate production task</article-title>. <source>JASA</source>, <volume>141</volume>(<issue>5</issue>), <fpage>3580</fpage>&#8211;<lpage>3580</lpage>. DOI: <pub-id pub-id-type="doi">10.1121/1.4987629</pub-id></mixed-citation></ref>
<ref id="B157"><label>157</label><mixed-citation publication-type="journal"><string-name><surname>Tognola</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Parazzini</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Sibella</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Paglialonga</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Ravazzani</surname>, <given-names>P.</given-names></string-name> (<year>2007</year>). <article-title>Electromagnetic interference and cochlear implants</article-title>. <source>Annali dell&#8217;Istituto Superiore di Sanita</source>, <volume>43</volume>(<issue>3</issue>), <fpage>241</fpage>&#8211;<lpage>247</lpage>.</mixed-citation></ref>
<ref id="B158"><label>158</label><mixed-citation publication-type="confproc"><string-name><surname>Tong</surname>, <given-names>E.</given-names></string-name>, &amp; <string-name><surname>Ng</surname>, <given-names>M. L.</given-names></string-name> (<year>2011</year>). <article-title>Interaction between lexical tone and labial movement in Cantonese bilabial plosive production</article-title>. In <conf-name>Proceedings of ICPhS 2011</conf-name>.</mixed-citation></ref>
<ref id="B159"><label>159</label><mixed-citation publication-type="journal"><string-name><surname>Trudeau-Fisette</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Tiede</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>M&#233;nard</surname>, <given-names>L.</given-names></string-name> (<year>2017</year>). <article-title>Compensations to auditory feedback perturbations in congenitally blind and sighted speakers: Acoustic and articulatory data</article-title>. <source>PLoS One</source>, <volume>12</volume>(<issue>7</issue>), <elocation-id>e0180300</elocation-id>. DOI: <pub-id pub-id-type="doi">10.1371/journal.pone.0180300</pub-id></mixed-citation></ref>
<ref id="B160"><label>160</label><mixed-citation publication-type="book"><string-name><surname>van Lieshout</surname>, <given-names>P. H. H. M.</given-names></string-name> (<year>2007</year>). <chapter-title>The use of Electro-Magnetic Midsaggital Articulography in oral motor research</chapter-title>. In <string-name><given-names>E.</given-names> <surname>Padr&#243;s-Serrat</surname></string-name> (Ed.), <source>Bases Diagnosticas, Terapeuticas Y Posturales Del Funcionalismo Craneofacial</source> [Diagnostic, therapeutic and postural basis of craniofaxial functionalism] (pp. <fpage>1140</fpage>&#8211;<lpage>1156</lpage>). <publisher-name>Ripano Editorial Medica</publisher-name>.</mixed-citation></ref>
<ref id="B161"><label>161</label><mixed-citation publication-type="journal"><string-name><surname>van Lieshout</surname>, <given-names>P. H. H. M.</given-names></string-name>, <string-name><surname>Rutjes</surname>, <given-names>C. A. W.</given-names></string-name>, &amp; <string-name><surname>Spauwen</surname>, <given-names>P. H. M.</given-names></string-name> (<year>2002</year>). <article-title>The Dynamics of Interlip Coupling in Speakers with a Repaired Unilateral Cleft-Lip History</article-title>. <source>JSLHR</source>, <volume>45</volume>(<issue>1</issue>), <fpage>5</fpage>&#8211;<lpage>19</lpage>. DOI: <pub-id pub-id-type="doi">10.1044/1092-4388(2002/001)</pub-id></mixed-citation></ref>
<ref id="B162"><label>162</label><mixed-citation publication-type="journal"><string-name><surname>Vorperian</surname>, <given-names>H. K.</given-names></string-name>, <string-name><surname>Kent</surname>, <given-names>R. D.</given-names></string-name>, <string-name><surname>Lindstrom</surname>, <given-names>M. J.</given-names></string-name>, <string-name><surname>Kalina</surname>, <given-names>C. M.</given-names></string-name>, <string-name><surname>Gentry</surname>, <given-names>L. R.</given-names></string-name>, &amp; <string-name><surname>Yandell</surname>, <given-names>B. S.</given-names></string-name> (<year>2005</year>). <article-title>Development of vocal tract length during early childhood: A magnetic resonance imaging study</article-title>. <source>JASA</source>, <volume>117</volume>(<issue>1</issue>), <fpage>338</fpage>&#8211;<lpage>350</lpage>. DOI: <pub-id pub-id-type="doi">10.1121/1.1835958</pub-id></mixed-citation></ref>
<ref id="B163"><label>163</label><mixed-citation publication-type="confproc"><string-name><surname>Wang</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Samal</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Green</surname>, <given-names>J. R.</given-names></string-name>, &amp; <string-name><surname>Rudzicz</surname>, <given-names>F.</given-names></string-name> (<year>2012</year>). <article-title>Whole-word recognition from articulatory movements for silent speech interfaces</article-title>. In <conf-name>Proceedings of INTERSPEECH 2012</conf-name>. DOI: <pub-id pub-id-type="doi">10.1109/ICASSP.2012.6289039</pub-id></mixed-citation></ref>
<ref id="B164"><label>164</label><mixed-citation publication-type="journal"><string-name><surname>Wang</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Samal</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Rong</surname>, <given-names>P.</given-names></string-name>, &amp; <string-name><surname>Green</surname>, <given-names>J. R.</given-names></string-name> (<year>2016</year>). <article-title>An optimal set of flesh points on tongue and lips for speech-movement classification</article-title>. <source>JSLHR</source>, <volume>59</volume>, <fpage>15</fpage>&#8211;<lpage>26</lpage>. DOI: <pub-id pub-id-type="doi">10.1044/2015_JSLHR-S-14-0112</pub-id></mixed-citation></ref>
<ref id="B165"><label>165</label><mixed-citation publication-type="webpage"><string-name><surname>Weinberger</surname>, <given-names>S.</given-names></string-name> (<year>2015</year>). <source>Speech Accent Archive</source>. <publisher-name>George Mason University</publisher-name>. Retrieved from <uri>http://accent.gmu.edu</uri></mixed-citation></ref>
<ref id="B166"><label>166</label><mixed-citation publication-type="webpage"><string-name><surname>West</surname>, <given-names>P.</given-names></string-name> (<year>1999</year>). <article-title>The extent of coarticulation of English liquids: An acoustic and articulatory study</article-title>. <source>International Congress of Phonetics</source>, <fpage>1901</fpage>&#8211;<lpage>1904</lpage>. Retrieved from <uri>http://www.phon.ox.ac.uk/files/people/west/icphswest.pdf</uri></mixed-citation></ref>
<ref id="B167"><label>167</label><mixed-citation publication-type="journal"><string-name><surname>Westbury</surname>, <given-names>J. R.</given-names></string-name> (<year>1994</year>). <article-title>On coordinate systems and the representation of articulatory movements</article-title>. <source>JASA</source>, <volume>95</volume>, <fpage>2271</fpage>&#8211;<lpage>2273</lpage>. DOI: <pub-id pub-id-type="doi">10.1121/1.408638</pub-id></mixed-citation></ref>
<ref id="B168"><label>168</label><mixed-citation publication-type="journal"><string-name><surname>Whalen</surname>, <given-names>D. H.</given-names></string-name>, <string-name><surname>Iskarous</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Tiede</surname>, <given-names>M. K.</given-names></string-name>, <string-name><surname>Ostry</surname>, <given-names>D. J.</given-names></string-name>, <string-name><surname>Lehnert-LeHouillier</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Vatikiotis-Bateson</surname>, <given-names>E.</given-names></string-name>, &amp; <string-name><surname>Hailey</surname>, <given-names>D. S.</given-names></string-name> (<year>2005</year>). <article-title>The Haskins Optically Corrected Ultrasound System (HOCUS)</article-title>. <source>JSLHR</source>, <volume>48</volume>, <fpage>543</fpage>&#8211;<lpage>553</lpage>. DOI: <pub-id pub-id-type="doi">10.1044/1092-4388(2005/037)</pub-id></mixed-citation></ref>
<ref id="B169"><label>169</label><mixed-citation publication-type="book"><string-name><surname>Whelton</surname>, <given-names>H.</given-names></string-name> (<year>2012</year>). <chapter-title>Introduction: The anatomy and physiology of salivary glands</chapter-title>. In <string-name><given-names>M.</given-names> <surname>Edgar</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Dawes</surname></string-name> and <string-name><given-names>D.</given-names> <surname>O&#8217;Mullane</surname></string-name> (Eds.), <source>Saliva and oral health</source> (<edition>4th Ed.</edition>, pp. <fpage>1</fpage>&#8211;<lpage>17</lpage>). <publisher-loc>Comberton, UK</publisher-loc>: <publisher-name>Stephen Hancocks Limited</publisher-name>.</mixed-citation></ref>
<ref id="B170"><label>170</label><mixed-citation publication-type="journal"><string-name><surname>Wieling</surname>, <given-names>M.</given-names></string-name> (<year>2018</year>). <article-title>Analyzing dynamic phonetic data using generalized additive mixed modeling: A tutorial focusing on articulatory differences between L1 and L2 speakers of English</article-title>. <source>Journal of Phonetics</source>, <volume>70</volume>, <fpage>86</fpage>&#8211;<lpage>116</lpage>. DOI: <pub-id pub-id-type="doi">10.1016/j.wocn.2018.03.002</pub-id></mixed-citation></ref>
<ref id="B171"><label>171</label><mixed-citation publication-type="journal"><string-name><surname>Wieling</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Tomaschek</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Arnold</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Tiede</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Br&#246;ker</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Thiele</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Wood</surname>, <given-names>S. N.</given-names></string-name>, &amp; <string-name><surname>Baayen</surname>, <given-names>H.</given-names></string-name> (<year>2016</year>). <article-title>Investigating dialectal differences using articulography</article-title>. <source>Journal of Phonetics</source>, <volume>59</volume>, <fpage>122</fpage>&#8211;<lpage>143</lpage>. DOI: <pub-id pub-id-type="doi">10.1016/j.wocn.2016.09.004</pub-id></mixed-citation></ref>
<ref id="B172"><label>172</label><mixed-citation publication-type="confproc"><string-name><surname>Wieling</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Veenstra</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Adank</surname>, <given-names>P.</given-names></string-name>, &amp; <string-name><surname>Tiede</surname>, <given-names>M.</given-names></string-name> (<year>2017</year>). <article-title>Articulatory differences between L1 and L2 speakers of English</article-title>. In <conf-name>Proceedings of ISSP11</conf-name>.</mixed-citation></ref>
<ref id="B173"><label>173</label><mixed-citation publication-type="confproc"><string-name><surname>Wieling</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Veenstra</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Adank</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Weber</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Tiede</surname>, <given-names>M. K.</given-names></string-name> (<year>2015</year>). <article-title>Comparing L1 and L2 speakers using articulography</article-title>. In <conf-name>Proceedings of ICPhS 2015</conf-name>.</mixed-citation></ref>
<ref id="B174"><label>174</label><mixed-citation publication-type="confproc"><string-name><surname>Wrench</surname>, <given-names>A.</given-names></string-name> (<year>2000</year>). <article-title>A Multichannel Articulatory Database and its Application for Automatic Speech Recognition</article-title>. In <conf-name>Proceedings of 5th Seminar of Speech Production</conf-name>, <fpage>305</fpage>&#8211;<lpage>308</lpage>.</mixed-citation></ref>
<ref id="B175"><label>175</label><mixed-citation publication-type="journal"><string-name><surname>Yunusova</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Green</surname>, <given-names>J. R.</given-names></string-name>, &amp; <string-name><surname>Mefferd</surname>, <given-names>A.</given-names></string-name> (<year>2009</year>). <article-title>Accuracy Assessment for AG500, Electromagnetic Articulograph</article-title>. <source>JSLHR</source>, <volume>52</volume>(<issue>2</issue>), <fpage>547</fpage>&#8211;<lpage>555</lpage>. DOI: <pub-id pub-id-type="doi">10.1044/1092-4388(2008/07-0218)</pub-id></mixed-citation></ref>
<ref id="B176"><label>176</label><mixed-citation publication-type="journal"><string-name><surname>Yunusova</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Kearney</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Kulkarni</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Haworth</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Baljko</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Faloutsos</surname>, <given-names>P.</given-names></string-name> (<year>2017</year>). <article-title>Game-based augmented visual feedback for enlarging speech movements in Parkinson&#8217;s disease</article-title>. <source>JSLHR</source>, <volume>60</volume>(<issue>6S</issue>), <fpage>1818</fpage>&#8211;<lpage>1825</lpage>. DOI: <pub-id pub-id-type="doi">10.1044/2017_JSLHR-S-16-0233</pub-id></mixed-citation></ref>
<ref id="B177"><label>177</label><mixed-citation publication-type="journal"><string-name><surname>Yunusova</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Rosenthal</surname>, <given-names>J. S.</given-names></string-name>, <string-name><surname>Rudy</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Baljko</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Sakalogiannakis</surname>, <given-names>J.</given-names></string-name> (<year>2012</year>). <article-title>Positional targets for lingual consonants defined using electromagnetic articulography</article-title>. <source>JASA</source>, <volume>132</volume>(<issue>2</issue>), <fpage>1027</fpage>&#8211;<lpage>1038</lpage>. DOI: <pub-id pub-id-type="doi">10.1121/1.4733542</pub-id></mixed-citation></ref>
<ref id="B178"><label>178</label><mixed-citation publication-type="journal"><string-name><surname>Zhang</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Jones</surname>, <given-names>P. L.</given-names></string-name>, &amp; <string-name><surname>Jetley</surname>, <given-names>R.</given-names></string-name> (<year>2010</year>). <article-title>A hazard analysis for a generic insulin infusion pump</article-title>. <source>Journal of Diabetes Science and Technology</source>, <volume>4</volume>(<issue>2</issue>), <fpage>263</fpage>&#8211;<lpage>283</lpage>. DOI: <pub-id pub-id-type="doi">10.1177/193229681000400207</pub-id></mixed-citation></ref>
</ref-list>
</back>
</article>