<?xml version="1.0" encoding="iso-8859-1" standalone="no"?>
<!DOCTYPE GmsArticle SYSTEM "http://www.egms.de/dtd/2.0.34/GmsArticle.dtd">
<GmsArticle xmlns:xlink="http://www.w3.org/1999/xlink">
  <MetaData>
    <Identifier>zaud000042</Identifier>
    <IdentifierDoi>10.3205/zaud000042</IdentifierDoi>
    <IdentifierUrn>urn:nbn:de:0183-zaud0000421</IdentifierUrn>
    <ArticleType>Short Report</ArticleType>
    <TitleGroup>
      <Title language="en">Digital assistants for hearing aid wearers based on cloud-based artificial intelligence</Title>
      <TitleTranslated language="de">Digitale Assistenten f&#252;r H&#246;rger&#228;tetr&#228;ger auf Basis von Cloud-gest&#252;tzter K&#252;nstlicher Intelligenz</TitleTranslated>
    </TitleGroup>
    <CreatorList>
      <Creator>
        <PersonNames>
          <Lastname>Wolf</Lastname>
          <LastnameHeading>Wolf</LastnameHeading>
          <Firstname>Vera</Firstname>
          <Initials>V</Initials>
        </PersonNames>
        <Address>Applied Audiological Research, Sivantos GmbH &#47; WS Audiology, Henri-Dunant-Str. 100, 91058 Erlangen, Germany<Affiliation>Applied Audiological Research, Sivantos GmbH &#47; WS Audiology, Erlangen, Germany</Affiliation></Address>
        <Email>vera.wolf&#64;wsa.com</Email>
        <Creatorrole corresponding="yes" presenting="no">author</Creatorrole>
      </Creator>
      <Creator>
        <PersonNames>
          <Lastname>Mueller</Lastname>
          <LastnameHeading>Mueller</LastnameHeading>
          <Firstname>Michael</Firstname>
          <Initials>M</Initials>
        </PersonNames>
        <Address>
          <Affiliation>Signal Processing, Sivantos GmbH &#47; WS Audiology, Erlangen, Germany</Affiliation>
        </Address>
        <Creatorrole corresponding="no" presenting="no">author</Creatorrole>
      </Creator>
    </CreatorList>
    <PublisherList>
      <Publisher>
        <Corporation>
          <Corporatename>German Medical Science GMS Publishing House</Corporatename>
        </Corporation>
        <Address>D&#252;sseldorf</Address>
      </Publisher>
    </PublisherList>
    <SubjectGroup>
      <SubjectheadingDDB>610</SubjectheadingDDB>
    </SubjectGroup>
    <DatePublishedList>
      <DatePublished>20240611</DatePublished>
    </DatePublishedList>
    <Language>engl</Language>
    <License license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0/">
      <AltText language="en">This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 License.</AltText>
      <AltText language="de">Dieser Artikel ist ein Open-Access-Artikel und steht unter den Lizenzbedingungen der Creative Commons Attribution 4.0 License (Namensnennung).</AltText>
    </License>
    <SourceGroup>
      <Journal>
        <ISSN>2628-9083</ISSN>
        <Volume>6</Volume>
        <JournalTitle>GMS Zeitschrift f&#252;r Audiologie - Audiological Acoustics</JournalTitle>
        <JournalTitleAbbr>GMS Z Audiol (Audiol Acoust)</JournalTitleAbbr>
      </Journal>
    </SourceGroup>
    <ArticleNo>07</ArticleNo>
  </MetaData>
  <OrigData>
    <Abstract language="de" linked="yes"><Pgraph>H&#246;rger&#228;te haben sich in den letzten Jahren enorm weiterentwickelt: Innovative Signalverarbeitung und pr&#228;zise Spracherkennung erm&#246;glichen ein deutlich verbessertes Sprachverstehen. Gleichzeitig hat sich der Anpassprozess nicht wesentlich ver&#228;ndert: Eine Empfehlung f&#252;r eine geeignete Verst&#228;rkungseinstellung wird meist mit Hilfe von Anpassformeln unter Verwendung des Audiogramms ermittelt. Dadurch werden jedoch m&#246;gliche Unterschiede in Bezug auf Lautheitswahrnehmung, L&#228;rmtoleranz sowie individuelle Klangvorlieben vernachl&#228;ssigt. Diese Nachteile lassen sich durch einen digitalen Assistenten in Form einer Smartphone-App beheben, welcher eine Endnutzer-gesteuerte Feinanpassung erm&#246;glicht. Daraus ergeben sich mehrere Vorteile gegen&#252;ber der Feineinstellung durch den H&#246;rakustiker. Erstens erm&#246;glichen digitale Assistenten mithilfe von K&#252;nstlicher Intelligenz (KI) gezielte Anpassungen an das individuelle Empfinden. Zweitens l&#228;sst sich auch der Einfluss von Erinnerungsverlusten reduzieren, da sie direkt in akustisch schwierigen Situationen eingesetzt werden k&#246;nnen. Dar&#252;ber hinaus k&#246;nnen die vorgenommenen Anpassungen direkt bewertet werden, und der H&#246;rger&#228;tetr&#228;ger kann die &#196;nderungen akzeptieren oder ablehnen. In diesem Kurzbericht diskutieren wir die Chancen und Herausforderungen eines solchen digitalen Assistenten. Wir konzentrieren uns auf die Frage, wie H&#246;rger&#228;tetr&#228;ger den digitalen Assistenten am liebsten nutzen: Direkt in der problematischen Situation oder eher im Nachhinein&#63; Zu diesem Zweck analysieren wir umfangreiche Nutzerdaten, die zeigen, dass der Assistent sowohl in problematischen Situationen als auch danach verwendet wird. Um diesen Nutzererwartungen gerecht zu werden, zeigen wir, wie beide Varianten in den digitalen Assistenten implementiert werden k&#246;nnen. Unsere Ergebnisse unterstreichen die Notwendigkeit, das App-Design in der Praxis zu validieren, um den Nutzen von digitalen Assistenzsystemen zu maximieren.</Pgraph></Abstract>
    <Abstract language="en" linked="yes"><Pgraph>In recent years, hearing aids have undergone enormous development: innovative signal processing and precise speech recognition have led to considerably improved speech understanding. At the same time, the fitting process has not changed significantly: a recommendation for suitable gain settings is mostly determined using prescriptive fitting formulae based on the wearers&#8217; audiogram. However, this approach neglects differences in loudness perception, noise tolerance, and individual sound preferences. These issues can be addressed when end-users fine-tune their hearing aids via smartphone app-based digital assistants, which offer several advantages over fine-tuning by hearing care professionals. First, digital assistants allow highly individualized adaptations provided by artificial intelligence (AI). Second, the impact of memory bias is reduced as they can be directly used in the acoustically challenging situation. Finally, the applied setting updates can be evaluated directly, and hearing aid wearers may accept or reject the updates. In this short report, we discuss opportunities and challenges of such a digital assistant. We focus on the question of how hearing aid wearers prefer to use the digital assistant: directly in the problematic situation or afterwards. To this end, we analyze large-scale user data which shows that using the assistant in the problematic situation and afterwards are both popular. To meet these user expectations, we show how both modes of operation can be implemented in the digital assistant. Our findings highlight the need for validating app design in the field to maximize the usefulness of digital assistance systems.</Pgraph></Abstract>
    <TextBlock linked="yes" name="Artificial intelligence (AI)-based hearing care support">
      <MainHeadline>Artificial intelligence (AI)-based hearing care support</MainHeadline><Pgraph>Hearing aids are usually fitted and optionally fine-tuned in a quiet office or lab environment. However, problems with audibility or handling of the hearing aids usually occur in more challenging daily-life situations and listening environments. To allow for a reasonable fine-tuning of the hearing aid settings, a wearer must remember specific problems and corresponding situations and accurately report these during a follow-up visit to the hearing care professional. This gap can be bridged by smartphone-based digital assistants, which empower the wearer to fine-tune their hearing aids directly in the situation as required, providing instant improvements which both match the acoustical context and are customized to the wearer&#8217;s preferences. Hearing care professionals can thereby save time by reducing the number of follow-up visits while maximizing the value of each visit, as they can focus on the personal aspects of hearing care.</Pgraph><Pgraph>The digital assistant presented here is integrated into a smartphone app which also serves as a remote control for hearing aids. A simple user interface (Figure 1 <ImgLink imgNo="1" imgType="figure"/>) allows easy accessibility for wearers of all ages. With the help of a chatbot, the wearer can, for example, describe a problem with the listening experience in a particular situation or a handling problem. The solution-finding process starts with a predefined list of issues and two to three follow-up questions to narrow down the exact problem. Depending on the nature of the problem, the app then either assists the wearer by providing relevant information in the form of text hints and short video clips (for handling problems) or uses a cloud-based AI system to suggest updated settings to improve the listening experience. These updates are immediately applied to the hearing aids, so that the wearer can directly evaluate them in the respective situation. The user is then asked whether the new settings should be retained or discarded. If desired and still technically possible, the suggested solution can be applied again, or an alternative solution is offered. At the next follow-up visit, a list of encountered problems as well as changes made by the assistant is visible to the hearing care professional when reading out the hearing aids via the fitting software (see Figure 2 <ImgLink imgNo="2" imgType="figure"/>). This enables a more detailed understanding of customer needs and thus creates additional value to support the fitting and fine-tuning process.</Pgraph></TextBlock>
    <TextBlock linked="yes" name="Improving hearing aid settings based on AI">
      <MainHeadline>Improving hearing aid settings based on AI</MainHeadline><Pgraph>The digital assistant draws on extensive experience from external market expertise, academic data and decades of research and development (R&#38;D) to propose solutions that deliver the greatest benefit. This includes literature on typical fine-tuning problems and solutions from hearing care professionals <TextLink reference="1"></TextLink>, <TextLink reference="2"></TextLink>, describing the most relevant issues and typical countermeasures. Proprietary solutions used with traditional fitting software, such as Basic Tuning and Fitting Assistant, have also been reviewed and considered. In addition, general knowledge was gathered from experienced hearing care professionals in customer service and R&#38;D. Based on these sources, specific fine-tuning measures in terms of amplification, compression, noise reduction and automatic directional microphones as well as appropriate step sizes were determined, so that all solutions result in perceptible improvements, stay within reasonable limits, and remain in line with the hearing care professional&#8217;s intentions. End users can only change the current hearing aid settings within predefined limits to prevent the gain from becoming too low or too high and to ensure that the primary responsibility remains with the hearing care professional.</Pgraph><Pgraph>In an early stage of prototyping, it was found that four problem classifications were sufficient to address &#126;80&#37; of the most important problems of hearing aid wearers without overwhelming the user. Structured differentiation strategies are subsequently used to trigger the most appropriate mitigation path. To understand the full context, the assistant not only considers the problem description (sound source and attribute) and the current settings, but also incorporates objective parameters of the current situation (e.g., the output of a situation classifier and level information, read out from the hearing aids) as well as relevant individual information about the wearer (<TextGroup><PlainText>Figure 3 </PlainText></TextGroup><ImgLink imgNo="3" imgType="figure"/>). This information is sent to a cloud-based AI system where a deep neural network provides a fine-tuning suggestion tailored to the wearer and the acoustic context. To illustrate an example problem and suitable solutions, consider the possible problem &#8220;sound in general is perceived as (too) sharp&#8221;. In this case, the assistant could, e.g., suggest an increase in low-frequency gain or a decrease of high-frequency amplification, either overall or only for certain input levels. Each of these possible solutions is suitable for decreasing the perceived sharpness. However, some solutions are only applicable in certain acoustic environments, e.g., increasing or decreasing gain for medium or loud input levels makes no sense in quiet environments.</Pgraph></TextBlock>
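The environment-dependent filtering of candidate solutions described above can be illustrated with a minimal sketch. The candidate list, class names, and function are hypothetical assumptions for the &#8220;too sharp&#8221; example, not the actual cloud backend:

```python
# Hypothetical sketch: filtering candidate fixes for "sound is too sharp"
# by the detected acoustic class. Candidates, class names, and rules are
# illustrative assumptions, not the actual AI system.

CANDIDATE_SOLUTIONS = [
    # (description, acoustic classes in which the fix is applicable)
    ("increase low-frequency gain",
     {"Quiet", "Speech in Quiet", "Speech in Noise"}),
    ("decrease high-frequency gain overall",
     {"Quiet", "Speech in Quiet", "Speech in Noise"}),
    # level-dependent changes make no sense in quiet environments
    ("decrease high-frequency gain for loud inputs",
     {"Speech in Noise"}),
]

def applicable_solutions(acoustic_class: str) -> list[str]:
    """Return the candidate fixes that make sense in the detected environment."""
    return [desc for desc, classes in CANDIDATE_SOLUTIONS
            if acoustic_class in classes]
```

In a quiet environment only the level-independent changes survive the filter, mirroring the constraint stated in the text.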
    <TextBlock linked="yes" name="How is the assistant used&#63; Analysis of user interactions">
      <MainHeadline>How is the assistant used&#63; Analysis of user interactions</MainHeadline><Pgraph>Anonymous data collected from the digital assistant&#8217;s user interactions allow monitoring hearing aid wearers&#8217; problems and how successful the assistant is in solving them. We have previously presented usage data which indicates that the assistant is mostly used to fine-tune new hearing aids and allows hearing aid wearers to reach their individually preferred settings <TextLink reference="3"></TextLink>. In addition, combining the problem statements with the feedback on the proposed solutions enables constant improvements to the solutions offered by the assistant. In the following, we discuss another important factor accessible via user data: whether hearing aid wearers prefer to use the digital assistant directly in the problematic situation or not.</Pgraph><Pgraph>The idea underlying the design of the digital assistant is to empower hearing aid wearers by giving them the means to improve their hearing experience directly within problematic situations. The initial design of the smartphone app follows this concept: the solutions provided by the assistant are based on information read out from the hearing aids which describes the acoustic environment, and the solutions are tuned to the specific acoustic conditions. For example, if the assistant is used in a loud environment, then it may suggest updates to the compression settings which only affect the sound processing in loud situations. Thus, if the assistant is used while acoustic conditions differ from the target situation, the solutions may not be effective in solving the problem.</Pgraph></TextBlock>
    <TextBlock linked="yes" name="Detecting and improving usage after the problematic situation">
      <MainHeadline>Detecting and improving usage after the problematic situation</MainHeadline><Pgraph>The acoustic scene classifier on the hearing aids provides an insight into whether the stated problem matches the current acoustic context. The classifier categorizes the current situation into one of six categories, including &#8220;Quiet&#8221;, &#8220;Speech in Quiet&#8221;, and &#8220;Speech in Noise&#8221; (<TextGroup><PlainText>Figure 4 </PlainText></TextGroup><ImgLink imgNo="4" imgType="figure"/>). The usage data shows that for more than 40&#37; of the interactions, the environment is classified as &#8220;Quiet&#8221;.</Pgraph><Pgraph>We can analyze the relationship between the problem and the environment by looking at the correlation of the detected acoustic class with the problematic sound sources that users can select (Figure 5 <ImgLink imgNo="5" imgType="figure"/>). Again, most interactions take place in quiet environments, regardless of the specific problem. This is surprising as some problems (e.g., &#8220;Loud Noises&#8221;) cannot realistically occur in quiet scenes.</Pgraph><Pgraph>We can estimate the fraction of interactions in which the assistant is not used directly in the problematic situation by defining pairings of problem statements and acoustic class which are in clear contradiction and are thus unlikely to occur (Figure 5 <ImgLink imgNo="5" imgType="figure"/>, left, hatched areas). For example, problem statements concerning other voices should only occur when the acoustic scene classifier also detects speech activity. A conservative estimate of how often such a contradiction occurs indicates that the assistant is used after the problematic situation in more than a quarter of the cases (Figure 5 <ImgLink imgNo="5" imgType="figure"/>, right). In these cases, the proposed solutions would not help with the actual problem of the hearing aid wearer.</Pgraph><Pgraph>This data shows that the digital assistant is not always used as intended, highlighting a difference between its design concept and the way assistant users intuitively understand (or prefer to use) it. The reasons for these preferences remain to be explored but could include factors like politeness (using a smartphone in some social situations, especially conversations, might not be possible) or lack of time while the problematic situation is ongoing. Using the digital assistant afterwards means hearing aid wearers have enough time to work with the assistant without any distraction and avoid giving the impression of being impolite. In any case, the data suggests the assistant should be modified accordingly.</Pgraph></TextBlock>
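The conservative mismatch estimate described above can be sketched in a few lines. The contradiction table, category labels, and function are hypothetical illustrations, not the actual analysis code:

```python
# Hypothetical sketch of the conservative mismatch estimate: a pairing of a
# stated problem source with a detected acoustic class that clearly
# contradicts it marks an interaction in which the assistant was likely used
# after, not during, the problematic situation. Table entries are examples.

CONTRADICTIONS = {
    # problem source -> acoustic classes it cannot plausibly co-occur with
    "Loud Noises":  {"Quiet", "Speech in Quiet"},
    "Other Voices": {"Quiet"},  # voice complaints imply detected speech
}

def mismatch_fraction(interactions):
    """Fraction of (problem_source, detected_class) pairs in contradiction."""
    flagged = sum(
        1 for source, detected in interactions
        if detected in CONTRADICTIONS.get(source, set())
    )
    return flagged / len(interactions)
```

Because only unambiguous contradictions are counted, the resulting fraction is a lower bound on usage after the problematic situation.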
    <TextBlock linked="yes" name="Aligning the assistant&#8217;s behavior with users&#8217; expectations">
      <MainHeadline>Aligning the assistant&#8217;s behavior with users&#8217; expectations</MainHeadline><Pgraph>To allow usage of the assistant both according to the original design and according to users&#8217; expectations, the assistant&#8217;s dialog was amended and now includes a new question: are users experiencing the problem at the same time as they are using the assistant (Figure 6 <ImgLink imgNo="6" imgType="figure"/>)&#63; Depending on the answer, different solutions will be proposed by the digital assistant:</Pgraph><Pgraph><UnorderedList><ListItem level="1">If the problematic situation is currently ongoing: the cloud-based machine learning backend will take the current acoustic environment into account and will suggest solutions which are specifically tuned to the current situation.</ListItem><ListItem level="1">Otherwise, the suggested solutions will be somewhat broader in scope and will affect all possible hearing situations. However, they cannot be tuned specifically to the problematic situation.</ListItem></UnorderedList></Pgraph><Pgraph>This change aligns the operating principle of the digital assistant with its users&#8217; expectations and allows them to perform settings updates in their individually preferred manner.</Pgraph></TextBlock>
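The two modes of operation introduced by the amended dialog can be sketched as a simple branch; the function and field names are illustrative assumptions, not the app's actual interface:

```python
# Hypothetical sketch of the amended dialog flow: if the wearer reports that
# the problem is happening right now, situation-specific solutions that use
# the classifier output are requested; otherwise only broadly applicable,
# context-free updates are offered. All names are illustrative.

def choose_solution_scope(problem_is_ongoing: bool) -> dict:
    if problem_is_ongoing:
        # tuned to the current acoustic environment read from the hearing aids
        return {"scope": "situation-specific", "use_classifier_output": True}
    # broader updates that affect all possible hearing situations
    return {"scope": "all-situations", "use_classifier_output": False}
```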
    <TextBlock linked="yes" name="Discussion">
      <MainHeadline>Discussion</MainHeadline><Pgraph>We have presented a digital assistant <TextLink reference="4"></TextLink> which allows hearing aid wearers to fine-tune their devices according to their individual preferences. While the design of the assistant originally supported only modifications directly in the problematic situation, usage data collected from the smartphone app indicated that in many cases users prefer to use the assistant afterwards. We have shown how the digital assistant has been updated to give users the option to choose between both variants. This highlights both the necessity of validating tools like the presented digital assistant in the field and the opportunities which arise from the collection of usage data. We expect that this update increases the digital assistant&#8217;s usefulness both for hearing aid wearers (by allowing them to fine-tune their devices whenever they wish) and for hearing care professionals (by further improving customer satisfaction).</Pgraph></TextBlock>
    <TextBlock linked="yes" name="Notes">
      <MainHeadline>Notes</MainHeadline><SubHeadline>Conference presentation</SubHeadline><Pgraph>This contribution was presented at the 26<Superscript>th</Superscript> Annual Conference of the German Society of Audiology and published as an abstract <TextLink reference="5"></TextLink>. </Pgraph><SubHeadline>Competing interests</SubHeadline><Pgraph>The authors declare that they have no competing interests.</Pgraph></TextBlock>
    <References linked="yes">
      <Reference refNo="1">
        <RefAuthor>Jenstad LM</RefAuthor>
        <RefAuthor>Van Tasell DJ</RefAuthor>
        <RefAuthor>Ewert C</RefAuthor>
        <RefTitle>Hearing aid troubleshooting based on patients&#8217; descriptions</RefTitle>
        <RefYear>2003</RefYear>
        <RefJournal>J Am Acad Audiol</RefJournal>
        <RefPage>347-60</RefPage>
        <RefTotal>Jenstad LM, Van Tasell DJ, Ewert C. Hearing aid troubleshooting based on patients&#8217; descriptions. J Am Acad Audiol. 2003 Sep;14(7):347-60.</RefTotal>
      </Reference>
      <Reference refNo="2">
        <RefAuthor>Thielemans T</RefAuthor>
        <RefAuthor>Pans D</RefAuthor>
        <RefAuthor>Chenault M</RefAuthor>
        <RefAuthor>Anteunis L</RefAuthor>
        <RefTitle>Hearing aid fine-tuning based on Dutch descriptions</RefTitle>
        <RefYear>2017</RefYear>
        <RefJournal>Int J Audiol</RefJournal>
        <RefPage>507-15</RefPage>
        <RefTotal>Thielemans T, Pans D, Chenault M, Anteunis L. Hearing aid fine-tuning based on Dutch descriptions. Int J Audiol. 2017 Jul;56(7):507-15. DOI: 10.1080&#47;14992027.2017.1288302</RefTotal>
        <RefLink>https:&#47;&#47;doi.org&#47;10.1080&#47;14992027.2017.1288302</RefLink>
      </Reference>
      <Reference refNo="4">
        <RefAuthor>H&#248;ydal EH</RefAuthor>
        <RefAuthor>Fischer RL</RefAuthor>
        <RefAuthor>Wolf V</RefAuthor>
        <RefAuthor>Branda E</RefAuthor>
        <RefAuthor>Aubreville M</RefAuthor>
        <RefTitle>Empowering the Wearer: AI-based Signia Assistant Allows Individualized Hearing Care</RefTitle>
        <RefYear>2020</RefYear>
        <RefBookTitle>The Hearing Review</RefBookTitle>
        <RefPage></RefPage>
        <RefTotal>H&#248;ydal EH, Fischer RL, Wolf V, Branda E, Aubreville M. Empowering the Wearer: AI-based Signia Assistant Allows Individualized Hearing Care. In: The Hearing Review. 2020 Jul 15. Available from: https:&#47;&#47;hearingreview.com&#47;hearing-loss&#47;patient-care&#47;hearing-fittings&#47;empowering-the-wearer-ai-based-signia-assistant-allows-individualized-hearing-care</RefTotal>
        <RefLink>https:&#47;&#47;hearingreview.com&#47;hearing-loss&#47;patient-care&#47;hearing-fittings&#47;empowering-the-wearer-ai-based-signia-assistant-allows-individualized-hearing-care</RefLink>
      </Reference>
      <Reference refNo="3">
        <RefAuthor>Wolf V</RefAuthor>
        <RefAuthor>Mueller M</RefAuthor>
        <RefTitle>End-user controlled Fine-Tuning of Hearing Instruments &#8211; Opportunities and Challenges for an interactive Digital Assistant</RefTitle>
        <RefYear>2023</RefYear>
        <RefBookTitle>DAGA 2023 &#8211; 49. Jahrestagung f&#252;r Akustik; 2023 Mar 6-9; Hamburg, Germany</RefBookTitle>
        <RefPage>170-3</RefPage>
        <RefTotal>Wolf V, Mueller M. End-user controlled Fine-Tuning of Hearing Instruments &#8211; Opportunities and Challenges for an interactive Digital Assistant. In: DAGA 2023 &#8211; 49. Jahrestagung f&#252;r Akustik; 2023 Mar 6-9; Hamburg, Germany. p. 170-3. Available from: https:&#47;&#47;pub.dega-akustik.de&#47;DAGA&#95;2023&#47;data&#47;articles&#47;000182.pdf</RefTotal>
        <RefLink>https:&#47;&#47;pub.dega-akustik.de&#47;DAGA&#95;2023&#47;data&#47;articles&#47;000182.pdf</RefLink>
      </Reference>
      <Reference refNo="5">
        <RefAuthor>Wolf V</RefAuthor>
        <RefTitle>Digitale Assistenten f&#252;r H&#246;rger&#228;tetr&#228;ger auf Basis von Cloud-gest&#252;tzter KI</RefTitle>
        <RefYear>2024</RefYear>
        <RefBookTitle>26. Jahrestagung der Deutschen Gesellschaft f&#252;r Audiologie. Aalen, 06.-08.03.2024</RefBookTitle>
        <RefPage>Doc019</RefPage>
        <RefTotal>Wolf V. Digitale Assistenten f&#252;r H&#246;rger&#228;tetr&#228;ger auf Basis von Cloud-gest&#252;tzter KI. In: Deutsche Gesellschaft f&#252;r Audiologie e.V., editor. 26. Jahrestagung der Deutschen Gesellschaft f&#252;r Audiologie. Aalen, 06.-08.03.2024. D&#252;sseldorf: German Medical Science GMS Publishing House; 2024. Doc019. DOI: 10.3205&#47;24dga01</RefTotal>
        <RefLink>https:&#47;&#47;doi.org&#47;10.3205&#47;24dga01</RefLink>
      </Reference>
    </References>
    <Media>
      <Tables>
        <NoOfTables>0</NoOfTables>
      </Tables>
      <Figures>
        <Figure format="png" height="502" width="1208">
          <MediaNo>1</MediaNo>
          <MediaID>1</MediaID>
          <Caption><Pgraph><Mark1>Figure 1: Signia app with digital assistant</Mark1><LineBreak></LineBreak>Left: Icon (highlighted in purple) to access assistant from app start screen<LineBreak></LineBreak>Center left: Digital assistant start screen<LineBreak></LineBreak>Center right: Chatbot to address problem descriptions and application of changes to the hearing aids<LineBreak></LineBreak>Right: Example for handling support, applying possible solutions or showing additional information</Pgraph></Caption>
        </Figure>
        <Figure format="png" height="1476" width="955">
          <MediaNo>2</MediaNo>
          <MediaID>2</MediaID>
          <Caption><Pgraph><Mark1>Figure 2: Digital assistant information screens for hearing care professionals in the hearing aid fitting software</Mark1><LineBreak></LineBreak>Top: Changes compared to previous fitting session<LineBreak></LineBreak>Bottom: History of stated problems and changes performed with the assistant</Pgraph></Caption>
        </Figure>
        <Figure format="png" height="316" width="772">
          <MediaNo>3</MediaNo>
          <MediaID>3</MediaID>
          <Caption><Pgraph><Mark1>Figure 3: Schematic overview of cloud-based AI system</Mark1><LineBreak></LineBreak>Information is collected from hearing instruments (HIs) and smartphone app (left) and processed in the cloud (DNN and preference module) to suggest solutions which update the hearing aid settings.</Pgraph></Caption>
        </Figure>
        <Figure format="png" height="415" width="495">
          <MediaNo>4</MediaNo>
          <MediaID>4</MediaID>
          <Caption><Pgraph><Mark1>Figure 4: Output of the acoustical scene classifier aggregated over all user interactions</Mark1><LineBreak></LineBreak>The figure shows the percentage of all complaints for which the hearing aid detected each class.</Pgraph></Caption>
        </Figure>
        <Figure format="png" height="570" width="1132">
          <MediaNo>5</MediaNo>
          <MediaID>5</MediaID>
          <Caption><Pgraph><Mark1>Figure 5: Mismatch of stated problems with output of acoustical scene classifier</Mark1><LineBreak></LineBreak>Left: Percentages of each detected class for all sound sources (as selected by assistant users in the app). Hatched bars indicate a mismatch between the problem statement and the environment, suggesting the assistant is not used directly in the problematic situation.<LineBreak></LineBreak>Right: Overall percentage of interactions with and without a problem&#47;environment mismatch</Pgraph></Caption>
        </Figure>
        <Figure format="png" height="287" width="765">
          <MediaNo>6</MediaNo>
          <MediaID>6</MediaID>
          <Caption><Pgraph><Mark1>Figure 6: Asking the hearing aid wearer whether the problem is being experienced at this moment</Mark1><LineBreak></LineBreak>Left: New question in the assistant dialog, allowing the user to state whether the issue is occurring at the same time the assistant is used<LineBreak></LineBreak>Right: If the assistant is used after the problematic situation has ended, it displays a notification informing the hearing aid wearer that using the assistant immediately can lead to better results.</Pgraph></Caption>
        </Figure>
        <NoOfPictures>6</NoOfPictures>
      </Figures>
      <InlineFigures>
        <NoOfPictures>0</NoOfPictures>
      </InlineFigures>
      <Attachments>
        <NoOfAttachments>0</NoOfAttachments>
      </Attachments>
    </Media>
  </OrigData>
</GmsArticle>