<?xml version="1.0" encoding="Windows-31J"?>
<ArticleSet xmlns="http://www.openarchives.org/OAI/2.0/">
  <Article>
    <Journal>
      <PublisherName>Elsevier</PublisherName>
      <JournalTitle>Acta Medica Okayama</JournalTitle>
      <Issn>0167-6393</Issn>
      <Volume>126</Volume>
      <Issue/>
      <PubDate PubStatus="ppublish">
        <Year>2021</Year>
        <Month/>
      </PubDate>
    </Journal>
    <ArticleTitle>Model architectures to extrapolate emotional expressions in DNN-based text-to-speech</ArticleTitle>
    <FirstPage LZero="delete">35</FirstPage>
    <LastPage>43</LastPage>
    <Language>EN</Language>
    <AuthorList>
      <Author>
        <FirstName EmptyYN="N">Katsuki</FirstName>
        <LastName>Inoue</LastName>
        <Affiliation>Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University</Affiliation>
      </Author>
      <Author>
        <FirstName EmptyYN="N">Sunao</FirstName>
        <LastName>Hara</LastName>
        <Affiliation>Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University</Affiliation>
      </Author>
      <Author>
        <FirstName EmptyYN="N">Masanobu</FirstName>
        <LastName>Abe</LastName>
        <Affiliation>Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University</Affiliation>
      </Author>
      <Author>
        <FirstName EmptyYN="N">Nobukatsu</FirstName>
        <LastName>Hojo</LastName>
        <Affiliation>NTT Corporation</Affiliation>
      </Author>
      <Author>
        <FirstName EmptyYN="N">Yusuke</FirstName>
        <LastName>Ijima</LastName>
        <Affiliation>NTT Corporation</Affiliation>
      </Author>
    </AuthorList>
    <PublicationType/>
    <ArticleIdList>
      <ArticleId IdType="doi"/>
    </ArticleIdList>
    <Abstract>This paper proposes architectures that facilitate the extrapolation of emotional expressions in deep neural network (DNN)-based text-to-speech (TTS). In this study, to “extrapolate emotional expressions” means to borrow emotional expressions from other speakers, so that collecting emotional speech uttered by the target speakers is unnecessary. Although DNNs have the potential to construct TTS with emotional expressions, and some DNN-based TTS systems have demonstrated satisfactory performance in expressing the diversity of human speech, collecting emotional speech uttered by target speakers is necessary yet troublesome. To solve this issue, we propose architectures that train the speaker feature and the emotional feature separately and synthesize speech with any combination of speaker and emotion. The architectures are the parallel model (PM), the serial model (SM), the auxiliary input model (AIM), and hybrid models (PM&amp;AIM and SM&amp;AIM). These models are trained on emotional speech uttered by a few speakers and neutral speech uttered by many speakers. Objective evaluations demonstrate that the performances in the open-emotion test, compared with those in the closed-emotion test, provide insufficient information because each speaker has their own manner of expressing emotion. However, subjective evaluation results indicate that the proposed models can convey emotional information to some extent. Notably, the PM can correctly convey sad and joyful emotions at a rate of &gt;60%.</Abstract>
    <CoiStatement>No potential conflict of interest relevant to this article was reported.</CoiStatement>
    <ObjectList>
      <Object Type="keyword">
        <Param Name="value">Emotional speech synthesis</Param>
      </Object>
      <Object Type="keyword">
        <Param Name="value">Extrapolation</Param>
      </Object>
      <Object Type="keyword">
        <Param Name="value">DNN-based TTS</Param>
      </Object>
      <Object Type="keyword">
        <Param Name="value">Text-to-speech</Param>
      </Object>
      <Object Type="keyword">
        <Param Name="value">Acoustic model</Param>
      </Object>
      <Object Type="keyword">
        <Param Name="value">Phoneme duration model</Param>
      </Object>
    </ObjectList>
    <ReferenceList/>
  </Article>
  <Article>
    <Journal>
      <PublisherName>岡山医学会</PublisherName>
      <JournalTitle>Acta Medica Okayama</JournalTitle>
      <Issn>0030-1558</Issn>
      <Volume>132</Volume>
      <Issue>2</Issue>
      <PubDate PubStatus="ppublish">
        <Year>2020</Year>
        <Month/>
      </PubDate>
    </Journal>
    <ArticleTitle>On the Establishment of the Cyber-Physical Information Application Research Core (Cypher)</ArticleTitle>
    <FirstPage LZero="delete">92</FirstPage>
    <LastPage>94</LastPage>
    <Language>EN</Language>
    <AuthorList>
      <Author>
        <FirstName EmptyYN="N">Masanobu</FirstName>
        <LastName>Abe</LastName>
        <Affiliation>Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University</Affiliation>
      </Author>
    </AuthorList>
    <PublicationType/>
    <ArticleIdList>
      <ArticleId IdType="doi"/>
    </ArticleIdList>
    <Abstract/>
    <CoiStatement>No potential conflict of interest relevant to this article was reported.</CoiStatement>
    <ObjectList>
      <Object Type="keyword">
        <Param Name="value">AI</Param>
      </Object>
      <Object Type="keyword">
        <Param Name="value">Bigdata</Param>
      </Object>
      <Object Type="keyword">
        <Param Name="value">IoT</Param>
      </Object>
    </ObjectList>
    <ReferenceList/>
  </Article>
  <Article>
    <Journal>
      <PublisherName>IEEE</PublisherName>
      <JournalTitle>Acta Medica Okayama</JournalTitle>
      <Issn>2640-009X</Issn>
      <Volume>2019</Volume>
      <Issue/>
      <PubDate PubStatus="ppublish">
        <Year>2019</Year>
        <Month/>
      </PubDate>
    </Journal>
    <ArticleTitle>Speech-like Emotional Sound Generator by WaveNet</ArticleTitle>
    <FirstPage LZero="delete">143</FirstPage>
    <LastPage>147</LastPage>
    <Language>EN</Language>
    <AuthorList>
      <Author>
        <FirstName EmptyYN="N">Kento</FirstName>
        <LastName>Matsumoto</LastName>
        <Affiliation>Okayama University</Affiliation>
      </Author>
      <Author>
        <FirstName EmptyYN="N">Sunao</FirstName>
        <LastName>Hara</LastName>
        <Affiliation>Okayama University</Affiliation>
      </Author>
      <Author>
        <FirstName EmptyYN="N">Masanobu</FirstName>
        <LastName>Abe</LastName>
        <Affiliation>Okayama University</Affiliation>
      </Author>
    </AuthorList>
    <PublicationType/>
    <ArticleIdList>
      <ArticleId IdType="doi"/>
    </ArticleIdList>
    <Abstract>In this paper, we propose a new algorithm to generate Speech-like Emotional Sound (SES). Emotional information plays an important role in human communication, and speech is one of the most useful media for expressing emotions. Although speech generally conveys emotional information as well as linguistic information, we have undertaken the challenge of generating sounds that convey emotional information without linguistic information; such sounds can make conversations in human-machine interactions more natural in some situations by providing non-verbal emotional vocalizations. We call the generated sounds “speech-like” because they do not contain any linguistic information. For this purpose, we propose to employ WaveNet as a sound generator conditioned only on emotional IDs. This idea is quite different from the WaveNet Vocoder, which synthesizes speech using spectrum information as auxiliary features. The biggest advantage of the idea is that it reduces the amount of emotional speech data needed for training. The proposed algorithm consists of two steps. In the first step, WaveNet is trained to obtain phonetic features using a large speech database, and in the second step, WaveNet is re-trained using a small amount of emotional speech. Subjective listening evaluations showed that the SES could convey emotional information and was judged to sound like a human voice.</Abstract>
    <CoiStatement>No potential conflict of interest relevant to this article was reported.</CoiStatement>
    <ObjectList/>
    <ReferenceList/>
  </Article>
  <Article>
    <Journal>
      <PublisherName/>
      <JournalTitle>Acta Medica Okayama</JournalTitle>
      <Issn/>
      <Volume/>
      <Issue/>
      <PubDate PubStatus="ppublish">
        <Year>2016</Year>
        <Month/>
      </PubDate>
    </Journal>
    <ArticleTitle>Sound collection systems using a crowdsourcing approach to construct sound map based on subjective evaluation</ArticleTitle>
    <FirstPage LZero="delete"/>
    <LastPage/>
    <Language>EN</Language>
    <AuthorList>
      <Author>
        <FirstName EmptyYN="N">Sunao</FirstName>
        <LastName>Hara</LastName>
        <Affiliation>Graduate School of Natural Science and Technology, Okayama University</Affiliation>
      </Author>
      <Author>
        <FirstName EmptyYN="N">Shota</FirstName>
        <LastName>Kobayashi</LastName>
        <Affiliation>Graduate School of Natural Science and Technology, Okayama University</Affiliation>
      </Author>
      <Author>
        <FirstName EmptyYN="N">Masanobu</FirstName>
        <LastName>Abe</LastName>
        <Affiliation>Graduate School of Natural Science and Technology, Okayama University</Affiliation>
      </Author>
    </AuthorList>
    <PublicationType/>
    <ArticleIdList>
      <ArticleId IdType="doi"/>
    </ArticleIdList>
    <Abstract>This paper presents a sound collection system that uses crowdsourcing to gather information for visualizing area characteristics. First, we developed a sound collection system to simultaneously collect physical sounds, their statistics, and subjective evaluations. We then conducted a sound collection experiment with 14 participants using the developed system. We collected 693,582 samples of equivalent A-weighted loudness levels and their locations, and 5,935 samples of sounds and their locations. The data also include subjective evaluations by the participants. In addition, we analyzed the changes in the sound properties of some areas before and after the opening of a large-scale shopping mall in a city. Next, we implemented visualizations on the server system to attract users’ interest. Finally, we published the system, which can receive sounds from any Android smartphone user. The sound data were continuously collected and achieved a specified result.</Abstract>
    <CoiStatement>No potential conflict of interest relevant to this article was reported.</CoiStatement>
    <ObjectList>
      <Object Type="keyword">
        <Param Name="value">Environmental sound</Param>
      </Object>
      <Object Type="keyword">
        <Param Name="value">Crowdsourcing</Param>
      </Object>
      <Object Type="keyword">
        <Param Name="value">Loudness</Param>
      </Object>
      <Object Type="keyword">
        <Param Name="value">Crowdedness</Param>
      </Object>
      <Object Type="keyword">
        <Param Name="value">Smart City</Param>
      </Object>
    </ObjectList>
    <ReferenceList/>
  </Article>
  <Article>
    <Journal>
      <PublisherName>Okayama University Medical School</PublisherName>
      <JournalTitle>Acta Medica Okayama</JournalTitle>
      <Issn>0386-300X</Issn>
      <Volume>70</Volume>
      <Issue>3</Issue>
      <PubDate PubStatus="ppublish">
        <Year>2016</Year>
        <Month/>
      </PubDate>
    </Journal>
    <ArticleTitle>Structure of a New Palatal Plate and the Artificial Tongue for Articulation Disorder in a Patient with Subtotal Glossectomy</ArticleTitle>
    <FirstPage LZero="delete">205</FirstPage>
    <LastPage>211</LastPage>
    <Language>EN</Language>
    <AuthorList>
      <Author>
        <FirstName EmptyYN="N">Ken-ichi</FirstName>
        <LastName>Kozaki</LastName>
        <Affiliation>Department of Dental Pharmacology, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences</Affiliation>
      </Author>
      <Author>
        <FirstName EmptyYN="N">Shigehisa</FirstName>
        <LastName>Kawakami</LastName>
        <Affiliation>Department of Occlusal and Oral Functional Rehabilitation, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences</Affiliation>
      </Author>
      <Author>
        <FirstName EmptyYN="N">Takayuki</FirstName>
        <LastName>Konishi</LastName>
        <Affiliation>Division of Physical Medicine and Rehabilitation, Okayama University Hospital</Affiliation>
      </Author>
      <Author>
        <FirstName EmptyYN="N">Keiji</FirstName>
        <LastName>Ohta</LastName>
        <Affiliation>Dental Laboratory Division, Okayama University Hospital</Affiliation>
      </Author>
      <Author>
        <FirstName EmptyYN="N">Jitsuro</FirstName>
        <LastName>Yano</LastName>
        <Affiliation>Department of Occlusal and Oral Functional Rehabilitation, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences</Affiliation>
      </Author>
      <Author>
        <FirstName EmptyYN="N">Tomoo</FirstName>
        <LastName>Onoda</LastName>
        <Affiliation>Department of Otolaryngology-Head and Neck Surgery, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences</Affiliation>
      </Author>
      <Author>
        <FirstName EmptyYN="N">Hiroshi</FirstName>
        <LastName>Matsumoto</LastName>
        <Affiliation>Department of Plastic and Reconstructive Surgery, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences</Affiliation>
      </Author>
      <Author>
        <FirstName EmptyYN="N">Nobuyoshi</FirstName>
        <LastName>Mizukawa</LastName>
        <Affiliation>Department of Oral and Maxillofacial Reconstructive Surgery, Okayama University Hospital</Affiliation>
      </Author>
      <Author>
        <FirstName EmptyYN="N">Yoshihiro</FirstName>
        <LastName>Kimata</LastName>
        <Affiliation>Department of Plastic and Reconstructive Surgery, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences</Affiliation>
      </Author>
      <Author>
        <FirstName EmptyYN="N">Kazunori</FirstName>
        <LastName>Nishizaki</LastName>
        <Affiliation>Department of Otolaryngology-Head and Neck Surgery, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences</Affiliation>
      </Author>
      <Author>
        <FirstName EmptyYN="N">Seiji</FirstName>
        <LastName>Iida</LastName>
        <Affiliation>Department of Oral and Maxillofacial Reconstructive Surgery, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences</Affiliation>
      </Author>
      <Author>
        <FirstName EmptyYN="N">Akio</FirstName>
        <LastName>Gofuku</LastName>
        <Affiliation>Graduate School of Natural Science and Technology, Okayama University</Affiliation>
      </Author>
      <Author>
        <FirstName EmptyYN="N">Masanobu</FirstName>
        <LastName>Abe</LastName>
        <Affiliation>Department of Computer Science, Okayama University</Affiliation>
      </Author>
      <Author>
        <FirstName EmptyYN="N">Shogo</FirstName>
        <LastName>Minagi</LastName>
        <Affiliation>Department of Occlusal and Oral Functional Rehabilitation, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences</Affiliation>
      </Author>
      <Author>
        <FirstName EmptyYN="N"/>
        <LastName>Okayama Dream Speech Project</LastName>
        <Affiliation/>
      </Author>
    </AuthorList>
    <PublicationType>Case Report</PublicationType>
    <ArticleIdList>
      <ArticleId IdType="doi">10.18926/AMO/54420</ArticleId>
    </ArticleIdList>
    <Abstract>A palatal augmentation prosthesis (PAP) is used to facilitate improvement in the speech and swallowing functions of patients with tongue resection or tongue movement disorders. However, a PAP's effect is limited in cases where the articulation disorder is severe due to wide glossectomy and/or segmental mandibulectomy. In this paper, we describe the speech outcomes of a patient with an articulation disorder following glossectomy and segmental mandibulectomy. We used a palatal plate (PP) based on a PAP, along with an artificial tongue (KAT). Speech improvement was evaluated by a standardized speech intelligibility test consisting of 100 syllables. The speech intelligibility score was significantly higher when the patient wore both the PP and KAT than when he wore neither (p=0.013). The conversational intelligibility score was also significantly higher with the PP and KAT than without them (p=0.024). These results suggest that speech function can be improved by using both a PP and a KAT in patients with hard-tissue defects following segmental mandibulectomy. The design of the PP and that of the KAT will allow these prostheses to address a wide range of tissue defects.</Abstract>
    <CoiStatement>No potential conflict of interest relevant to this article was reported.</CoiStatement>
    <ObjectList>
      <Object Type="keyword">
        <Param Name="value">palatal augmentation prosthesis</Param>
      </Object>
      <Object Type="keyword">
        <Param Name="value">artificial tongue</Param>
      </Object>
      <Object Type="keyword">
        <Param Name="value">articulation disorder</Param>
      </Object>
      <Object Type="keyword">
        <Param Name="value">glossectomy</Param>
      </Object>
      <Object Type="keyword">
        <Param Name="value">mandibulectomy</Param>
      </Object>
    </ObjectList>
    <ReferenceList/>
  </Article>
  <Article>
    <Journal>
      <PublisherName/>
      <JournalTitle>Acta Medica Okayama</JournalTitle>
      <Issn/>
      <Volume/>
      <Issue/>
      <PubDate PubStatus="ppublish">
        <Year>2015</Year>
        <Month/>
      </PubDate>
    </Journal>
    <ArticleTitle>A Spoken Dialog System with Redundant Response to Prevent User Misunderstanding</ArticleTitle>
    <FirstPage LZero="delete">223</FirstPage>
    <LastPage>226</LastPage>
    <Language>EN</Language>
    <AuthorList>
      <Author>
        <FirstName EmptyYN="N">Masaki</FirstName>
        <LastName>Yamaoka</LastName>
        <Affiliation/>
      </Author>
      <Author>
        <FirstName EmptyYN="N">Sunao</FirstName>
        <LastName>Hara</LastName>
        <Affiliation/>
      </Author>
      <Author>
        <FirstName EmptyYN="N">Masanobu</FirstName>
        <LastName>Abe</LastName>
        <Affiliation/>
      </Author>
    </AuthorList>
    <PublicationType/>
    <ArticleIdList>
      <ArticleId IdType="doi"/>
    </ArticleIdList>
    <Abstract>We propose a spoken dialog strategy for car navigation systems to facilitate safe driving. To drive safely, drivers need to concentrate on their driving; however, their concentration may be disrupted by disagreements with their spoken dialog system. Therefore, we need to address misunderstandings on the user’s side as well as on the spoken dialog system’s side. For this purpose, we introduced the driver’s workload level into spoken dialog management in order to prevent user misunderstandings. A key strategy of the dialog management is to make the system’s speech redundant when the driver’s workload is too high, assuming that the user will probably misunderstand the system utterance under such a condition. An experiment was conducted to compare the performance of the proposed method with that of a conventional method using a user simulator. The simulator was developed under the assumption of two types of drivers: an experienced driver model and a novice driver model. Experimental results showed that the proposed strategy achieved better performance than the conventional one in task completion time, task completion rate, and the user’s positive speech rate. In particular, these performance differences were greater for novice users than for experienced users.</Abstract>
    <CoiStatement>No potential conflict of interest relevant to this article was reported.</CoiStatement>
    <ObjectList/>
    <ReferenceList/>
  </Article>
  <Article>
    <Journal>
      <PublisherName>IEEE</PublisherName>
      <JournalTitle>Acta Medica Okayama</JournalTitle>
      <Issn/>
      <Volume/>
      <Issue/>
      <PubDate PubStatus="ppublish">
        <Year>2015</Year>
        <Month/>
      </PubDate>
    </Journal>
    <ArticleTitle>Sound collection and visualization system enabled participatory and opportunistic sensing approaches</ArticleTitle>
    <FirstPage LZero="delete">390</FirstPage>
    <LastPage>395</LastPage>
    <Language>EN</Language>
    <AuthorList>
      <Author>
        <FirstName EmptyYN="N">Sunao</FirstName>
        <LastName>Hara</LastName>
        <Affiliation/>
      </Author>
      <Author>
        <FirstName EmptyYN="N">Masanobu</FirstName>
        <LastName>Abe</LastName>
        <Affiliation/>
      </Author>
      <Author>
        <FirstName EmptyYN="N">Noboru</FirstName>
        <LastName>Sonehara</LastName>
        <Affiliation/>
      </Author>
    </AuthorList>
    <PublicationType/>
    <ArticleIdList>
      <ArticleId IdType="doi"/>
    </ArticleIdList>
    <Abstract>This paper presents a sound collection system to visualize environmental sounds that are collected using a crowd-sourcing approach. An analysis of physical features is generally used to analyze sound properties; however, human beings not only analyze but also emotionally connect to sounds. If we want to visualize the sounds according to the characteristics of the listener, we need to collect not only the raw sound, but also the subjective feelings associated with them. For this purpose, we developed a sound collection system using a crowdsourcing approach to collect physical sounds, their statistics, and subjective evaluations simultaneously. We then conducted a sound collection experiment using the developed system on ten participants. We collected 6,257 samples of equivalent loudness levels and their locations, and 516 samples of sounds and their locations. Subjective evaluations by the participants are also included in the data. Next, we tried to visualize the sound on a map. The loudness levels are visualized as a color map and the sounds are visualized as icons which indicate the sound type. Finally, we conducted a discrimination experiment on the sound to implement a function of automatic conversion from sounds to appropriate icons. The classifier is trained on the basis of the GMM-UBM (Gaussian Mixture Model and Universal Background Model) method. Experimental results show that the F-measure is 0.52 and the AUC is 0.79.</Abstract>
    <CoiStatement>No potential conflict of interest relevant to this article was reported.</CoiStatement>
    <ObjectList/>
    <ReferenceList/>
  </Article>
</ArticleSet>
