International Impact
Chair, 11th International Seminar on Speech Production (ISSP), October 2017, Tianjin, China
Chair, 10th International Symposium on Chinese Spoken Language Processing (ISCSLP), October 2016, Tianjin, China
Keynote speaker, 9th International Symposium on Chinese Spoken Language Processing (ISCSLP), September 2014, Singapore
Associate Editor, Frontiers in Language Sciences, 2022-present
Honors and Awards
1988: Third Prize, Tianjin Science and Technology Progress Award, for "Chinese Speech Synthesis System"; Jianwu Dang (sole awardee)
1994: Scientific Invention Award, Advanced Telecommunications Research Institute International (ATR), Japan; Jianwu Dang and Kiyoshi Honda
2014: Second Prize, National Teaching Achievement Award, for "Integrating Disciplinary Teaching to Enhance Computational Thinking: Reform and Practice of Fundamental Computer Education"; Jianwu Dang (ranked sixth)
2020: Outstanding Doctoral Dissertation of Tianjin University, Fengyu Guo, thesis: "Implicit Discourse Relation Recognition by Fusing Cues at Different Levels" (advisor: Jianwu Dang)
2022: Outstanding Doctoral Dissertation of Tianjin University, Xiaobao Wang, thesis: "Multi-stage Emotion Propagation Mechanisms in Online Social Networks" (advisors: Jianwu Dang and Di Jin)
Research Achievements
2004-2023: In Japan, served as principal investigator of several national-level major projects and JSPS projects, including "Research on Speech Synthesis", "Modeling of Speech Production and Its Application to Speech Disorders", "Neural Mechanisms of Emotional Speech Representation", and "Neural Mechanisms of Speech Production and Perception"; total funding over 50 million JPY. Also participated as the second principal investigator in several national-level projects with total funding over 135 million JPY.
2013: National Basic Research Program of China (973 Program) project "Fundamental Theories and Methods of Chinese Information Processing in the Internet Environment", 32.8 million CNY, Chief Scientist.
2013: NSFC Key Program "Neurophysiological Modeling and Control of the Speech Production Process", 3 million CNY, Principal Investigator.
2023: NSFC General Program "Neural Mechanisms and Modeling of Intention Understanding in Spoken Interaction", 0.55 million CNY, Principal Investigator.
Selected Journal Papers
1. Z. Li, G. Zhang*, S. Okada, L. Wang, B. Zhao, J. Dang* (2024). MBCFNet: A Multimodal Brain-Computer Fusion Network for human intention recognition. Knowledge-Based Systems, 296, 111826. (IF: 7.2; CAS Q1)
2. Y. Liao, Y. Liu, S. Liao*, Q. Hu, J. Dang (2024). Theoretical analysis of divide-and-conquer ERM: From the perspective of multi-view. Information Fusion, 103, 102087. (IF: 18.6; CAS Q1)
3. Y. Lin, L. Wang*, J. Dang, S. Li, C. Ding (2023). Disordered Speech Recognition Considering Low Resources and Abnormal Articulation. Speech Communication, 155, 103002. (IF: 3.2; CAS Q3)
4. Lili Guo, Shifei Ding*, Longbiao Wang*, Jianwu Dang (2023). DSTCNet: Deep Spectro-Temporal-Channel Attention Network for Speech Emotion Recognition. IEEE Transactions on Neural Networks and Learning Systems, 1-10. (IF: 10.4; CAS Q1; JCR Q1)
5. Y. Gao, L. Wang*, J. Liu, J. Dang, S. Okada (2023). Adversarial Domain Generalized Transformer for Cross-Corpus Speech Emotion Recognition. IEEE Transactions on Affective Computing, 1-12. (IF: 11.2; CAS Q2; JCR Q1)
6. B. Zhao, G. Zhang*, L. Wang, J. Dang* (2023). Multimodal evidence for predictive coding in sentence oral reading. Cerebral Cortex, bhad145. (IF: 3.7; CAS Q2; JCR Q1)
7. H. Zhang, L. Wang*, K. A. Lee, M. Liu, J. Dang, H. Meng (2023). Meta-generalization for domain-invariant speaker verification. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 31, 1024-1036. (IF: 5.4; CAS Q2; JCR Q1)
8. Z. Li, G. Zhang*, L. Wang, J. Wei, J. Dang* (2023). Emotion recognition using spatial-temporal EEG features through convolutional graph attention network. Journal of Neural Engineering, 20(1), 016046. (IF: 4.0; CAS Q2; JCR Q2)
9. K. Khysru, J. Wei*, J. Dang (2022). Research on Tibetan Speech Recognition Based on the Am-do Dialect. Computers, Materials & Continua, 73(3). (IF: 3.1; CAS Q3; JCR Q2)
10. Y. Wang, D. Jin*, D. He, K. Musial, J. Dang (2022). Community Detection in Social Networks Considering Social Behaviors. IEEE Access, 10, 109969-109982. (IF: 3.9; CAS Q3; JCR Q2)
11. D. Zhou, G. Zhang*, J. Dang*, M. Unoki, X. Liu (2022). Detection of brain network communities during natural speech comprehension from functionally aligned EEG sources. Frontiers in Computational Neuroscience, 16, 919215. (IF: 3.2; CAS Q4; JCR Q2)
12. L. Guo, L. Wang*, J. Dang*, E. S. Chng, S. Nakagawa (2022). Learning affective representations based on magnitude and dynamic relative phase information for speech emotion recognition. Speech Communication, 136, 118-127. (IF: 3.2; CAS Q3; JCR Q2)
13. S. Song, C. Ma, W. Sun, J. Xu, J. Dang, Q. Yu* (2021). Efficient learning with augmented spikes: A case study with image classification. Neural Networks, 142, 205-212. (IF: 7.8; CAS Q1; JCR Q1)
14. Z. Peng, J. Dang*, M. Unoki, M. Akagi (2021). Multi-resolution modulation-filtered cochleagram feature for LSTM-based dimensional emotion recognition from speech. Neural Networks, 140, 261-273. (IF: 7.8; CAS Q1; JCR Q1)
15. M. Liu, L. Wang*, J. Dang, K. A. Lee, S. Nakagawa (2021). Replay attack detection using variable-frequency resolution phase and magnitude features. Computer Speech & Language, 66, 101161. (IF: 4.3; CAS Q3; JCR Q3)
16. Q. Yu, C. Ma, S. Song, G. Zhang, J. Dang, K. C. Tan (2021). Constructing accurate and efficient deep spiking neural networks with double-threshold and augmented schemes. IEEE Transactions on Neural Networks and Learning Systems, 33(4), 1714-1726. (IF: 10.4; CAS Q1; JCR Q1)
17. S. Peng, Q. Hu, J. Dang, W. Wang (2020). Optimal feasible step-size based working set selection for large scale SVMs training. Neurocomputing, 407, 366-375. (IF: 6.0; CAS Q2)
18. Q. Yu, S. Li, H. Tang, L. Wang, J. Dang, K. C. Tan (2020). Toward Efficient Processing and Learning With Spikes: New Approaches for Multispike Learning. IEEE Transactions on Cybernetics. (IF: 9.4; CAS Q1)
19. Gaoyan Zhang, Yuke Si, Jianwu Dang* (2019). Revealing the Dynamic Brain Connectivity from Perception of Speech Sound to Semantic Processing by EEG. Neuroscience, 415, 70-76. (IF: 2.9; CAS Q3)
20. Wei Feng, Xuecheng Nie, Yujun Zhang, Zhi-Qiang Liu, Jianwu Dang (2019). Story co-segmentation of Chinese broadcast news using weakly-supervised semantic similarity. Neurocomputing, 355, 121-133. (IF: 6.0; CAS Q2)
21. Z. Peng, Q. Hu, J. Dang* (2019). Multi-kernel SVM based depression recognition using social media data. International Journal of Machine Learning and Cybernetics, 10(1), 43-57. (IF: 3.1; CAS Q3)
22. Wei Feng, Xuecheng Nie, Yujun Zhang, Lei Xie, J. Dang (2018). Unsupervised measure of Chinese lexical semantic similarity using correlated graph model for news story segmentation. Neurocomputing, 318, 236-247. (IF: 6.0; CAS Q2)
23. B. Zhao, J. Dang*, G. Zhang* (2017). EEG Source Reconstruction Evidence for the Noun-Verb Neural Dissociation along Semantic Dimensions. Neuroscience, 359.
24. J. Dang*, J. Wei, K. Honda, T. Nakai (2016). A study on transvelar coupling for non-nasalized sounds. Journal of the Acoustical Society of America, 139(1), 441-454. (IF: 2.9; CAS Q3)
25. D. Ying, Y. Yan, J. Dang, F. Soong (2011). Voice Activity Detection Based On An Unsupervised Learning Framework. IEEE Transactions on Audio, Speech, and Language Processing, 19(8), 2624-2633. (IF: 4.1; CAS Q2)
26. X. Lu, J. Dang (2008). An investigation of dependencies between frequency components and speaker characteristics for text-independent speaker identification. Speech Communication, 50, 312-322. (IF: 3.2; CAS Q3)
27. J. Dang, K. Honda (2004). Construction and control of a physiological articulatory model. Journal of the Acoustical Society of America, 115(2), 853-870. (IF: 2.9; CAS Q3)
28. J. Dang, K. Honda (2002). Estimation of vocal tract shape from sounds via a physiological articulatory model. Journal of Phonetics, 30, 511-532. (IF: 1.9; CAS Q1)
No. / Patent No. / Term of Protection / Patent Title / Country of Grant / Inventor Rank
1 / ZL201910152781.9 / 2021-2041 / End-to-end Lhasa-dialect speech recognition method based on Tibetan script components / China / 1
2 / ZL201910166373.9 / 2021-2041 / Environment-adaptive speech enhancement algorithm based on an attention-driven recurrent convolutional network / China / 3
3 / ZL201910140461.1 / 2021-2041 / Speech dereverberation method using deep features based on generative adversarial networks / China / 3
4 / ZL201910087795.7 / 2021-2041 / Recording spoofing detection method based on amplitude and phase feature extraction with adaptive filters / China / 3
5 / ZL201910143499.4 / 2021-2041 / Robust sound recognition method based on key-point encoding and convolutional neural networks / China / 4