
Robust Speech Processing in EW Environment

International Journal of Computer Applications
© 2012 by IJCA Journal
Volume 38 - Number 11
Year of Publication: 2012
Authors:
Akella Amarendra Babu
Ramadevi Yellasiri
Nagaratna P. Hegde
DOI: 10.5120/4750-6949

Akella Amarendra Babu, Ramadevi Yellasiri and Nagaratna P. Hegde. Article: Robust Speech Processing in EW Environment. International Journal of Computer Applications 38(11):46-50, January 2012. Full text available.

@article{key:article,
	author = {Akella Amarendra Babu and Ramadevi Yellasiri and Nagaratna P. Hegde},
	title = {Article: Robust Speech Processing in EW Environment},
	journal = {International Journal of Computer Applications},
	year = {2012},
	volume = {38},
	number = {11},
	pages = {46-50},
	month = {January},
	note = {Full text available}
}

Abstract

Speech communication in an Electronic Warfare (EW) environment should be resistant to interception and masquerade, and tolerant of communication channel errors. In this paper, we describe an algorithm that provides speech compression, strong encryption, error tolerance and speaker authentication. The resulting Robust Speech Coder (RSC) is backward compatible with existing codecs, with the capability to opt for additional features as and when required.
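The abstract names four features (compression, encryption, error tolerance, speaker authentication) without detailing the coder itself. The sketch below is a minimal, hypothetical per-frame pipeline showing how such features could be layered around an already-compressed speech frame; the counter-mode keystream, HMAC-based speaker tag, repetition code, and all function names are illustrative assumptions, not the RSC algorithm described in the paper.

```python
# Hypothetical per-frame protection pipeline (illustrative only; not the
# paper's RSC algorithm). Assumes the speech frame has already been
# compressed by an existing codec.
import hashlib
import hmac
import os
from typing import Optional


def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Illustrative SHA-256 counter-mode keystream (stand-in for a real cipher)."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(out[:length])


def protect_frame(compressed: bytes, enc_key: bytes, auth_key: bytes) -> bytes:
    """Encrypt a compressed speech frame, append a speaker-authentication tag,
    and add a naive repetition code for channel-error tolerance."""
    nonce = os.urandom(8)
    cipher = bytes(c ^ k for c, k in zip(compressed, keystream(enc_key, nonce, len(compressed))))
    tag = hmac.new(auth_key, nonce + cipher, hashlib.sha256).digest()[:8]
    packet = nonce + cipher + tag
    return packet * 3  # 3x repetition as a placeholder for real FEC


def recover_frame(received: bytes, enc_key: bytes, auth_key: bytes) -> Optional[bytes]:
    """Majority-vote the repetitions, verify the speaker tag, then decrypt."""
    third = len(received) // 3
    copies = [received[i * third:(i + 1) * third] for i in range(3)]
    packet = bytes(max(set(col), key=col.count) for col in zip(*copies))
    nonce, cipher, tag = packet[:8], packet[8:-8], packet[-8:]
    expected = hmac.new(auth_key, nonce + cipher, hashlib.sha256).digest()[:8]
    if not hmac.compare_digest(tag, expected):
        return None  # speaker authentication failed
    return bytes(c ^ k for c, k in zip(cipher, keystream(enc_key, nonce, len(cipher))))
```

In a fielded EW-grade coder the placeholder keystream would be replaced by a vetted cipher and the repetition code by a proper forward-error-correction scheme; the sketch only illustrates how the four features compose per frame.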
