Please use this identifier to cite or link to this item: http://repository.iitr.ac.in/handle/123456789/21700
Full metadata record
DC Field    Value    Language
dc.contributor.author    Padhy S.    -
dc.contributor.author    Tiwari J.    -
dc.contributor.author    Rathore S.    -
dc.contributor.author    Kumar, Neetesh Sharath    -
dc.date.accessioned    2022-03-02T11:41:02Z    -
dc.date.available    2022-03-02T11:41:02Z    -
dc.date.issued    2019    -
dc.identifier.citation    2019 IEEE Conference on Information and Communication Technology, CICT 2019 (2019)    -
dc.identifier.isbn    9.78173E+12    -
dc.identifier.uri    https://doi.org/10.1109/CICT48419.2019.9066252    -
dc.identifier.uri    http://repository.iitr.ac.in/handle/123456789/21700    -
dc.description.abstract    Hearing-impaired people face many challenges, particularly during emergencies, which makes them dependent on others. Emergency situations are mostly perceived through auditory cues, raising the need for systems that sense emergency sounds and communicate them to the deaf effectively. The present study differentiates emergency audio signals from non-emergency sounds using a multi-channel convolutional neural network (CNN). Various data augmentation techniques were explored, with particular attention to Mixup, in order to improve the performance of the model. The experimental results showed a cross-validation accuracy of 88.28% and a testing accuracy of 88.09%. To bring the model into the daily lives of the hearing impaired, an Android application was developed that makes the phone vibrate whenever an emergency sound is detected. The app can be connected to an Android Wear device such as a smartwatch, which stays with the wearer at all times, effectively making them aware of emergency situations. © 2019 IEEE.    -
dc.language.iso    en_US    -
dc.publisher    Institute of Electrical and Electronics Engineers Inc.    -
dc.relation.ispartof    2019 IEEE Conference on Information and Communication Technology, CICT 2019    -
dc.subject    Assistive technology    -
dc.subject    Audio data augmentation    -
dc.subject    Convolutional neural networks    -
dc.subject    Mel spectrograms    -
dc.subject    Mixup    -
dc.subject    Multi-channel    -
dc.subject    Sound classification    -
dc.title    Emergency signal classification for the hearing impaired using multi-channel convolutional neural network architecture    -
dc.type    Conference Paper    -
dc.scopusid    57216695823    -
dc.scopusid    57216695224    -
dc.scopusid    57216694483    -
dc.scopusid    57207838186    -
dc.affiliation    Padhy, S., Information Technology, ABV-IIITM Gwalior, Madhya Pradesh, 474015, India    -
dc.affiliation    Tiwari, J., Information Technology, ABV-IIITM Gwalior, Madhya Pradesh, 474015, India    -
dc.affiliation    Rathore, S., Information Technology, ABV-IIITM Gwalior, Madhya Pradesh, 474015, India    -
dc.affiliation    Kumar, N., ABV-IIITM Gwalior, Department of Information Technology, Madhya Pradesh, 474015, India    -
dc.identifier.conferencedetails    2019 IEEE Conference on Information and Communication Technology, CICT 2019, 6-8 December 2019    -
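The Mixup augmentation mentioned in the abstract blends pairs of training examples and their labels with a Beta-distributed coefficient. Below is a minimal, illustrative sketch of that idea for mel-spectrogram inputs; it is not the paper's implementation, and the function name and parameters are assumptions for illustration only.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Blend two examples (e.g. mel spectrograms) and their one-hot labels.

    The mixing coefficient lam is drawn from Beta(alpha, alpha), so mixed
    samples are convex combinations of the originals (Mixup augmentation).
    Note: this is an illustrative sketch, not the paper's exact method.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing weight in [0, 1]
    x = lam * x1 + (1.0 - lam) * x2       # blended input
    y = lam * y1 + (1.0 - lam) * y2       # blended (soft) label
    return x, y, lam
```

In training, pairs are typically drawn by shuffling a batch against itself, and the blended soft labels are used directly with a cross-entropy loss.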
Appears in Collections:Conference Publications [CS]

Files in This Item:
There are no files associated with this item.


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.