
DESIGN AND IMPLEMENTATION OF TEXT TO SPEECH APPLICATION FOR VISION IMPAIRED STUDENTS (A CASE STUDY OF PACELLI SCHOOL FOR THE BLIND AND PARTIALLY SIGHTED, SURULERE LAGOS)


Abstract

...

1.1       Background of the Study

As society continues to develop, there has been growing support for people living with disabilities. One urgent need is the guarantee of mobility for blind people; despite many efforts, it is still not easy for blind people to move about independently. As electronic technology has improved, research into electronic aids (EA) for blind people has begun. As a current example, Human Tech of Japan developed a navigation system for blind people using GPS and a cell phone. This system consists of the user’s cell phone, a subminiature GPS receiver, a magnetic direction sensor, a control unit and speech synthesis equipment, together with a PC at a base station. Text-to-speech has been available for decades (since 1939). Unfortunately, the quality of the output, especially in terms of naturalness, has historically been sub-optimal; terms such as “robotic” have been used to describe synthetic speech. Recently, the overall quality of text-to-speech from some vendors has improved dramatically. The improvement is evident not only in the remarkable naturalness of inflection and intonation, but also in the ability to process text such as numbers, abbreviations and addresses in the appropriate context.

 

Text-to-speech (TTS) is a type of speech synthesis application that is used to create a spoken sound version of the text in a computer document, such as a help file or a Web page. TTS can enable the reading of computer display information for the visually challenged person, or may simply be used to augment the reading of a text message. Current TTS applications include voice-enabled e-mail and spoken prompts in voice response systems.
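To illustrate what such an application does at its simplest, the sketch below uses the open-source pyttsx3 Python library as the speech engine. This is only an assumption for illustration; the project itself does not specify an implementation language or engine. The sketch reads a short piece of text aloud through the computer’s built-in voice.

    # Minimal sketch, assuming the pyttsx3 library (pip install pyttsx3) is available.
    # It drives the platform's built-in speech engine to read a string aloud.
    import pyttsx3

    engine = pyttsx3.init()              # initialise the default speech engine on this machine
    engine.setProperty("rate", 150)      # speaking rate in words per minute
    engine.say("Welcome to the text to speech application.")
    engine.runAndWait()                  # block until the sentence has been spoken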

 

Long before electronic signal processing was invented, there were those who tried to build machines to create human speech. Some early legends of the existence of “brazen heads” involved Silvester (2013), Albertus (2013) and Roger (2013). In 1779, the Danish scientist Christian Kratzenstein, working at the Russian Academy of Sciences, built models of the human vocal tract that could produce the five long vowel sounds (in International Phonetic Alphabet notation, [a], [e], [i], [o] and [u]). This was followed by the bellows-operated “acoustic-mechanical speech machine” of Wolfgang von Kempelen of Pressburg, Hungary, described in a 1791 paper. This machine added models of the tongue and lips, enabling it to produce consonants as well as vowels. According to Charles (2016), Wheatstone produced a “speaking machine” based on von Kempelen’s design, and in 1857 M. Faber built the “Euphonia”. Wheatstone’s design was resurrected in 1923 by Paget.

 

In the 1930s, Bell Labs developed the vocoder, which automatically analyzed speech into its fundamental tone and resonances. From his work on the vocoder, Homer Dudley developed a keyboard-operated voice synthesizer called the Voder (Voice Demonstrator), which he exhibited at the 1939 New York World’s Fair. The Pattern Playback was built by Dr. Franklin S. Cooper and his colleagues at Haskins Laboratories in the late 1940s and completed in 1950. There were several different versions of this hardware device, but only one currently survives. The machine converts pictures of the acoustic patterns of speech, in the form of a spectrogram, back into sound. Using this device, Allen et al. (2017) were able to discover acoustic cues for the perception of phonetic segments (consonants and vowels).

 

1.2       Statement of the Problem

The challenge that led to this project work is that blind users find it difficult to know exactly which words they are typing. Even when they know the keyboard very well, they can only assume that what they have typed is correct, and they often end up making many mistakes in their typed work. This led to the development of this project, a Text to Speech Application.

 

1.3       Aim/Objectives of the Study

The aim of this research is to design and implement a text-to-speech application for visually impaired students. The main objectives of this project are to:

  1. Create an application that will convert text to speech so that visually impaired students know exactly what they are typing and presenting on the computer system (a minimal sketch of this behaviour follows this list).
  2. Assure visually impaired students of what they have typed and enable them to detect and correct any typographical errors in their work.
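The sketch below illustrates the first objective under the same assumption as the earlier example (the pyttsx3 engine): each line the student types is immediately read back aloud, so that typographical errors can be heard and corrected. The function name and prompt text are illustrative only and do not come from the project itself.

    # Hypothetical sketch: read each typed line back to the user so they can
    # hear exactly what was entered. Assumes pyttsx3 is installed.
    import pyttsx3

    def echo_typed_text() -> None:
        engine = pyttsx3.init()
        print("Type a line and press Enter (empty line to quit).")
        while True:
            line = input("> ")
            if not line:
                break
            engine.say(line)        # speak the line exactly as typed
            engine.runAndWait()

    if __name__ == "__main__":
        echo_typed_text()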

 

1.4       Scope of the Study

The scope of this research work covers converting text into spoken words: the text is first analysed and processed using Natural Language Processing (NLP), and Digital Signal Processing (DSP) techniques are then used to convert the processed text into a synthesized speech representation of the text.
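The sketch below illustrates these two stages under the same assumptions as the earlier examples: a small text-normalization step stands in for the NLP stage (expanding a few abbreviations and spelling out digits), and pyttsx3 stands in for the DSP synthesis stage. The normalization rules shown are toy examples, not the project’s actual rules.

    # Illustrative two-stage pipeline: (1) a toy NLP normalization pass,
    # (2) a synthesis back end (pyttsx3) standing in for the DSP stage.
    import re
    import pyttsx3

    ABBREVIATIONS = {"dr.": "doctor", "st.": "street", "no.": "number"}  # toy examples

    def normalize(text: str) -> str:
        """Expand a few abbreviations and spell out digits (toy NLP stage)."""
        words = [ABBREVIATIONS.get(word.lower(), word) for word in text.split()]
        text = " ".join(words)
        digit_names = "zero one two three four five six seven eight nine".split()
        return re.sub(r"\d", lambda m: digit_names[int(m.group())] + " ", text)

    def speak(text: str) -> None:
        """Hand the normalized text to the synthesis engine (DSP stage)."""
        engine = pyttsx3.init()
        engine.say(normalize(text))
        engine.runAndWait()

    if __name__ == "__main__":
        speak("Dr. Ade lives at No. 5 Bode St.")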

 

1.5       Significance of the Study

The significance of this project work is that it serves as a helping tool for vision impaired students by providing a text to speech synthesis application. Blind students will be able to use the software to voice out what they have typed.

 

1.6       Limitation of the Study

The limitations encountered in this research work include:

  1. Limited time to carry out research on the subject, and therefore not enough time to gather information for this research work.
  2. The epileptic nature of power supply in the country: after gathering the available materials and information for this work, shortages of power supply made it difficult to organize the work.
  3. Finance: carrying out a research work requires money, and finance was one of the greatest challenges faced during this project.

 

1.7       Definition of Terms

GPS:                           Global Positioning System, a radio navigation system (Amos, 2019).

Phonetic:                    Relating to the sounds of spoken language (Dawson, 2016).

Robot:                         Robot is a machine built to carry out some complex task or group of tasks especially one which can be programmed (Karel, 2013).

Text:                           A piece of writing consisting of multiple glyphs, characters, symbols or sentences (Culler, 2014).

Speech:                       Speech is the ability to speak or to use vocalization to communicate (Mittal, 2017).


