
Lab 33 was about creating a webpage that recognizes the user's speech and displays it on the page. I created a div in the HTML that would later hold the text of whatever the user said. The rest was JS, starting with window.SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition; which picks whichever implementation the browser provides (Chrome only ships the webkit-prefixed version). Next, we create a recognition object with const recognition = new SpeechRecognition(); . Once this is done, we add an event listener to this object so that when it hears the user speak, it takes the words the user is saying and adds them into the div.
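The steps above can be sketched roughly like this; the div id "transcript" is an assumption, since the lab doesn't name the actual element:

```javascript
// Pick whichever implementation the browser provides (Chrome ships the
// webkit-prefixed one); in a non-browser environment this stays null.
const SpeechRecognitionImpl =
  (typeof window !== 'undefined' &&
    (window.SpeechRecognition || window.webkitSpeechRecognition)) || null;

if (SpeechRecognitionImpl) {
  const recognition = new SpeechRecognitionImpl();

  // Fires each time the recognizer produces text from the user's speech.
  recognition.addEventListener('result', (event) => {
    // Join all recognized phrases so far into one string.
    const transcript = Array.from(event.results)
      .map((result) => result[0].transcript)
      .join(' ');
    // Display the spoken words in the div created in the HTML
    // (assumed to have id="transcript").
    document.getElementById('transcript').textContent = transcript;
  });

  // Start listening to the microphone (the browser will ask permission).
  recognition.start();
}
```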
These skills are important for web designers because they can make a site more convenient to use. A website that recognizes what someone is saying can use if statements to trigger certain actions when the user says certain phrases. If a site has multiple pages to click through before reaching the specific page the user wants, speech recognition lets the user simply say where they want to go and the website will load that page. Just like in the previous lab, this isn't a heavily used feature because of security and privacy concerns, but when it is used, it can make a site more convenient.
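The phrase-to-page idea could look something like this minimal sketch; the command list and the matchCommand helper are hypothetical examples, not part of the lab:

```javascript
// Hypothetical map of spoken phrases to pages on the site.
const commands = {
  'go home': '/index.html',
  'open contact': '/contact.html',
  'show projects': '/projects.html',
};

// Normalize a recognized transcript and look up a matching command.
// Returns the page URL, or null when no command matches.
function matchCommand(transcript) {
  const phrase = transcript.trim().toLowerCase();
  return commands[phrase] || null;
}

// In the browser, the result handler would then do something like:
//   const page = matchCommand(transcript);
//   if (page) window.location.href = page;
```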