Simulator/Simulator Results Analyzer Usage Scenarios

Microsoft Speech Platform SDK 11


Simulator takes as input an EMMA document that references a set of utterances (for example ".wav" files), associated grammars, and optional transcriptions. Configured against a specific recognizer, the tool returns a set of recognition results.
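The following Python sketch shows one way such an input document might be assembled, with a single utterance, a grammar reference, and a transcription. The element and attribute names follow the W3C EMMA 1.0 vocabulary, but the exact schema that Simulator expects, the use of emma:tokens to carry a transcription, and all file paths shown are assumptions for illustration only.

    # A minimal sketch of an EMMA input for Simulator. The schema Simulator
    # actually expects may differ; paths and the grammar URI are placeholders.
    EMMA_NS = "http://www.w3.org/2003/04/emma"

    emma_input = f"""<?xml version="1.0" encoding="utf-8"?>
    <emma:emma version="1.0" xmlns:emma="{EMMA_NS}">
      <emma:grammar id="g1" ref="file:///C:/grammars/MainMenu.grxml"/>
      <emma:interpretation id="utt1"
                           emma:grammar-ref="g1"
                           emma:signal="file:///C:/audio/utt1.wav"
                           emma:tokens="check my balance"/>
    </emma:emma>
    """

    # Save the document so that it can be passed to Simulator as input.
    with open("simulator-input.emma", "w", encoding="utf-8") as f:
        f.write(emma_input)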

You can use Simulator together with Simulator Results Analyzer to debug and tune the components of speech recognition. The two tools play complementary roles:

  • Simulator is primarily concerned with performing recognitions and producing raw results.

  • Simulator Results Analyzer is primarily concerned with extracting actionable measurements from the raw results produced by Simulator.

Usage Scenarios

The following examples describe scenarios in which you can use Simulator and Simulator Results Analyzer together to generate recognition results and to analyze the factors that contribute to recognition accuracy.

Text-Based Analysis

You have a new grammar and want to measure its performance before deploying it into a live service. However, you do not have any audio files with which to test the grammar. You assemble a set of text phrases and incorporate them in an EMMA document that you can provide to Simulator as input. When you run Simulator, it performs emulated recognition on the input text phrases, generating a set of recognition results. Then, you use Simulator Results Analyzer to analyze the results obtained by emulation.
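As a rough illustration of this scenario, the sketch below (making the same EMMA 1.0 assumptions and placeholder grammar URI as the earlier example) wraps a list of test phrases as text-only interpretations, with no audio reference, ready for emulated recognition.

    # Sketch: wrap test phrases as text-only EMMA interpretations for
    # emulated recognition (no emma:signal, since there is no audio).
    # Element names follow W3C EMMA 1.0; the grammar URI is a placeholder.
    phrases = ["check my balance", "transfer funds", "speak to an agent"]

    entries = "\n".join(
        f'  <emma:interpretation id="utt{i}" emma:grammar-ref="g1" '
        f'emma:tokens="{p}"/>'
        for i, p in enumerate(phrases, start=1)
    )

    document = (
        '<?xml version="1.0" encoding="utf-8"?>\n'
        '<emma:emma version="1.0" xmlns:emma="http://www.w3.org/2003/04/emma">\n'
        '  <emma:grammar id="g1" ref="file:///C:/grammars/MainMenu.grxml"/>\n'
        f"{entries}\n"
        "</emma:emma>\n"
    )

    with open("text-input.emma", "w", encoding="utf-8") as f:
        f.write(document)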

Audio-Based Analysis

Having analyzed a grammar by providing text as input, you now want to analyze the grammar using audio as input. You obtain a set of audio files that you can use to test the grammar, and you add references to these audio files in the EMMA document that you will use as input to Simulator. When you run Simulator, you receive recognition results produced by recognizing the audio files. Then you use Simulator Results Analyzer to analyze the results obtained from audio input.
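The sketch below, again using placeholder paths and the assumed EMMA layout from the earlier examples, enumerates a folder of ".wav" files and produces one interpretation per file, each carrying an emma:signal reference to its audio.

    # Sketch: reference a folder of .wav files from an EMMA document so that
    # Simulator can recognize the audio. Paths and layout are placeholders.
    from pathlib import Path

    audio_dir = Path(r"C:\audio")

    entries = "\n".join(
        f'  <emma:interpretation id="{wav.stem}" emma:grammar-ref="g1" '
        f'emma:signal="{wav.as_uri()}"/>'
        for wav in sorted(audio_dir.glob("*.wav"))
    )

    document = (
        '<?xml version="1.0" encoding="utf-8"?>\n'
        '<emma:emma version="1.0" xmlns:emma="http://www.w3.org/2003/04/emma">\n'
        '  <emma:grammar id="g1" ref="file:///C:/grammars/MainMenu.grxml"/>\n'
        f"{entries}\n"
        "</emma:emma>\n"
    )

    with open("audio-input.emma", "w", encoding="utf-8") as f:
        f.write(document)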

Analysis of Transcribed Audio

You have an EMMA document that contains references to audio files. The EMMA document can be created from scratch or obtained from an utterance-capture web service. You modify the EMMA document by adding a transcription for each utterance, and then use Simulator to perform recognition on each utterance. Then, you use Simulator Results Analyzer to analyze the recognition results.
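A sketch of this annotation step is shown below. It assumes that emma:tokens is the attribute carrying the transcription and that transcriptions are keyed by interpretation id; documents obtained from an utterance-capture service may use a different convention.

    # Sketch: add a transcription to each utterance in an existing EMMA
    # document. The use of emma:tokens as the transcription carrier and the
    # file names are assumptions.
    import xml.etree.ElementTree as ET

    EMMA_NS = "http://www.w3.org/2003/04/emma"
    ET.register_namespace("emma", EMMA_NS)

    transcriptions = {
        "utt1": "check my balance",
        "utt2": "transfer funds",
    }

    tree = ET.parse("audio-input.emma")
    for interp in tree.iter(f"{{{EMMA_NS}}}interpretation"):
        utt_id = interp.get("id")
        if utt_id in transcriptions:
            interp.set(f"{{{EMMA_NS}}}tokens", transcriptions[utt_id])

    tree.write("transcribed-input.emma", encoding="utf-8", xml_declaration=True)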

The output from Simulator Results Analyzer includes correct-accept/false-accept (CA/FA) rates for the complete range of confidence-level settings on the speech recognition engine, a list of out-of-grammar phrases, and error rates that illustrate the overall accuracy of your grammars. You can use this data to create graphs and tables that highlight significant data points in the recognition results, or as input to other tools for additional analysis.
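To make the CA/FA calculation concrete, the sketch below sweeps a rejection threshold across the confidence range and reports correct-accept and false-accept rates at each setting. The result tuples are invented sample data rather than Simulator output, and the rate definitions shown are one common convention; Simulator Results Analyzer's exact definitions may differ.

    # Sketch: sweep a confidence threshold over a set of recognition results
    # and compute correct-accept (CA) and false-accept (FA) rates at each
    # setting. The tuples below are invented sample data, not Simulator output.
    results = [
        # (confidence score, recognition was correct)
        (0.92, True), (0.85, True), (0.63, False),
        (0.58, True), (0.40, False), (0.15, False),
    ]

    total = len(results)
    for threshold in [t / 10 for t in range(0, 11)]:
        accepted = [(conf, ok) for conf, ok in results if conf >= threshold]
        ca = sum(1 for _, ok in accepted if ok) / total       # correct accepts
        fa = sum(1 for _, ok in accepted if not ok) / total   # false accepts
        print(f"threshold {threshold:.1f}: CA {ca:.2f}  FA {fa:.2f}")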

Comparison of Results from Different Runs

You have a set of recognition results from a previous run of Simulator that used an older version of the application grammar. Using a newer version of the grammar as input to Simulator, you generate an updated set of recognition results. You can use Simulator Results Analyzer to produce a set of graphs that compare the relative performance of the older and newer grammars. This helps you determine whether one grammar recognizes a set of utterances more accurately than the other.
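As a minimal illustration of this kind of comparison, the sketch below contrasts two runs utterance by utterance; the per-utterance correctness values are invented sample data, not Simulator Results Analyzer output.

    # Sketch: compare two Simulator runs utterance by utterance. The values
    # below are invented sample data, not Analyzer output.
    old_run = {"utt1": True, "utt2": False, "utt3": False}   # older grammar
    new_run = {"utt1": True, "utt2": True, "utt3": False}    # newer grammar

    improved = [u for u in old_run if new_run.get(u) and not old_run[u]]
    regressed = [u for u in old_run if old_run[u] and not new_run.get(u)]

    print(f"old grammar accuracy: {sum(old_run.values()) / len(old_run):.0%}")
    print(f"new grammar accuracy: {sum(new_run.values()) / len(new_run):.0%}")
    print("improved:", improved)
    print("regressed:", regressed)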

Offsite Analysis of Run Data

As an IVR application developer, you perform a log tuning run and obtain results that are unexpectedly different from those of previous runs. You call customer support to find out why the recognition results were not as expected. You send the output from the recognition engine to the support analyst, who examines the output and determines that the engine version and configuration did not match the ones used in the previous runs.
