Neural Networks with Keras Cookbook

Classifying common audio

In the previous sections, we learned strategies for modeling structured datasets as well as unstructured text data.

In this section, we will learn how to perform a classification exercise where the input is raw audio.

The strategy we will adopt is to extract features from the input audio so that each audio signal is represented as a fixed-length feature vector.

There are multiple ways of extracting features from an audio signal; for this exercise, we will extract the Mel Frequency Cepstral Coefficients (MFCCs) of each audio file, as sketched in the example below.
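The following is a minimal sketch of how such a fixed-length MFCC vector could be computed using the librosa library; the function name, the file path, and the choice of 40 coefficients are illustrative assumptions, not the book's exact code:

```python
import numpy as np
import librosa

def extract_mfcc(file_path, n_mfcc=40):
    # Load the audio clip; librosa resamples to 22,050 Hz by default
    y, sr = librosa.load(file_path)
    # Compute MFCCs: the result has shape (n_mfcc, number_of_frames)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    # Average across frames so that every clip becomes a fixed-length vector
    return np.mean(mfcc, axis=1)

# Hypothetical usage on a single file:
# features = extract_mfcc('audio/sample_0001.wav')  # -> vector of 40 values
```

Averaging the coefficients over time is one simple way to obtain a vector of fixed size regardless of the clip's duration.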

Once we have extracted the features, we will perform the classification exercise in much the same way as we built the model for MNIST classification, with hidden layers connecting the input and output layers.
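As a rough sketch of such a model, assuming 40 MFCC features per clip and the ten output classes mentioned below (the layer sizes and training settings are placeholders, not the book's exact configuration):

```python
from keras.models import Sequential
from keras.layers import Dense

# Assumed shapes: 40 MFCC features per clip, 10 output classes
model = Sequential()
model.add(Dense(100, activation='relu', input_dim=40))
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# X is the (num_samples, 40) feature matrix, y the one-hot encoded labels:
# model.fit(X, y, epochs=50, batch_size=32, validation_split=0.1)
```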

In the following section, we will perform classification on an audio dataset with ten possible output classes.