As mentioned earlier, we use an LSTM for use case one, and two CNN implementations (a simple CNN and MobileNet V1) for use case two. All of these DL implementations support transfer learning, so neither use case requires training a model from scratch.
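To make the transfer-learning point concrete, the following is a minimal sketch assuming the TensorFlow/Keras API; the input shape and the number of activity classes are illustrative assumptions, not values from the text. The pre-trained MobileNet V1 base is loaded with ImageNet weights and frozen, so only the new classification head is trained:

```python
import tensorflow as tf

# Load MobileNet V1 with pre-trained ImageNet weights, dropping its
# original classifier head (include_top=False).
base = tf.keras.applications.MobileNet(
    weights='imagenet', include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the base: reuse features, no training from scratch

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(6, activation='softmax'),  # assumed 6 target classes
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```

Only the pooling and dense layers are updated during training, which is what makes transfer learning far cheaper than training the full network.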
Model training
Use case one
We consider a stacked LSTM, a popular DL model for sequence prediction, including time series problems. A stacked LSTM architecture consists of two or more LSTM layers. We implemented HAR for use case one using a two-layer stacked LSTM architecture. The following diagram presents a two-layer LSTM, where the first layer provides a sequence of outputs, rather than a single output value, to the second LSTM layer:
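In code, such a two-layer stacked LSTM can be sketched as follows, assuming TensorFlow/Keras; the window length (128 time steps), the 9 sensor channels, the 64 hidden units, and the 6 activity classes are illustrative assumptions, not values from the text. The key detail is return_sequences=True on the first layer, which makes it emit an output at every time step:

```python
import tensorflow as tf

# Illustrative HAR input: windows of 128 time steps, each carrying
# 9 sensor channels (e.g., 3-axis accelerometer and gyroscope readings).
TIME_STEPS, N_FEATURES, N_CLASSES = 128, 9, 6

model = tf.keras.Sequential([
    tf.keras.Input(shape=(TIME_STEPS, N_FEATURES)),
    # First LSTM layer: return_sequences=True emits an output at every
    # time step, so the second LSTM layer receives a full sequence.
    tf.keras.layers.LSTM(64, return_sequences=True),
    # Second LSTM layer: returns only its final hidden state.
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(N_CLASSES, activation='softmax'),
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```

Leaving return_sequences at its default of False on the second layer collapses the sequence into a single hidden state, which the dense softmax layer then maps to activity classes.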
We can train...