Questions
- What happens if the input values are not scaled in the input dataset?
- What could happen when training a neural network if the background pixels are white while the content pixels are black?
- What is the impact of batch size on a model’s training time and memory?
- What impact does the input value range have on the weight distribution at the end of training?
- How does batch normalization help improve accuracy?
- Why do weights behave differently during training and evaluation in the dropout layer?
- How do we know if a model has overfitted on training data?
- How does regularization help in avoiding overfitting?
- How do L1 and L2 regularization differ from each other?
- How does dropout help in reducing overfitting?
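The dropout question above (why weights behave differently during training and evaluation) can be illustrated with a minimal sketch of inverted dropout, the variant most frameworks implement; the function name and values here are illustrative, not from the chapter:

```python
import random

def dropout(x, p, training):
    """Inverted dropout: during training, zero each unit with probability p
    and scale survivors by 1/(1-p) so the expected activation is unchanged;
    during evaluation, pass inputs through untouched."""
    if not training:
        return list(x)
    return [0.0 if random.random() < p else v / (1 - p) for v in x]

random.seed(0)
x = [1.0] * 8
train_out = dropout(x, p=0.5, training=True)   # each value is either 0.0 or 2.0
eval_out = dropout(x, p=0.5, training=False)   # unchanged: all 1.0
```

Because survivors are rescaled at training time, no extra scaling is needed at evaluation time, which is why calling the layer in eval mode simply passes activations through.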
Learn more on Discord
Join our community’s Discord space for discussions with the authors and other readers: