Before any training can begin, ML techniques in general, and DL techniques in particular, require a set of parameters to be chosen. These are referred to as hyperparameters. Focusing on DL, some of them (the number of layers and their size) define the architecture of a neural network, while others define the learning process (the learning rate, regularization, and so on). Hyperparameter optimization attempts to automate this process, which has a significant impact on the results achieved when training a neural network, using dedicated software that applies search strategies. DL4J provides a tool, Arbiter, for hyperparameter optimization of neural networks. This tool doesn't fully automate the process; manual intervention from data scientists or developers is needed in order to specify the search spaces, that is, the ranges of valid values for each hyperparameter.
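As an example, here is a minimal sketch of what specifying such search spaces looks like with Arbiter. The class and package names follow the public Arbiter examples and may differ across DL4J versions; the ranges chosen here, a learning rate between 0.0001 and 0.1 and a hidden layer size between 16 and 256 units, are purely illustrative:

import org.deeplearning4j.arbiter.MultiLayerSpace;
import org.deeplearning4j.arbiter.conf.updater.SgdSpace;
import org.deeplearning4j.arbiter.layers.DenseLayerSpace;
import org.deeplearning4j.arbiter.layers.OutputLayerSpace;
import org.deeplearning4j.arbiter.optimize.api.CandidateGenerator;
import org.deeplearning4j.arbiter.optimize.api.ParameterSpace;
import org.deeplearning4j.arbiter.optimize.generator.RandomSearchGenerator;
import org.deeplearning4j.arbiter.optimize.parameter.continuous.ContinuousParameterSpace;
import org.deeplearning4j.arbiter.optimize.parameter.integer.IntegerParameterSpace;
import org.deeplearning4j.nn.weights.WeightInit;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class ArbiterSearchSpaceSketch {
    public static void main(String[] args) {
        // The search spaces: instead of fixed values, we declare the ranges
        // that Arbiter is allowed to explore for each hyperparameter.
        ParameterSpace<Double> learningRate = new ContinuousParameterSpace(0.0001, 0.1);
        ParameterSpace<Integer> layerSize = new IntegerParameterSpace(16, 256);

        // A MultiLayerSpace is the analogue of a MultiLayerConfiguration,
        // but with parameter spaces in place of fixed hyperparameter values.
        MultiLayerSpace hyperparameterSpace = new MultiLayerSpace.Builder()
                .weightInit(WeightInit.XAVIER)
                .l2(0.0001)
                .updater(new SgdSpace(learningRate))     // the learning rate is searched
                .addLayer(new DenseLayerSpace.Builder()
                        .nIn(784)                        // e.g. MNIST input size (illustrative)
                        .nOut(layerSize)                 // the layer size is searched
                        .activation(Activation.LEAKYRELU)
                        .build())
                .addLayer(new OutputLayerSpace.Builder()
                        .nOut(10)
                        .activation(Activation.SOFTMAX)
                        .lossFunction(LossFunctions.LossFunction.MCXENT)
                        .build())
                .build();

        // A candidate generator picks concrete configurations from the space;
        // random search is the simplest built-in strategy.
        CandidateGenerator candidateGenerator =
                new RandomSearchGenerator(hyperparameterSpace, null);
    }
}

Arbiter then draws candidate configurations from the declared space (here via random search; a grid search generator is also available), trains and scores each one, and reports the best configuration found.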