- How are pre-trained word vectors obtained?
From an existing set of pre-trained embeddings such as GloVe or word2vec.
- How do we map from an image feature embedding to a word embedding in zero-shot learning?
By building a neural network head that outputs a vector of the same shape as the word embedding and training it with an MSE loss that compares the prediction with the actual word embedding (see the sketch below).
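A minimal sketch of that mapping, assuming PyTorch, a frozen backbone that yields 512-d image features, and 300-d GloVe-style target vectors (all dimensions and layer sizes here are illustrative assumptions, not from the source):

```python
import torch
import torch.nn as nn

class ImageToWord(nn.Module):
    """Projects image features into word-embedding space (hypothetical sizes)."""
    def __init__(self, feat_dim=512, embed_dim=300):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(feat_dim, 512),
            nn.ReLU(),
            nn.Linear(512, embed_dim),  # output has the same shape as the word embedding
        )

    def forward(self, image_features):
        return self.proj(image_features)

model = ImageToWord()
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy batch: image features plus the pre-trained word vectors of their labels.
image_features = torch.randn(32, 512)       # e.g. from a frozen CNN backbone
target_word_vectors = torch.randn(32, 300)  # e.g. GloVe vectors of the class names

pred = model(image_features)
loss = loss_fn(pred, target_word_vectors)   # compare prediction vs. actual embedding
loss.backward()
optimizer.step()
```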
- Why is the Siamese network called so?
Because it always produces two outputs from two identical, weight-sharing branches and compares them for sameness; the name refers to Siamese twins.
- How does the Siamese network come up with the similarity between the two images?
The loss function forces the network to produce outputs that are a smaller distance apart when the images are similar, and a larger distance apart when they are not (see the sketch below).
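A minimal sketch of that idea using a contrastive loss (one common choice; the source does not name the exact loss), assuming PyTorch and precomputed 512-d image features:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseNet(nn.Module):
    """Embeds two inputs with the SAME weights: the 'twin' branches."""
    def __init__(self, feat_dim=512, embed_dim=128):
        super().__init__()
        self.embed = nn.Sequential(
            nn.Linear(feat_dim, 256),
            nn.ReLU(),
            nn.Linear(256, embed_dim),
        )

    def forward(self, x1, x2):
        return self.embed(x1), self.embed(x2)

def contrastive_loss(z1, z2, label, margin=1.0):
    # label = 1 for similar pairs, 0 for dissimilar pairs.
    dist = F.pairwise_distance(z1, z2)
    similar_term = label * dist.pow(2)                            # pull similar pairs together
    dissimilar_term = (1 - label) * F.relu(margin - dist).pow(2)  # push dissimilar pairs beyond the margin
    return (similar_term + dissimilar_term).mean()

net = SiameseNet()
x1, x2 = torch.randn(8, 512), torch.randn(8, 512)  # hypothetical image features
label = torch.randint(0, 2, (8,)).float()
z1, z2 = net(x1, x2)
loss = contrastive_loss(z1, z2, label)
loss.backward()
```

Training with this loss shrinks the embedding distance for similar pairs and grows it (up to the margin) for dissimilar ones, so at inference time the distance between two outputs directly serves as a similarity score.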