We'll now discuss two important concepts in GPU programming: thread synchronization and thread intercommunication. Sometimes we need to ensure that every thread has reached the same exact line in the code before we continue with any further computation; we call this thread synchronization. Synchronization works hand in hand with thread intercommunication, that is, different threads passing data to and reading data from each other; in this case, we usually want to make sure that all of the threads are aligned at the same step of the computation before any data is passed around. We'll start here by learning about the CUDA __syncthreads device function, which is used to synchronize the threads within a single block of a kernel.
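To make the idea concrete, here is a minimal sketch of a kernel that uses __syncthreads as a block-wide barrier; the kernel name neighbor_exchange, the shared array shared_vals, and the block size of 128 are illustrative assumptions, not taken from the text. Each thread writes into shared memory, and the barrier guarantees that every write has completed before any thread reads a neighbor's slot:

```cuda
#include <cstdio>

// Illustrative kernel (hypothetical name): each thread writes its own value
// into shared memory, then reads its neighbor's value. __syncthreads()
// guarantees that every thread in the block has finished writing before
// any thread performs a read.
__global__ void neighbor_exchange(int *out)
{
    __shared__ int shared_vals[128];   // assumes blockDim.x == 128

    int tid = threadIdx.x;
    shared_vals[tid] = tid * tid;      // each thread writes its own slot

    __syncthreads();                   // block-wide barrier: all writes are now visible

    // Safe to read another thread's slot only after the barrier.
    int neighbor = (tid + 1) % blockDim.x;
    out[tid] = shared_vals[neighbor];
}

int main()
{
    const int N = 128;
    int h_out[N];
    int *d_out;
    cudaMalloc(&d_out, N * sizeof(int));

    neighbor_exchange<<<1, N>>>(d_out);   // a single block of 128 threads
    cudaMemcpy(h_out, d_out, N * sizeof(int), cudaMemcpyDeviceToHost);

    printf("thread 0 read its neighbor's value: %d\n", h_out[0]);  // expect 1*1 = 1
    cudaFree(d_out);
    return 0;
}
```

Without the barrier, a thread might read shared_vals[neighbor] before the neighboring thread has written to it, which is exactly the kind of race that __syncthreads is meant to prevent; note that it only synchronizes threads within one block, not across the whole grid.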