Is it possible to run a CUDA kernel on multiple GPUs?

This is a fairly simple question, but googling doesn't seem to turn up the answer.

What I want to know is: if I have two (identical) GPU cards capable of running CUDA, can my kernel span both cards? Or is it bound to one card or the other? I.e., is CUDA presented with the entire set of available GPU cores, or just the ones on the card the kernel is run on?

If it can span both, is there anything special I need to know to make it happen, and are there any examples beyond the CUDA SDK worth knowing about?

Target language is of course C/C++.

Thanks in advance.

Answers


A single CUDA kernel launch is bound to a single GPU. In order to use multiple GPUs, multiple kernel launches will be required.

The CUDA runtime API operates on whichever device is currently selected. Any given kernel launch will be issued to whichever device was most recently selected with cudaSetDevice().
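
As a rough illustration (not from the answer above, and the `scale` kernel and sizes are purely hypothetical), the usual pattern is to loop over the devices, call cudaSetDevice() for each one, and launch the same kernel once per device on that device's own slice of the data:

// Minimal multi-GPU sketch: one kernel launch per device, each on its own slice.
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

__global__ void scale(float *data, int n, float factor)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main()
{
    const int N = 1 << 20;                 // total elements, split across GPUs
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    if (deviceCount < 1) return 1;

    std::vector<float> host(N, 1.0f);
    const int perDev = N / deviceCount;
    std::vector<float*> devPtr(deviceCount);

    // One launch per GPU: select the device, then allocate, copy and launch on it.
    for (int d = 0; d < deviceCount; ++d) {
        cudaSetDevice(d);                  // subsequent runtime calls target device d
        cudaMalloc(&devPtr[d], perDev * sizeof(float));
        cudaMemcpy(devPtr[d], host.data() + d * perDev,
                   perDev * sizeof(float), cudaMemcpyHostToDevice);
        scale<<<(perDev + 255) / 256, 256>>>(devPtr[d], perDev, 2.0f);
    }

    // Collect results; the blocking copy waits for that device's kernel to finish.
    for (int d = 0; d < deviceCount; ++d) {
        cudaSetDevice(d);
        cudaMemcpy(host.data() + d * perDev, devPtr[d],
                   perDev * sizeof(float), cudaMemcpyDeviceToHost);
        cudaFree(devPtr[d]);
    }

    printf("host[0] = %f\n", host[0]);     // expect 2.0
    return 0;
}

Because kernel launches are asynchronous with respect to the host, the loop moves on to the next device while the previous one is still working, so the GPUs run concurrently.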

Examples of multi-GPU programming are given in the CUDA samples "simple multi-GPU with P2P" and "simple multi-GPU".
