Best practices for string processing on the GPU?

I was wondering just how realistic it is to process strings, rather than numerics, on the GPU. Specifically, I'm interested in using C++ AMP to perform comparisons between an array of strings and a target string.

I've started with the basics, such as passing a wchar_t* strings[] into a function, but it turns out that you cannot even create an array_view with a type smaller than an int!

So my question is - are there any best practices out there, or is this generally a bad idea? I'm also interested in things like warp divergence - for instance, how efficient would it be to compute string lengths on a large array?


You can work with chars in C++ AMP; see this blog post for the details.
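The usual workaround for the "no types smaller than int" restriction is to pack characters into ints before creating the view, and unpack them with shifts and masks inside the kernel. Below is a minimal host-side sketch of that idea for 16-bit characters (two per int); the helper names are my own, and in a real C++ AMP program the unpacking logic would live inside the restrict(amp) lambda passed to parallel_for_each.

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Pack each pair of 16-bit code units into one 32-bit int, so the
// resulting buffer can be wrapped in a concurrency::array_view<int>
// (array_view rejects element types smaller than int).
std::vector<int> pack_utf16(const std::wstring& s) {
    std::vector<int> packed((s.size() + 1) / 2, 0);
    for (size_t i = 0; i < s.size(); ++i) {
        int shift = static_cast<int>(i % 2) * 16;  // low half, then high half
        packed[i / 2] |= static_cast<int>(static_cast<uint16_t>(s[i])) << shift;
    }
    return packed;
}

// Recover character i with a shift and a mask. The same expression
// would be used on the accelerator side to read individual characters.
wchar_t unpack_utf16(const std::vector<int>& packed, size_t i) {
    int shift = static_cast<int>(i % 2) * 16;
    return static_cast<wchar_t>((packed[i / 2] >> shift) & 0xFFFF);
}
```

Packing on the host keeps the kernel simple: each thread indexes the int buffer, then selects the high or low 16 bits depending on the character position.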

IMO warp divergence is no different in string processing than it is in other algorithms, so I wouldn't worry about that aspect prematurely. First get it right, then get it fast, then tune it to be faster.
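For the string-length example specifically, a common layout is to store the strings as fixed-stride, zero-padded records in one flat buffer, so each thread scans only its own record and threads diverge only where lengths differ. This is a host-side sketch of the per-thread work under that assumed layout (one character per int for simplicity; the function and parameter names are mine, not from any API):

```cpp
#include <cassert>
#include <vector>

// Strings stored as fixed-stride, zero-padded records in a flat int
// buffer. In a C++ AMP version, parallel_for_each would run one
// instance of this loop per string; within a warp, threads diverge
// only once their records' lengths start to differ.
int record_length(const std::vector<int>& buf, int index, int stride) {
    int len = 0;
    while (len < stride && buf[index * stride + len] != 0)
        ++len;
    return len;
}
```

If the lengths in a batch are similar, the early-exit loop costs little; sorting or bucketing strings by approximate length is one way to keep neighboring threads in step.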

In September we will post a string processing sample on our blog that shows performance benefits of C++ AMP over a multi-core CPU implementation - stay tuned for that. In short, yes, it can be worth offloading string manipulation algorithms to accelerators such as the GPU.
