# How to correctly parallelize nested for loops

I'm working with OpenMP to parallelize a scalar nested for loop:

```c
double P[N][N];
double x = 0.0, y = 0.0;
for (int i = 0; i < N; i++) {
    for (int j = 0; j < N; j++) {
        P[i][j] = someLongFunction(x, y);
        y += 1;
    }
    x += 1;
}
```

The important constraint is that the matrix P must come out identical in the scalar and parallel versions.

None of my attempts so far have succeeded...

## Answers

The problem here is that you have added iteration-to-iteration dependencies with:

```c
x += 1;
y += 1;
```

Therefore, as the code stands, it is not parallelizable: the value of `x` and `y` in each iteration depends on all previous iterations, so running iterations concurrently produces incorrect results (as you are probably seeing).

Fortunately, in your case, you can directly compute them without introducing this dependency:

```c
for (int i = 0; i < N; i++) {
    for (int j = 0; j < N; j++) {
        P[i][j] = someLongFunction((double)i, (double)N*i + j);
    }
}
```

Now you can try throwing an OpenMP pragma over this and see if it works:

```c
#pragma omp parallel for
for (int i = 0; i < N; i++) {
    for (int j = 0; j < N; j++) {
        P[i][j] = someLongFunction((double)i, (double)N*i + j);
    }
}
```