iOS Cocos2D optimization
I'm building a game that reads a two-dimensional array to create its map. The walls, corners, and floors are all separate pieces: each wall, each corner, and each floor is an individual image, and this is consuming a lot of CPU. I really want the map to feel random, which is why I'm using a separate image for each corner and wall.
I was thinking that maybe I could generate a texture built by merging 2 or more different textures, to enhance performance.
Does anyone know how I could do that? Or is there another solution? Would converting the images to PVR make any difference?
For starters, you should use a texture atlas, created with a tool like TexturePacker, grouping as many of your images as possible onto a single atlas. You load it once and can then create as many sprites from it as you want without having to reload anything. Using PVR will speed up loading and reduce your bundle size.
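As a minimal sketch, assuming the atlas was exported from TexturePacker as `tiles.plist`/`tiles.png` and contains a frame named `wall.png` (the file and frame names are placeholders for your own assets):

```objc
// Load every frame from the atlas once, e.g. when the scene is set up.
// TexturePacker's plist references the .png, so this loads the texture too.
[[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"tiles.plist"];

// Any number of sprites can now be created from the cached frames
// without touching the disk again.
CCSprite *wall = [CCSprite spriteWithSpriteFrameName:@"wall.png"];
```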
Secondly, especially for the map background, you should use a CCSpriteBatchNode initialized with the sprite sheet above. Then, when you create a tile, just create the sprite and add it to the batch node, and add the batch node to your scene. The benefit is that regardless of the number of sprites (tiles) contained in the batch node, they are all drawn in a single GL call. That is where you will gain the most from a performance standpoint.
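A sketch of that setup, assuming the map is a 2D array of tile indices and the frames were loaded from a `tiles.png` atlas; `frameNameForTile`, `mapWidth`, `mapHeight`, and `TILE_SIZE` are hypothetical names standing in for your own map code:

```objc
// One batch node per atlas texture: every child sprite must use frames
// from this same texture, and they all render in a single GL call.
CCSpriteBatchNode *batch = [CCSpriteBatchNode batchNodeWithFile:@"tiles.png"];

for (int row = 0; row < mapHeight; row++) {
    for (int col = 0; col < mapWidth; col++) {
        // Pick the frame for this tile (wall, corner, floor, ...).
        NSString *frameName = frameNameForTile(map[row][col]); // hypothetical helper
        CCSprite *tile = [CCSprite spriteWithSpriteFrameName:frameName];
        tile.position = ccp(col * TILE_SIZE, row * TILE_SIZE);
        [batch addChild:tile];
    }
}
[self addChild:batch];
```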
Finally, don't rely on the FPS information when running in the simulator. The simulator does not use the host's GPU, and its performance is well below what you get on a device. So before posting a question about performance, make certain you measure on a device.