I'm looking to improve the performance of my Draw() method for my tilemap. I've poked around for a few hours and done things like:
- Removed any new calls from the for loops where I'm drawing the tiles.
- Made a texture atlas/spritesheet; all my tiles for the map are in the same file to avoid texture swapping.
- Culling - I only show the tiles that are on the screen (plus an extra two to avoid tiles popping in at the bounds of the camera).
The code is as follows:
public void Draw(SpriteBatch spriteBatch)
{
    int firstTileX = (int)Camera2D.Position.X / Tile.WIDTH;
    int firstTileY = (int)Camera2D.Position.Y / Tile.HEIGHT;

    // For smooth camera movement on the tilemap.
    int offsetX = (int)Camera2D.Position.X % Tile.WIDTH;
    int offsetY = (int)Camera2D.Position.Y % Tile.HEIGHT;

    // ScreenScale factor included for culling.
    int tileCountX = (int)(Camera2D.Width / (GraphicsConfig.ScreenScaleX * Tile.WIDTH)) + 2;
    int tileCountY = (int)(Camera2D.Height / (GraphicsConfig.ScreenScaleY * Tile.HEIGHT)) + 2;

    // No index out of range exceptions allowed!
    if (tileCountX + firstTileX >= _width)
        tileCountX = (_width - 1) - firstTileX;
    if (tileCountY + firstTileY >= _height)
        tileCountY = (_height - 1) - firstTileY;

    Vector2 tmpPositionHolder = new Vector2();
    for (int i = 0; i < tileCountY; ++i)
    {
        for (int j = 0; j < tileCountX; ++j)
        {
            int row = i + firstTileY;
            int col = j + firstTileX;

            // If the camera's position is off the map (e.g. firstTileX < 0 and/or firstTileY < 0).
            if (row < 0 || col < 0)
                continue;

            tmpPositionHolder.X = (j * Tile.WIDTH) - offsetX;
            tmpPositionHolder.Y = (i * Tile.HEIGHT) - offsetY;

            spriteBatch.Draw(AssetManager.Textures["Textures/tile_spritesheet_v2"],
                tmpPositionHolder,
                _data[row, col].Source,
                Color.White);
        }
    }
}
I will note that I store my textures in a Dictionary in a static class. I've been pondering changing this, but I've had a hard time finding a current argument for whether textures should be stored per class or in an asset manager (loading/unloading as appropriate).
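For what it's worth, a common middle ground is to keep the asset manager but resolve the texture once at load time and cache the reference in a field, so the per-frame draw loop never touches the dictionary. A minimal sketch (the `LoadContent` hook and `_texture` field names are assumptions, not from any particular framework convention):

```csharp
// Cached reference to the spritesheet; a Texture2D field is just a
// reference, so this duplicates no texture memory.
private Texture2D _texture;

public void LoadContent()
{
    // One dictionary lookup at load time instead of one per tile,
    // per frame, inside the draw loop.
    _texture = AssetManager.Textures["Textures/tile_spritesheet_v2"];
}
```

The asset manager still owns the texture's lifetime; the tilemap just avoids repeating the lookup every frame.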
I've looked through the MonoGame source and noted that, to avoid redundant internal method calls, I could fill out all of the parameters for spriteBatch.Draw() (origin, scale, etc.) and/or pass a Rectangle from the start, but the performance this would save appears trivial.
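For reference, the fully-specified overload you'd be calling looks like this (a sketch only; as you say, any savings from skipping the internal overload forwarding are likely small and worth profiling before committing to):

```csharp
// Fully-specified SpriteBatch.Draw overload: every optional
// parameter is supplied explicitly with its neutral value.
spriteBatch.Draw(
    _texture,                // texture (assumed cached in a field)
    tmpPositionHolder,       // position on screen
    _data[row, col].Source,  // source rectangle in the atlas
    Color.White,             // tint
    0f,                      // rotation
    Vector2.Zero,            // origin
    Vector2.One,             // scale
    SpriteEffects.None,      // no flipping
    0f);                     // layer depth
```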
The virtual resolution is 400x300, the current window is 800x600 (a scale factor of 2 in both X and Y), and the tiles are 16x16. I get a noticeable improvement in performance if I shrink tileCountX or tileCountY by 1 (~15 FPS, from 150 to 165), which makes me think my problem really is in how I'm rendering the tiles.
Help is appreciated, thank you!
Edit: Updated code -
public void Draw(SpriteBatch spriteBatch)
{
    int firstTileX = (int)Camera2D.Position.X / Tile.WIDTH;
    int firstTileY = (int)Camera2D.Position.Y / Tile.HEIGHT;

    // For smooth camera movement on the tilemap.
    int offsetX = (int)Camera2D.Position.X % Tile.WIDTH;
    int offsetY = (int)Camera2D.Position.Y % Tile.HEIGHT;

    // ScreenScale factor included to improve HSR.
    int tileCountX = (int)(Camera2D.Width / (GraphicsConfig.ScreenScaleX * Tile.WIDTH)) + 2;
    int tileCountY = (int)(Camera2D.Height / (GraphicsConfig.ScreenScaleY * Tile.HEIGHT)) + 2;

    // No index out of range exceptions allowed!
    if (tileCountX + firstTileX >= _width)
        tileCountX = (_width - 1) - firstTileX;
    if (tileCountY + firstTileY >= _height)
        tileCountY = (_height - 1) - firstTileY;

    Vector2 tmpPositionHolder = new Vector2();
    for (int i = 0; i < tileCountY; ++i)
    {
        for (int j = 0; j < tileCountX; ++j)
        {
            int row = i + firstTileY;
            int col = j + firstTileX;

            // If the camera's position is off the map (e.g. firstTileX < 0 and/or firstTileY < 0).
            if (row < 0 || col < 0)
                continue;

            tmpPositionHolder.X = (j * Tile.WIDTH) - offsetX;
            tmpPositionHolder.Y = (i * Tile.HEIGHT) - offsetY;

            spriteBatch.Draw(_texture, tmpPositionHolder, _data[row, col].Source, Color.White);
        }
    }
}
Have you tried moving the AssetManager.Textures["Textures/tile_spritesheet_v2"] line to above the for loops? – craftworkgames Nov 7 at 5:09