I'm writing a game using C++ and OpenGL 2.1, and I've been thinking about how to separate the data/logic from the rendering. At the moment I use a base class 'Renderable' that provides a pure virtual method to implement drawing. But every object has such specialized code that only the object itself knows how to properly set its shader uniforms and organize its vertex array buffer data, so I end up with gl* function calls all over my code. Is there a generic way to draw the objects?
One idea is to use the Visitor design pattern. You write a Renderer implementation that knows how to render props, and every object calls the renderer instance to handle the render job. In a few lines of pseudocode:
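The original pseudocode did not survive, but the idea might look like this in C++. All concrete names here (Cube, ParticleSystem, DebugRenderer) are illustrative assumptions, not from the original answer:

```cpp
// Forward declarations so the Renderer can name the concrete types.
class Cube;
class ParticleSystem;

// The "visitor": the single place that knows how to issue gl* calls.
class Renderer {
public:
    virtual ~Renderer() = default;
    virtual void render(const Cube&) = 0;
    virtual void render(const ParticleSystem&) = 0;
};

// Objects only store data and delegate drawing to whichever renderer is passed in.
class Renderable {
public:
    virtual ~Renderable() = default;
    virtual void draw(Renderer&) const = 0;
};

class Cube : public Renderable {
public:
    float size = 1.0f;  // plain data: no GL handles or gl* calls here
    void draw(Renderer& r) const override { r.render(*this); }
};

class ParticleSystem : public Renderable {
public:
    int particleCount = 100;
    void draw(Renderer& r) const override { r.render(*this); }
};

// One of several swappable renderers (debugRenderer, hqRenderer, ...).
// This one just counts draws; a real one would bind buffers and set uniforms.
class DebugRenderer : public Renderer {
public:
    int cubesDrawn = 0, systemsDrawn = 0;
    void render(const Cube&) override { ++cubesDrawn; }
    void render(const ParticleSystem&) override { ++systemsDrawn; }
};
```

Swapping DebugRenderer for another Renderer subclass changes how everything is drawn without touching any object.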
The gl* calls are implemented in the renderer's methods, and the objects only store the data needed to render them: position, texture type, size, etc. You can also set up different renderers (debugRenderer, hqRenderer, etc.) and swap them dynamically, without changing the objects. This also combines easily with Entity/Component systems.
It completely depends on whether you can make assumptions about what's common to all renderable entities. In my engine, all objects are rendered the same way, so they just need to provide VBOs, textures, and transformations. The renderer then fetches all of these itself, so no OpenGL function calls are needed in the individual objects at all.
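A sketch of that setup, under the assumption that every entity can hand back the same bundle of resources. The names (RenderData, drawAll, Crate) are illustrative, not from the answer:

```cpp
#include <vector>

// Common bundle every entity exposes; handles would come from glGenBuffers etc.
struct RenderData {
    unsigned int vbo = 0;       // vertex buffer object handle
    unsigned int texture = 0;   // texture handle
    float transform[16] = {};   // model matrix
};

class Entity {
public:
    virtual ~Entity() = default;
    virtual RenderData renderData() const = 0;  // data out, no gl* calls inside
};

class Renderer {
public:
    // Returns how many entities were drawn; the gl* calls are confined here.
    int drawAll(const std::vector<const Entity*>& entities) {
        int drawn = 0;
        for (const Entity* e : entities) {
            RenderData d = e->renderData();
            // glBindBuffer(GL_ARRAY_BUFFER, d.vbo);
            // glBindTexture(GL_TEXTURE_2D, d.texture);
            // glDrawArrays(...);
            (void)d;
            ++drawn;
        }
        return drawn;
    }
};

class Crate : public Entity {   // example entity: it just hands back its data
public:
    RenderData renderData() const override { return RenderData{1u, 2u, {}}; }
};
```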
I know you've already accepted Zhen's answer, but I'd like to put another one out there in case it helps anyone else. To reiterate the problem: the OP wants to keep the rendering code separate from the logic and data. My solution is to use a different class altogether to render the component, kept separate from the component itself. DISCLAIMER: I hand-wrote these classes in the editor, so there's a good chance I've missed something in the code, but hopefully you'll get the idea. To show a (partial) example:
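The example code did not survive, so here is a hypothetical reconstruction of its shape, using a sprite as the component. All names are mine, not the answerer's:

```cpp
// The component is plain data: what to draw and where. No rendering code.
struct SpriteComponent {
    float x = 0.0f, y = 0.0f;
    unsigned int textureId = 0;
};

// A separate class holds the behaviour: how to draw that component.
class SpriteRenderer {
public:
    int spritesRendered = 0;    // stand-in for real output, for illustration
    void render(const SpriteComponent& s) {
        // All gl* calls would live here:
        // glBindTexture(GL_TEXTURE_2D, s.textureId);
        // set uniforms from s.x / s.y, then issue the draw call.
        (void)s;
        ++spritesRendered;
    }
};
```

The component can be tested, serialized, and reasoned about with no graphics API in sight, and the renderer can be replaced without touching the component.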
This advice isn't really specific to rendering, but it should help you come up with a system that keeps things largely separate.

Firstly, try to keep the 'GameObject' data separate from the position information. It's worth noting that simple XYZ positional information might not be so simple. If you are using a physics engine, your position data could be stored inside the 3rd-party engine; you would either need to synchronize between them (which would involve a lot of pointless memory copying) or query the information directly from the engine. But not all objects need physics: some will be fixed in place, so a simple set of floats works fine there. Some might even be attached to other objects, so their position is actually an offset from another position. In an advanced setup you might have positions stored only on the GPU; the only time they would be needed CPU-side is for scripting, storage, and network replication. So you will likely have several possible representations for your positional data, and here it makes sense to use inheritance.

Rather than an object owning its position, that object should itself be owned by an indexing data structure. For example, a 'Level' might have an octree, or maybe a physics engine 'scene'. When you want to render (or set up a rendering scene), you query this structure for the objects visible to the camera. This also helps with memory management: an object that isn't actually in an area doesn't even have a position, which makes more sense than returning 0.0 coordinates or the coordinates it had when it was last in an area.

If you no longer keep the coordinates in the object, then instead of object.getX() you would end up with level.getX(object). The problem with that is that looking up the object in the level will likely be slow, since it has to search through all its objects to match the one you're querying. To avoid that, I would create a special 'link' class.
One that binds a level to an object; I call it a "Location". It contains the XYZ coordinates as well as a handle to the level and a handle to the object. This link class is stored in the spatial structure/level, and the object holds a weak reference to it (if the level/location is destroyed, the object's reference needs to be updated to null). It might also be worth having the Location class actually 'own' the object; that way, if a level is deleted, so are its spatial index structure, the Locations it contains, and their Objects.
Now the position information is stored in only one place, not duplicated between the Object, the spatial indexing structure, the renderer, and so on. Spatial data structures like octrees often don't even need to store the coordinates of the objects they contain: the position is encoded in the relative location of the nodes within the structure itself (think of it as a kind of lossy compression, sacrificing accuracy for fast lookup times). With the Location object in the octree, the actual coordinates are read from it once the query is done. And if you are using a physics engine to manage your object locations, or a mixture of the two, the Location class can handle that transparently while keeping all the related code in one place.

Another advantage is that the position and the reference to the level are now stored together. You could implement object.TeleportTo(other_object) and have it work across levels; similarly, AI path-finding could follow something into a different area.

With regard to rendering, your renderer can have a similar binding to the Location, except it holds the rendering-specific data. You probably don't need the 'Object' or 'Level' to be stored in this structure. The Object could be useful if you are doing something like colour picking, or rendering a hit bar floating above it, but otherwise the renderer only cares about the mesh and such. RenderableStuff would be a mesh, and could also carry bounding boxes and so on.
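A minimal sketch of the Location link class described above, with the ownership scheme just discussed. Only the names Location, Level, and Object come from the answer; the details are my assumptions:

```cpp
#include <memory>
#include <string>

class Level;     // owns the spatial index (octree, physics scene, ...)
class Location;  // defined below

class Object {
public:
    std::string meshName;          // all the renderer ultimately needs to load
    Location* location = nullptr;  // weak back-reference; set to null if the
                                   // Level (and thus the Location) goes away
};

class Location {
public:
    Location(Level* level, std::unique_ptr<Object> obj,
             float x, float y, float z)
        : m_level(level), m_object(std::move(obj)), m_x(x), m_y(y), m_z(z) {}

    float x() const { return m_x; }
    float y() const { return m_y; }
    float z() const { return m_z; }
    Object* object() const { return m_object.get(); }

private:
    Level* m_level;                   // non-owning handle to the level
    std::unique_ptr<Object> m_object; // the Location owns the Object, so
                                      // deleting a Level deletes its Locations
                                      // and, through them, their Objects
    float m_x, m_y, m_z;              // the single authoritative position
};
```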
You might not need to do this every frame: you could query a bigger region than the camera currently shows, cache it, track object movements to see whether their bounding boxes come within range, track camera movement, and so on. But don't start messing with that kind of thing until you have benchmarked it.

Your physics engine itself might have a similar abstraction, since it also doesn't need the Object data, just the collision mesh and physics properties. All your core object data would then contain is the name of the mesh the object uses. The game engine can load this in whatever format it likes, without burdening your object class with a bunch of render-specific things (which might be specific to your rendering API, i.e. DirectX vs. OpenGL). It also keeps the different components separate. This makes it easy to do things like replace your physics engine, since that code is mostly self-contained in one place. It also makes unit testing much easier: you can test things like physics queries without setting up any fake objects, since all you need is the Location class. And it makes optimization easier, because it's more obvious which queries you perform on which classes, and there's a single place to optimize them (for example, the level.getVisibleObject query above is where you could cache things if the camera doesn't move too much).
Build a rendering-command system. A high-level object, which has access to both the game objects and the renderer, asks each object to produce render commands and submits them; no object touches the graphics API directly. There are more advantages to this than just abstraction: as your rendering complexity grows, you can sort and group the render commands, for example by texture or shader, to minimize state changes.
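A minimal sketch of such a command system, assuming a sort key packed from shader and texture ids (all names here are illustrative):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// One self-contained draw request; objects emit these instead of calling gl*.
struct RenderCommand {
    std::uint32_t shaderId;
    std::uint32_t textureId;
    std::uint32_t vbo;
    std::uint64_t sortKey() const {          // shader first, then texture
        return (std::uint64_t(shaderId) << 32) | textureId;
    }
};

class CommandQueue {
public:
    void submit(const RenderCommand& cmd) { m_commands.push_back(cmd); }

    void flush() {
        // Sort so commands sharing a shader/texture run back to back,
        // letting the executor bind state once per run instead of per draw.
        std::sort(m_commands.begin(), m_commands.end(),
                  [](const RenderCommand& a, const RenderCommand& b) {
                      return a.sortKey() < b.sortKey();
                  });
        for (const RenderCommand& cmd : m_commands) {
            // glUseProgram(cmd.shaderId); glBindTexture(...); glDrawArrays(...);
            (void)cmd;
        }
        m_commands.clear();
    }

    const std::vector<RenderCommand>& pending() const { return m_commands; }

private:
    std::vector<RenderCommand> m_commands;
};
```

Because the queue is just data until flush(), it is also a natural seam for threading: game logic fills it on one thread while the render thread drains it.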
Make your game objects own an m_renderable member. That way, you can separate your logic better: don't enforce the renderable "interface" on general objects that also have physics, AI and whatnot. After that, you can manage renderables separately. You also need a layer of abstraction over the OpenGL function calls to decouple things even more, so don't expect a good engine to have any GL API calls inside its various renderable implementations. That's it, in a micro-nutshell. – teodron Oct 2 at 7:20
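Sketching the comment's suggestion, composition instead of inheritance. Only the m_renderable name comes from the comment; the rest is my assumption:

```cpp
#include <memory>

// Rendering data lives in its own type; a renderer consumes it, so no gl*
// calls appear here or in GameObject.
class Renderable {
    // mesh handle, material, transform, ...
};

class GameObject {
public:
    // Physics, AI, and game logic live here; drawing is delegated entirely
    // to the owned member, which the rendering system manages separately.
    Renderable* renderable() const { return m_renderable.get(); }
private:
    std::unique_ptr<Renderable> m_renderable = std::make_unique<Renderable>();
};
```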