Game Development Stack Exchange is a question and answer site for professional and independent game developers. It's 100% free, no registration required.

I noticed in Unity3D that each GameObject (entity) has its own Renderer component; as far as I understand, that component handles the rendering logic.

I wonder whether it is common practice in entity/component based engines for a single entity to hold renderer components and logic components (position, behavior, and so on) together in one box.

Such an approach sounds odd to me; in my understanding, the entity itself belongs to the logic side and shouldn't contain anything render-specific.

With such an approach it seems impossible to swap renderers; it would require rewriting all of those customized renderer components.

The way I would do it is this: an entity would contain only logic-specific components, like AI, transform, and scripts, plus a reference to a mesh or sprite. Then some entity with a Camera component would store references to all objects visible to that camera. To render everything, I would pass the Camera reference to a Renderer class, which would render the sprites and meshes of the visible entities.
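A minimal sketch of what I mean (all names here are made up for illustration): entities carry only logic-side components plus a mesh reference, a camera culls, and an external renderer draws whatever the camera reports as visible.

```python
# Hypothetical sketch: entities hold no render logic, only a reference to
# visual data; rendering is driven externally by passing a camera to a
# renderer object.

class Entity:
    def __init__(self, name, mesh=None):
        self.name = name
        self.mesh = mesh          # reference to visual data, no render code
        self.components = {}      # e.g. "transform", "ai", "script"

class Camera:
    def __init__(self):
        self.visible = []         # entities this camera can currently see

    def cull(self, entities):
        # stand-in for real frustum culling: keep entities that have a mesh
        self.visible = [e for e in entities if e.mesh is not None]

class Renderer:
    def render(self, camera):
        # draw every mesh the camera reports as visible; the draw list is
        # returned here only so the sketch is observable
        return [e.mesh for e in camera.visible]

entities = [Entity("player", mesh="player.obj"),
            Entity("trigger"),                     # logic-only, nothing to draw
            Entity("tree", mesh="tree.obj")]
cam = Camera()
cam.cull(entities)
drawn = Renderer().render(cam)
```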

Is such an approach somehow wrong?

I removed the part of the question that asks "why does Unity do this," because it's (a) a second question and (b) not likely relevant to your actual problem. – Josh Petrie Mar 6 at 16:23
"With such approach it is impossible to swap renderers" 1) That's a bit of an assumption, it seems like the "renderer" component could just be a front end to whatever platform-specific rendering code it needs (pimpl idiom, for example) and 2) How much does that really matter for your needs? – Tetrad Mar 6 at 17:37

3 Answers


Such an approach sounds odd to me; in my understanding, the entity itself belongs to the logic side and shouldn't contain anything render-specific.

In some entity systems, components only contain logic. In others, they only contain data. In yet others, they contain both. I'd certainly argue that putting the actual render commands (as in the OpenGL or D3D calls) into the rendering component isn't ideal (see my answer here regarding the question "should objects render themselves?", which is the same principle under discussion). However, it is possible to do so, and even to do so in a fashion that allows the implementation of the rendering components to be swapped without altering the consumers of the component system. Doing so just involves any typical implementation-hiding technique.

It's acceptable, and common, to have a "visual component" that contains a reference to some renderable object that comes from the lower-level rendering subsystem and have that component export behavior or interface to allow the appearance data to be specified by other components (such as ones containing scripts).
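A rough sketch of that kind of "visual component" (the names `RenderSubsystem` and `VisualComponent` are hypothetical): the component only holds a handle to a renderable object owned by the lower-level rendering subsystem, and exports an interface that other components can use to change its appearance.

```python
# Sketch: the component never draws; it is a thin front end over a
# renderable object that lives in the rendering subsystem.

class RenderSubsystem:
    """Low-level owner of renderable objects."""
    def __init__(self):
        self.renderables = {}
        self._next_id = 0

    def create_renderable(self, mesh):
        handle = self._next_id
        self._next_id += 1
        self.renderables[handle] = {"mesh": mesh, "material": "default"}
        return handle

class VisualComponent:
    def __init__(self, subsystem, mesh):
        self._subsystem = subsystem
        self._handle = subsystem.create_renderable(mesh)

    def set_material(self, material):
        # exported behavior: a script component can change appearance
        # without knowing anything about the renderer's internals
        self._subsystem.renderables[self._handle]["material"] = material

subsystem = RenderSubsystem()
visual = VisualComponent(subsystem, mesh="goblin.obj")
visual.set_material("shiny")
```

Because only the subsystem knows how renderables are actually drawn, its implementation can be swapped without touching the component's consumers.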

The way I would do it is this: an entity would contain only logic-specific components, like AI, transform, and scripts, plus a reference to a mesh or sprite. Then some entity with a Camera component would store references to all objects visible to that camera. To render everything, I would pass the Camera reference to a Renderer class, which would render the sprites and meshes of the visible entities.

Is such an approach somehow wrong?

I don't really see what you gain by having a "camera" component. It seems like a very heavyweight operation to inject into the entity system. Does, for example, the presence of the camera component mean that you always get a scene rendered from that camera's perspective? How then do you determine which of those scenes to present to the user, and where? It feels like -- without knowing more about this design -- you'd be shoving a lot of unrelated responsibility into the camera component. I'd prefer to see that responsibility handled by something external to the entity system.

Otherwise, that sounds like a perfectly usable system. It will have its pros and cons, of course, but those will become apparent in practice, and many of them will be specific to your needs and/or the needs of your game.


You are right and wrong at the same time. The entity/component model is very flexible, and that is one reason why it is used. What you rightly point out is that it muddles the simulation (game logic) and the presentation (graphics and sound).

The first thing to note is that a component's presence does not mean the component itself actually does the work. For example, a MeshRenderComponent may not actually render the mesh; it can be implemented as a container for all the relevant data that hands the actual rendering off to a "system" of the engine.

This has two advantages. First, the entity can be fully instantiated on the server; all components that don't "work" on the server are just dumb data containers. Second, a system with all its data resident can run tighter, more efficient loops than one that iterates over scattered components.

The quintessence is that the data of the entity/component model is muddled, but you can still keep the different systems properly isolated.
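That split can be sketched as follows (class names are invented for illustration): the component is a plain data container, and the render system owns the draw loop. On a server, you simply never run the render system; the data still instantiates fine.

```python
# Sketch: pure-data component plus a system that concentrates the loop.

class MeshRenderComponent:
    """Pure data: no draw calls live here."""
    def __init__(self, mesh, material):
        self.mesh = mesh
        self.material = material

class RenderSystem:
    def __init__(self):
        self.components = []   # data resident in one place -> tight loop

    def register(self, component):
        self.components.append(component)

    def run(self):
        # the system, not the component, issues the (stand-in) draw calls
        return [f"draw {c.mesh} with {c.material}" for c in self.components]

system = RenderSystem()
system.register(MeshRenderComponent("crate.obj", "wood"))
system.register(MeshRenderComponent("barrel.obj", "metal"))
frame = system.run()
```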

The second thing to note is, the good old question: "Does it matter?"

This is a much harder question to answer. From a software-engineering standpoint the decision is clear: you have a pure simulation layer and a pure presentation layer (Model-View-Controller). But when it comes to artists and level designers, it gets harder. You mention Unity3D, and you may see where I am coming from.

A level designer says, "I want this splash here, over the waterfall." The engineer would implement a water simulation and determine where those effects need to go; the level designer just wants to place the splash there, and who cares that it is not perfectly accurate?

As it turns out, the way Unity3D muddles logic and presentation is how most non-engineers actually think things should work. (And don't try to explain to them why multiplayer is so hard to implement with such a model...)

The Unity3D docs say that Renderer components actually render the stuff. The thing I also can't wrap my head around is that, as far as I know, each component is updated at a fixed timestep. That means that if we believe what the Unity docs say and the timestep is 40 ms, then even though I could have 1000 fps, in reality the entities' visual representation would render/update at only 25 fps? – Denis Narushevich Mar 6 at 15:32
That is Unity3D-specific and I can't answer that. Then again, the 40 ms probably relates to the logic tick time, not the frame rate, since rendering just cannot be "initiated" by the component. (Painter's algorithm and all.) – rioki Mar 7 at 13:20

It seems that Renderer is just a component (http://docs.unity3d.com/Documentation/ScriptReference/Renderer.html).

As far as I understand, Renderer provides common functionality to more specific inherited components:

http://docs.unity3d.com/Documentation/Components/class-MeshRenderer.html
http://docs.unity3d.com/Documentation/Components/class-ParticleRenderer.html
http://docs.unity3d.com/Documentation/Components/class-LineRenderer.html

As far as I know, Unity supports several underlying renderers for its various platforms, Direct3D and OpenGL among them:

Does Unity for PC use Direct3D or OpenGL?

So it must be possible to swap underlying renderers even if it is not available to end user (scripter).

If you have already read general information on component-entity-system architectures, then you know that entities are generally collections of components. A component may be almost anything, including physics, animation, game logic, etc. Why treat graphics differently?

Some component-entity-system architectures tend to put more logic in components, while some concentrate it all in systems and use components as pure data.
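The two styles contrast roughly like this (a toy sketch with invented names): one component carries its own update logic, while the other is pure data whose logic lives entirely in a system.

```python
# Style 1: logic in the component itself.
class SpinComponent:
    def __init__(self):
        self.angle = 0.0

    def update(self, dt):
        self.angle += 90.0 * dt   # the component updates itself

# Style 2: pure-data component plus a system.
class SpinData:
    def __init__(self):
        self.angle = 0.0

class SpinSystem:
    def update(self, components, dt):
        for c in components:      # all spin logic concentrated here
            c.angle += 90.0 * dt

a = SpinComponent()
a.update(0.5)                     # component-driven update

b = SpinData()
SpinSystem().update([b], 0.5)     # system-driven update
```

Both end up in the same state; the difference is where the behavior lives and how easy it is to batch, test, or strip out (e.g. on a server).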

Read this if you have not already:

http://stackoverflow.com/questions/1901251/component-based-game-engine-design/3495647#3495647

I agree that the graphical representation of an entity should be a component, but I disagree that any rendering should be done through the component. A ball in the real world has its visual representation, but the ball does nothing to be seen by a man's eye; it's the light that delivers particles to the viewer's eye. Then why should an entity's component be doing any rendering? As written in the Unity docs, "MeshRenderer renders meshes inserted by the MeshFilter or TextMesh." So I'm trying to find out if it's common, and whether there is an alternative approach I could use in my realisation of a simple entity/component engine. – Denis Narushevich Mar 6 at 15:14
@DenisNarushevich A ball emits reflected light in a particular way (texture), so it does do something to be seen. Otherwise you could say that the ball also does nothing to move - it's the unified impulse of its molecules that moves it. You "give" your ball a stream of raw light and it reflects it the way it wants, thus producing a rendering result. An eye is a different component that takes a stream of reflected light and transforms it into a set of electric impulses. – Den Mar 6 at 16:24
@DenisNarushevich There is no "common" approach here. In my personal engine I only hold graphics data in rendering components and have a rendering system that iterates over the components, but this is just my preference. IMHO you either put it in components or in systems. – Den Mar 6 at 17:04
