
Is it necessary to use graphics APIs to get hardware acceleration in a 3D game? To what degree is it possible to be free of dependencies on graphics-card APIs like OpenGL, DirectX, CUDA, OpenCL or whatever else?

Can I make my own graphics API or library for my game? Even if it's hard, is it theoretically possible for my 3D application to independently contact the graphics driver and render everything on the GPU?

CUDA and OpenCL are not graphics APIs. – Soapy 19 hours ago

You can perhaps achieve something with the looks of Doom or Wolfenstein by using your CPU for graphics. But why would you? – rlam12 15 hours ago

To "independently contact the graphics driver" you always need some API! – Josef 14 hours ago

No, you don't need an API for graphics; you can access the driver directly. It doesn't even stop there: you can write your own driver to speak to the hardware directly, if you ever wanted that. But your next question is much more important: "Should I use a graphics API?" Yes, yes you should. – Kevin 13 hours ago

You people realise that an API is also made at some point? The driver is exposed to external calls, which an API implements and wraps in nice, easy-to-use calls. – Kevin 8 hours ago

7 Answers

Practically, it's necessary, yes. It's necessary because unless you want to spend years writing what is essentially driver code for the multitude of different hardware configurations out there, you need an API that unifies access to the existing drivers written by GPU vendors for all popular operating systems and hardware.

The only realistic alternative is that you don't use 3D acceleration, and instead go for software rendering which, provided it's written in something truly portable like C, will be able to run on just about any system/device. That's fine for smaller games... something like SDL is suited to this purpose... there are others out there as well. But this lacks inherent 3D capabilities so you'd have to build them yourself... which is not a trivial task.
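
To make the software-rendering route concrete, here is a minimal sketch using SDL2, assuming it is installed on your machine: the CPU fills a plain pixel array, and SDL merely copies it to the window each frame. Everything beyond that copy (rasterising triangles, depth testing, and so on) would be yours to write.

    /* Software rendering with SDL2: the "framebuffer" is an ordinary array
     * the CPU writes into; SDL only blits it to the screen.
     * Compile with: gcc soft.c `sdl2-config --cflags --libs` */
    #include <SDL.h>
    #include <stdint.h>

    #define W 640
    #define H 480

    int main(void) {
        SDL_Init(SDL_INIT_VIDEO);
        SDL_Window *win = SDL_CreateWindow("soft", SDL_WINDOWPOS_CENTERED,
                                           SDL_WINDOWPOS_CENTERED, W, H, 0);
        SDL_Renderer *ren = SDL_CreateRenderer(win, -1, 0);
        SDL_Texture *tex = SDL_CreateTexture(ren, SDL_PIXELFORMAT_ARGB8888,
                                             SDL_TEXTUREACCESS_STREAMING, W, H);
        static uint32_t pixels[W * H];   /* the CPU-side framebuffer */

        for (int running = 1; running; ) {
            SDL_Event e;
            while (SDL_PollEvent(&e))
                if (e.type == SDL_QUIT) running = 0;

            /* "Render" in software -- here just a colour gradient; a real
             * engine would rasterise 3D geometry into this buffer instead. */
            for (int y = 0; y < H; ++y)
                for (int x = 0; x < W; ++x)
                    pixels[y * W + x] = 0xFF000000u | ((x & 0xFF) << 16) | (y & 0xFF);

            SDL_UpdateTexture(tex, NULL, pixels, W * (int)sizeof(uint32_t));
            SDL_RenderCopy(ren, tex, NULL, NULL);
            SDL_RenderPresent(ren);
        }
        SDL_Quit();
        return 0;
    }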

Also remember that CPU rendering is inefficient and performs poorly compared to GPU rendering.

To pile on for the OP's benefit: software rendering also has significantly poorer performance. While this may not matter in terms of frame rate for simpler games, I doubt many customers will appreciate the reduction in battery life they suffer because software rendering is keeping their CPU much more active than it would otherwise need to be. – Chris Hayes 19 hours ago

Good point, @ChrisHayes... edited to add. One sometimes assumes such things are obvious. – Arcane Engineer 19 hours ago

It seems like your first paragraph is saying that technically it's not necessary to use an API. Sure, it saves a tremendous amount of programmer time, perhaps more than one person has to offer, but it's not necessary. As I would expect, unless there is some kind of cryptographic verification going on between the graphics card driver and higher-level software libraries. – David Z 11 hours ago

@ChrisHayes But will customers appreciate "Your browser does not support WebGL" or "Your browser supports WebGL but your GPU does not"? The latter happens with GPUs designed for fixed-function OpenGL 1 graphics, such as Intel graphics prior to HD Graphics, and laptops and tablets generally can't be upgraded. – tepples 10 hours ago

@DavidZ Have you ever written a driver? Do you know what it's like interfacing with complex hardware when you don't even have the specs, when even the engineers employed by the card's manufacturer took man-months to develop the driver? Now multiply that by however many architectures you have to support. If I say, "It's impossible to climb Everest without equipment," of course there is a chance you can, but it really is so minuscule a chance... why would anyone argue the point? – Arcane Engineer 9 hours ago

I think a lot of the answers miss an important point: you can write apps that access hardware directly, but not on modern operating systems. It's not just a time problem, but a "you don't have a choice" problem.

Windows, Linux, OS X, and so on all ban arbitrary applications from direct hardware access. This is important for security: you don't want any random app to be able to read arbitrary GPU memory, for the same reason you don't want any random app to be able to read system memory. The framebuffer showing your bank account, for instance, lives in GPU memory. You want that stuff isolated, protected, and access-controlled by your OS.

Only drivers can talk directly to most hardware, and the only way to talk to a driver is via the OS's exposed HAL and each driver's proprietary, incompatible interface. That interface will not only differ between vendors but will even differ between versions of the same driver, making it nearly impossible to talk to it directly from a consumer application. These layers are often covered by access controls that further restrict an application's ability to reach them.

So no, your game cannot just use the hardware directly, unless you're only targeting insecure operating systems like DOS. Your only feasible option for a game on modern consumer operating systems is to target a public API like DirectX or OpenGL.
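
To make that concrete: even the lowest level a normal userspace program can reach on Linux is still a kernel API, the DRM/KMS interface, and it sits behind the same access controls. A minimal sketch, assuming libdrm is installed and a /dev/dri/card0 node exists:

    /* Query the GPU through the Linux DRM interface -- note that even this
     * "direct" path goes through a kernel API, not the hardware itself.
     * Compile with: gcc drm.c -I/usr/include/libdrm -ldrm */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    int main(void) {
        int fd = open("/dev/dri/card0", O_RDWR);  /* may require 'video' group */
        if (fd < 0) { perror("open"); return 1; }

        drmModeRes *res = drmModeGetResources(fd); /* mode-setting resources */
        if (!res) { fprintf(stderr, "not a KMS device\n"); close(fd); return 1; }

        printf("connectors: %d, crtcs: %d\n",
               res->count_connectors, res->count_crtcs);

        drmModeFreeResources(res);
        close(fd);
        return 0;
    }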

Kudos, good addition to the discussion. You learn something new every day. – Arcane Engineer 8 hours ago

What stops you from writing a kernel module to do the hardware access? Also, if you are targeting a single card (the one in your machine), compatibility issues fall away; it is still far from trivial, but it is within the realm of the feasible, given reverse-engineered drivers, especially if you only have a limited scope of what you want to do with the card. – Joel Bosveld 4 hours ago

Driver signing on many OSes makes it difficult to distribute your module. You could certainly build up all kinds of OS drivers and hacks locally, but you're not going to be able to distribute your game very widely, which I assume is desired, as the OP asked about making an actual game, not a pointless tech demo. :p – Sean Middleditch 2 hours ago

APIs like OpenGL or DirectX are partially implemented by the operating system and partially implemented by the graphics driver itself.

That means that if you wanted to create your own low-level API that makes use of the capabilities of modern GPUs, you would essentially need to write your own graphics driver. Making sure your driver works with all common graphics cards would be quite a challenge, especially because vendors are often not very open with their specifications.


Boot your PC into MS-DOS. Then, using your copy of the PC Game Programmer's Encyclopedia, you can write directly into the card's VESA registers and into video memory. I still have the code I wrote 20 years ago to do this and render rudimentary 3D in software.
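
For flavour, here is a hedged reconstruction of the simplest case (plain VGA mode 13h rather than VESA); it needs a 16-bit real-mode compiler such as Turbo C and actual DOS to run:

    /* Plot pixels by writing straight into VGA memory at A000:0000.
     * Real-mode DOS only; far pointers are a 16-bit compiler extension. */
    #include <dos.h>
    #include <conio.h>

    static unsigned char far *vga = (unsigned char far *)0xA0000000L;

    static void set_mode(unsigned char mode) {
        union REGS r;
        r.h.ah = 0x00;            /* BIOS int 10h: set video mode */
        r.h.al = mode;
        int86(0x10, &r, &r);
    }

    int main(void) {
        int x, y;
        set_mode(0x13);           /* 320x200, 256 colours, linear memory */
        for (y = 0; y < 200; ++y)
            for (x = 0; x < 320; ++x)
                vga[y * 320 + x] = (unsigned char)(x ^ y);  /* XOR pattern */
        getch();                  /* wait for a key */
        set_mode(0x03);           /* back to text mode */
        return 0;
    }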

Alternatively, you can just use DirectX; it's a really thin abstraction layer, and also lets you write directly into video buffers and then swap them to the screen.

There's also the extreme approach of building your own video hardware.


The other answers address your main question quite nicely: technically it's possible, but in practice, if your goal is to broaden your customer base, you'd actually achieve the opposite, while sinking a huge amount of work into supporting the thousands of different hardware and OS configurations out there.

However, that doesn't mean you have to be 100% dependent on one particular API. You can always code up your own abstraction layer, and then swap the more specific implementations (e.g. a DirectX implementation, an OpenGL implementation, ...) in and out.
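
A minimal sketch of such a layer in C, with stubbed backends standing in for real OpenGL/Direct3D code (all names here are illustrative, not any real engine's API):

    #include <stdio.h>

    /* A tiny renderer abstraction: game code calls through this struct and
     * never touches the underlying graphics API directly. */
    typedef struct Renderer {
        void (*init)(void);
        void (*draw_frame)(void);
        void (*shutdown)(void);
    } Renderer;

    /* Stub "OpenGL" backend -- a real one would make gl* calls here. */
    static void gl_init(void)       { puts("GL: init"); }
    static void gl_draw_frame(void) { puts("GL: draw"); }
    static void gl_shutdown(void)   { puts("GL: shutdown"); }
    static const Renderer gl_renderer = { gl_init, gl_draw_frame, gl_shutdown };

    /* Stub "Direct3D" backend, same interface. */
    static void d3d_init(void)       { puts("D3D: init"); }
    static void d3d_draw_frame(void) { puts("D3D: draw"); }
    static void d3d_shutdown(void)   { puts("D3D: shutdown"); }
    static const Renderer d3d_renderer = { d3d_init, d3d_draw_frame, d3d_shutdown };

    int main(void) {
        const Renderer *r = &gl_renderer;  /* swap to &d3d_renderer to switch */
        r->init();
        r->draw_frame();
        r->shutdown();
        return 0;
    }

Because the game only ever calls through the struct, swapping backends is a one-line change.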

Even then, it probably isn't worth it. Just pick something that works well enough for your purposes, and be done with it. You're an indie game maker; you need to keep yourself focused on the things that make the game, not on the minutiae. Just make the game!


In summary: theoretically you can, but it's infeasible, and you won't gain any advantage. The limitations that APIs impose shrink every day; with CUDA, OpenCL, and shaders, having full control is no longer a problem.


Fuller explanation:

The answer to this is a boring "yes". The real question is: why would you want to?

I can hardly imagine why you would want to do this other than for learning purposes. Bear in mind that at one point developers had the freedom to do anything with their CPU rendering implementations; everyone had their own API and was in control of everything in the "pipeline". With the introduction of the fixed-function pipeline and GPUs, everybody switched.

With GPUs you get "better" performance, but you lose a lot of that freedom. Graphics developers are pushing to get it back, hence the increasingly customizable pipeline we see today. You can do almost anything using CUDA/OpenCL, and even shaders, without touching the drivers.
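
As an illustration of that freedom, here is a minimal OpenCL host program, assuming an OpenCL runtime is installed, that runs arbitrary computation on the GPU with no graphics pipeline involved (error checking omitted for brevity):

    /* Square an array on the GPU via OpenCL. Compile with: gcc sq.c -lOpenCL */
    #define CL_TARGET_OPENCL_VERSION 120
    #include <CL/cl.h>
    #include <stdio.h>

    static const char *src =
        "__kernel void square(__global float *v) {"
        "    int i = get_global_id(0);"
        "    v[i] = v[i] * v[i];"
        "}";

    int main(void) {
        float data[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        cl_platform_id plat;  cl_device_id dev;
        clGetPlatformIDs(1, &plat, NULL);
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);

        cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
        cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
        clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "square", NULL);

        cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                    sizeof(data), data, NULL);
        clSetKernelArg(k, 0, sizeof(buf), &buf);
        size_t n = 8;
        clEnqueueNDRangeKernel(q, k, 1, NULL, &n, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof(data), data, 0, NULL, NULL);

        for (int i = 0; i < 8; ++i) printf("%g ", data[i]);
        printf("\n");
        return 0;
    }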


You want to talk to hardware. You need to use a way to talk to the hardware. That way is the interface to the hardware. That's what OpenGL, DirectX, CUDA, OpenCL and whatever else are.

If you drop to a lower level, you are just using a lower-level interface; you are still using an interface, only one that doesn't work with as many different cards as the higher-level one.

