1. Introduction
This section is non-normative.
Graphics Processing Units, or GPUs for short, have been essential in enabling rich rendering and computational applications in personal computing. WebGPU is an API that exposes the capabilities of GPU hardware for the Web. The API is designed from the ground up to efficiently map to (post-2014) native GPU APIs. WebGPU is not related to WebGL and does not explicitly target OpenGL ES.
WebGPU sees physical GPU hardware as GPUAdapters. It provides a connection to an adapter via GPUDevice, which manages resources, and the device’s GPUQueues, which execute commands. GPUDevice may have its own memory with high-speed access to the processing units. GPUBuffer and GPUTexture are the physical resources backed by GPU memory. GPUCommandBuffer and GPURenderBundle are containers for user-recorded commands. GPUShaderModule contains shader code. The other resources,
such as GPUSampler or GPUBindGroup, configure the way physical resources are used by the GPU.
GPUs execute commands encoded in GPUCommandBuffers by feeding data through a pipeline,
which is a mix of fixed-function and programmable stages. Programmable stages execute shaders, which are special programs designed to run on GPU hardware.
Most of the state of a pipeline is defined by
a GPURenderPipeline or a GPUComputePipeline object. The state not included
in these pipeline objects is set during encoding with commands,
such as beginRenderPass() or setBlendConstant().
2. Malicious use considerations
This section is non-normative. It describes the risks associated with exposing this API on the Web.
2.1. Security Considerations
The security requirements for WebGPU are the same as ever for the Web, and are likewise non-negotiable. The general approach is to strictly validate all commands before they reach the GPU, ensuring that a page can only work with its own data.
2.1.1. CPU-based undefined behavior
A WebGPU implementation translates the workloads issued by the user into API commands specific to the target platform. Native APIs specify the valid usage for the commands (for example, see vkCreateDescriptorSetLayout) and generally don’t guarantee any outcome if the valid usage rules are not followed. This is called "undefined behavior", and it can be exploited by an attacker to access memory they don’t own, or force the driver to execute arbitrary code.
In order to disallow insecure usage, the range of allowed WebGPU behaviors is defined for any input.
An implementation has to validate all the input from the user and only reach the driver
with the valid workloads. This document specifies all the error conditions and handling semantics.
For example, specifying the same buffer with intersecting ranges in both "source" and "destination"
of copyBufferToBuffer() results in GPUCommandEncoder generating an error, and no other operation occurring.
See § 22 Errors & Debugging for more information about error handling.
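The overlapping-range validation described above can be sketched as a pure predicate. This is an illustrative sketch only: the function name and phrasing are invented here, and the normative rules are those in the spec text.

```javascript
// Sketch of the kind of check an implementation performs for a
// same-buffer copyBufferToBuffer() call. The helper name is invented
// for illustration; it is not part of the WebGPU API.
function copyRangesOverlap(srcOffset, dstOffset, size) {
  // Two half-open byte ranges [srcOffset, srcOffset + size) and
  // [dstOffset, dstOffset + size) intersect iff each starts before
  // the other ends.
  return srcOffset < dstOffset + size && dstOffset < srcOffset + size;
}

copyRangesOverlap(0, 4, 8); // true: ranges [0,8) and [4,12) intersect
copyRangesOverlap(0, 8, 8); // false: ranges [0,8) and [8,16) are disjoint
```

When such a predicate is true for a copy within a single buffer, the implementation generates an error on the encoder rather than throwing an exception, as described above.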
2.1.2. GPU-based undefined behavior
WebGPU shaders are executed by the compute units inside GPU hardware. In native APIs,
some of the shader instructions may result in undefined behavior on the GPU.
In order to address that, the shader instruction set and its defined behaviors are
strictly defined by WebGPU. When a shader is provided to createShaderModule(),
the WebGPU implementation has to validate it
before doing any translation (to platform-specific shaders) or transformation passes.
2.1.3. Uninitialized data
Generally, allocating new memory may expose the leftover data of other applications running on the system. In order to address that, WebGPU conceptually initializes all the resources to zero, although in practice an implementation may skip this step if it sees the developer initializing the contents manually. This includes variables and shared workgroup memory inside shaders.
The precise mechanism of clearing the workgroup memory can differ between platforms. If the native API does not provide facilities to clear it, the WebGPU implementation transforms the compute shader to first do a clear across all invocations, synchronize them, and continue executing developer’s code.
Note: This zero-initialization can have a performance cost. An implementation may avoid redundant clears by tracking the initialization state of resources (for example, switching a GPULoadOp "load" to "clear" when the previous contents are known to be uninitialized).
As a result, all implementations should issue a developer console warning about this potential performance penalty, even if there is no penalty in that implementation.
2.1.4. Out-of-bounds access in shaders
Shaders can access physical resources either directly
(for example, as a "uniform" GPUBufferBinding), or via texture units,
which are fixed-function hardware blocks that handle texture coordinate conversions.
Validation in the WebGPU API can only guarantee that all the inputs to the shader are provided and
they have the correct usage and types.
The WebGPU API cannot guarantee that the data is accessed within bounds
if the texture units are not involved.
In order to prevent the shaders from accessing GPU memory an application doesn’t own, the WebGPU implementation may enable a special mode (called "robust buffer access") in the driver that guarantees that the access is limited to buffer bounds.
Alternatively, an implementation may transform the shader code by inserting manual bounds checks.
When this path is taken, the out-of-bound checks only apply to array indexing. They aren’t needed
for plain field access of shader structures due to the minBindingSize validation on the host side.
If the shader attempts to load data outside of physical resource bounds, the implementation is allowed to:
- return a value at a different location within the resource bounds
- return a value vector of "(0, 0, 0, X)" with any "X"
- partially discard the draw or dispatch call

If the shader attempts to write data outside of physical resource bounds, the implementation is allowed to:
- write the value to a different location within the resource bounds
- discard the write operation
- partially discard the draw or dispatch call
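The "manual bounds checks" transformation mentioned above often takes the form of clamping the index into the resource's bounds, which is one of the allowed behaviors (returning a value at a different location within bounds). The sketch below only demonstrates that behavior in JavaScript; a real implementation performs this rewrite in the shader itself (e.g. in WGSL or the platform shader IR).

```javascript
// Illustrative sketch of a clamping bounds check an implementation
// might inject. The helper name is invented for this example.
function clampedLoad(array, index) {
  // Redirect any out-of-bounds index to a location within bounds.
  const clamped = Math.min(Math.max(index, 0), array.length - 1);
  return array[clamped];
}

const data = new Float32Array([1, 2, 3, 4]);
clampedLoad(data, 2);   // in bounds: 3
clampedLoad(data, 100); // out of bounds: redirected within bounds, 4
```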
2.1.5. Invalid data
When uploading floating-point data from CPU to GPU, or generating it on the GPU, we may end up with a binary representation that doesn’t correspond to a valid number, such as infinity or NaN (not-a-number). The GPU behavior in this case is subject to the accuracy of the GPU hardware implementation of the IEEE-754 standard. WebGPU guarantees that introducing invalid floating-point numbers would only affect the results of arithmetic computations and will not have other side effects.
2.1.6. Driver bugs
GPU drivers are subject to bugs like any other software. If a bug occurs, an attacker could possibly exploit the incorrect behavior of the driver to get access to unprivileged data. In order to reduce the risk, the WebGPU working group will coordinate with GPU vendors to integrate the WebGPU Conformance Test Suite (CTS) as part of their driver testing process, like it was done for WebGL. WebGPU implementations are expected to have workarounds for some of the discovered bugs, and disable WebGPU on drivers with known bugs that can’t be worked around.
2.1.7. Timing attacks
2.1.7.1. Content-timeline timing
WebGPU is designed to later support multi-threaded use via Web Workers. As such, it is designed not to expose
users to modern high-precision timing attacks. Some of the objects,
like GPUBuffer or GPUQueue, have shared state which can be simultaneously accessed.
This allows race conditions to occur, similar to those of accessing a SharedArrayBuffer from multiple Web Workers, which makes the thread scheduling observable.
WebGPU addresses this by limiting the ability to deserialize (or share) objects only to
the agents inside the agent cluster, and only if
the cross-origin isolated policies are in place.
This restriction matches the mitigations against the malicious SharedArrayBuffer use. Similarly, the user agent may also
serialize the agents sharing any handles to prevent any concurrency entirely.
In the end, the attack surface for races on shared state in WebGPU will be
a small subset of the SharedArrayBuffer attacks.
2.1.7.2. Device/queue-timeline timing
Writable storage buffers and other cross-invocation communication may be usable to construct high-precision timers on the queue timeline.
The optional "timestamp-query" feature also provides high precision
timing of GPU operations. To mitigate security and privacy concerns, the timing query
values are aligned to a lower precision: see current queue timestamp. Note in particular:
- The device timeline typically runs in a process that is shared by multiple origins, so cross-origin isolation (provided by COOP/COEP) does not provide isolation of device/queue-timeline timers.
- Queue timeline work is issued from the device timeline, and may execute on GPU hardware that does not provide the isolation expected of CPU processes (such as Meltdown mitigations).
- GPU hardware is not typically susceptible to Spectre-style attacks, but WebGPU may be implemented in software, and software implementations may run in a shared process, preventing isolation-based mitigations.
2.1.8. Row hammer attacks
Row hammer is a class of attacks that exploit the leaking of states in DRAM cells. It could be used on a GPU. WebGPU does not have any specific mitigations in place, and relies on platform-level solutions, such as reduced memory refresh intervals.
2.1.9. Denial of service
WebGPU applications have access to GPU memory and compute units. A WebGPU implementation may limit the available GPU memory to an application, in order to keep other applications responsive. For GPU processing time, a WebGPU implementation may set up a "watchdog" timer that ensures an application doesn’t cause GPU unresponsiveness for more than a few seconds. These measures are similar to those used in WebGL.
2.1.10. Workload identification
WebGPU provides access to constrained global resources shared between different programs (and web pages) running on the same machine. An application can try to indirectly probe how constrained these global resources are, in order to reason about workloads performed by other open web pages, based on the patterns of usage of these shared resources. These issues are generally analogous to issues with JavaScript, such as system memory and CPU execution throughput. WebGPU does not provide any additional mitigations for this.
2.1.11. Memory resources
WebGPU exposes fallible allocations from machine-global memory heaps, such as VRAM. This allows for probing the size of the system’s remaining available memory (for a given heap type) by attempting to allocate and watching for allocation failures.
GPUs internally have one or more (typically only two) heaps of memory shared by all running applications. When a heap is depleted, WebGPU would fail to create a resource. This is observable, which may allow a malicious application to guess what heaps are used by other applications, and how much they allocate from them.
2.1.12. Computation resources
If one site uses WebGPU at the same time as another, it may observe the increase in time it takes to process some work. For example, if a site constantly submits compute workloads and tracks completion of work on the queue, it may observe that something else also started using the GPU.
A GPU has many parts that can be tested independently, such as the arithmetic units, texture sampling units, atomic units, etc. A malicious application may sense when some of these units are stressed, and attempt to guess the workload of another application by analyzing the stress patterns. This is analogous to the realities of CPU execution of JavaScript.
2.1.13. Abuse of capabilities
Malicious sites could abuse the capabilities exposed by WebGPU to run computations that don’t benefit the user or their experience and instead only benefit the site. Examples would be hidden crypto-mining, password cracking or rainbow tables computations.
It is not possible to guard against these types of uses of the API because the browser is not able to distinguish between valid workloads and abusive workloads. This is a general problem with all general-purpose computation capabilities on the Web: JavaScript, WebAssembly or WebGL. WebGPU only makes some workloads easier to implement, or slightly more efficient to run than using WebGL.
To mitigate this form of abuse, browsers can throttle operations on background tabs, could warn that a tab is using a lot of resource, and restrict which contexts are allowed to use WebGPU.
User agents can heuristically issue warnings to users about high power use, especially due to potentially malicious usage. If a user agent implements such a warning, it should include WebGPU usage in its heuristics, in addition to JavaScript, WebAssembly, WebGL, and so on.
2.2. Privacy Considerations
The privacy considerations for WebGPU are similar to those of WebGL. GPU APIs are complex and must expose various aspects of a device’s capabilities out of necessity in order to enable developers to take advantage of those capabilities effectively. The general mitigation approach involves normalizing or binning potentially identifying information and enforcing uniform behavior where possible.
A user agent must not reveal more than 32 distinguishable configurations or buckets.
2.2.1. Machine-specific features and limits
WebGPU can expose a lot of detail on the underlying GPU architecture and the device geometry. This includes available physical adapters, many limits on the GPU and CPU resources that could be used (such as the maximum texture size), and any optional hardware-specific capabilities that are available.
User agents are not obligated to expose the real hardware limits; they are in full control of how much of the machine’s specifics are exposed. One strategy to reduce fingerprinting is binning all the target platforms into a small number of bins. In general, the privacy impact of exposing the hardware limits matches that of WebGL.
The default limits are also deliberately high enough to allow most applications to work without requesting higher limits. All the usage of the API is validated according to the requested limits, so the actual hardware capabilities are not exposed to the users by accident.
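One possible binning strategy is to report, instead of the hardware's exact limit, the largest value from a short fixed list that the hardware can satisfy. The sketch below is hypothetical: the function name and bucket values are invented for illustration and are not prescribed by this specification.

```javascript
// Hypothetical limit-binning sketch for fingerprint reduction.
// Assumes actualLimit >= buckets[0] and buckets sorted ascending.
function binnedLimit(actualLimit, buckets) {
  // Report the largest bucket value the hardware can satisfy.
  let reported = buckets[0];
  for (const b of buckets) {
    if (b <= actualLimit) reported = b;
  }
  return reported;
}

// Invented example buckets for a maximum-texture-size limit:
const textureSizeBuckets = [2048, 4096, 8192, 16384];
binnedLimit(10000, textureSizeBuckets); // → 8192
```

All adapters whose true limit falls between 8192 and 16383 would then be indistinguishable on this axis.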
2.2.2. Machine-specific artifacts
There are some machine-specific rasterization/precision artifacts and performance differences that can be observed roughly in the same way as in WebGL. This applies to rasterization coverage and patterns, interpolation precision of the varyings between shader stages, compute unit scheduling, and more aspects of execution.
Generally, rasterization and precision fingerprints are identical across most or all of the devices of each vendor. Performance differences are relatively intractable, but also relatively low-signal (as with JS execution performance).
Privacy-critical applications and user agents should utilize software implementations to eliminate such artifacts.
2.2.3. Machine-specific performance
Another factor for differentiating users is measuring the performance of specific operations on the GPU. Even with low precision timing, repeated execution of an operation can show if the user’s machine is fast at specific workloads. This is a fairly common vector (present in both WebGL and JavaScript), but it’s also low-signal and relatively intractable to truly normalize.
WebGPU compute pipelines expose access to the GPU unobstructed by the fixed-function hardware. This poses an additional risk for unique device fingerprinting. User agents can take steps to dissociate logical GPU invocations from actual compute units to reduce this risk.
2.2.4. User Agent State
This specification doesn’t define any additional user-agent state for an origin.
However it is expected that user agents will have compilation caches for the result of expensive
compilation like GPUShaderModule, GPURenderPipeline and GPUComputePipeline.
These caches are important to improve the loading time of WebGPU applications after the first
visit.
For the specification, these caches are indistinguishable from incredibly fast compilation, but
for applications it would be easy to measure how long createComputePipelineAsync() takes to resolve. This can leak information across origins (like "did the user access a site with
this specific shader"), so user agents should follow the best practices in storage partitioning.
The system’s GPU driver may also have its own cache of compiled shaders and pipelines. User agents may want to disable these when at all possible, or add per-partition data to shaders in ways that will make the GPU driver consider them different.
2.2.5. Driver bugs
In addition to the concerns outlined in Security Considerations, driver bugs may introduce differences in behavior that can be observed as a method of differentiating users. The mitigations mentioned in Security Considerations apply here as well, including coordinating with GPU vendors and implementing workarounds for known issues in the user agent.
2.2.6. Adapter Identifiers
Past experience with WebGL has demonstrated that developers have a legitimate need to be able to identify the GPU their code is running on in order to create and maintain robust GPU-based content. For example, to identify adapters with known driver bugs in order to work around them or to avoid features that perform more poorly than expected on a given class of hardware.
But exposing adapter identifiers also naturally expands the amount of fingerprinting information available, so there’s a desire to limit the precision with which we identify the adapter.
There are several mitigations that can be applied to strike a balance between enabling robust content and preserving privacy. First is that user agents can reduce the burden on developers by identifying and working around known driver issues, as they have since browsers began making use of GPUs.
When adapter identifiers are exposed by default they should be as broad as possible while still being useful. Possibly identifying, for example, the adapter’s vendor and general architecture without identifying the specific adapter in use. Similarly, in some cases identifiers for an adapter that is considered a reasonable proxy for the actual adapter may be reported.
In cases where full and detailed information about the adapter is useful (for example: when filing bug reports) the user can be asked for consent to reveal additional information about their hardware to the page.
Finally, the user agent will always have the discretion to not report adapter identifiers at all if it considers it appropriate, such as in enhanced privacy modes.
3. Fundamentals
3.1. Conventions
3.1.1. Syntactic Shorthands
In this specification, the following syntactic shorthands are used:
- The . ("dot") syntax, common in programming languages.
  The phrasing "Foo.Bar" means "the Bar member of the value (or interface) Foo." If Foo is an ordered map, asserts that the key Bar exists.
  Editorial note: Some phrasing in this spec may currently assume this resolves to undefined if Bar doesn’t exist.
  The phrasing "Foo.Bar is provided" means "the Bar member exists in the map value Foo".
- The ?. ("optional chaining") syntax, adopted from JavaScript.
  The phrasing "Foo?.Bar" means "if Foo is null or undefined or Bar does not exist in Foo, undefined; otherwise, Foo.Bar".
  For example, where buffer is a GPUBuffer, buffer?.[[device]].[[adapter]] means "if buffer is null or undefined, then undefined; otherwise, the [[adapter]] internal slot of the [[device]] internal slot of buffer."
- The ?? ("nullish coalescing") syntax, adopted from JavaScript.
  The phrasing "x ?? y" means "x, if x is not null/undefined, and y otherwise".
- slot-backed attribute
  A WebIDL attribute which is backed by an internal slot of the same name. It may or may not be mutable.
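The ?. and ?? shorthands follow JavaScript's own operators of the same names, whose behavior can be demonstrated directly (the object shapes below are invented for illustration):

```javascript
// The spec's ?. and ?? shorthands match JavaScript semantics exactly.
const device = { adapter: { name: "example-adapter" } }; // illustrative
const missing = null;

const a = device?.adapter;  // the adapter object
const b = missing?.adapter; // undefined: missing is null
const c = b ?? "fallback";  // "fallback": b is undefined
const d = 0 ?? "fallback";  // 0: ?? only falls through for null/undefined
```

Note in particular the last line: unlike ||, the ?? operator treats falsy-but-present values such as 0 and "" as meaningful.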
3.1.2. WebGPU Interfaces
A WebGPU interface defines a WebGPU object. It can be used:
- On the content timeline where it was created, where it is a JavaScript-exposed WebIDL interface.
- On all other timelines, where only immutable properties can be accessed.
The following special property types can be defined on WebGPU interfaces:
- immutable property
  A read-only slot set during initialization of the object. It can be accessed from any timeline.
  Note: Since the slot is immutable, implementations may have a copy on multiple timelines, as needed. Immutable properties are defined in this way to avoid describing multiple copies in this spec.
  If named [[with brackets]], it is an internal slot. If named withoutBrackets, it is a readonly slot-backed attribute.
- content timeline property
  A property which is only accessible from the content timeline where the object was created.
  If named [[with brackets]], it is an internal slot. If named withoutBrackets, it is a slot-backed attribute.
Any interface which includes GPUObjectBase is a WebGPU interface.
interface mixin GPUObjectBase { attribute USVString label; };
To create a new WebGPU object (GPUObjectBase parent, interface T, GPUObjectDescriptorBase descriptor) (where T extends GPUObjectBase):
1. Let device be parent.[[device]].
2. Let object be a new instance of T.
3. Let internals be a new (uninitialized) instance of the type of T.[[internals]] (which may override GPUObjectBase.[[internals]]) that is accessible only from the device timeline of device.
4. Set object.[[device]] to device.
5. Set object.[[internals]] to internals.
6. Return [object, internals].
GPUObjectBase has the following immutable properties:

[[internals]], of type internal object, readonly, overridable
  The internal object. Operations on the contents of this object assert they are running on the device timeline, and that the device is valid.
  For each interface that subtypes GPUObjectBase, this may be overridden with a subtype of internal object. This slot is initially set to an uninitialized object of that type.

[[device]], of type device, readonly
  The device that owns the internal object. Operations on the contents of this object assert they are running on the device timeline, and that the device is valid.
GPUObjectBase has the following content timeline properties:
label, of type USVString
  A developer-provided label which is used in an implementation-defined way. It can be used by the browser, OS, or other tools to help identify the underlying internal object to the developer. Examples include displaying the label in GPUError messages, console warnings, browser developer tools, and platform debugging utilities.

  NOTE: Implementations should use labels to enhance error messages by using them to identify WebGPU objects. However, this need not be the only way of identifying objects: implementations should also use other available information, especially when no label is available. For example:
  - The label of the parent GPUTexture when printing a GPUTextureView.
  - The label of the parent GPUCommandEncoder when printing a GPURenderPassEncoder or GPUComputePassEncoder.
  - The label of the source GPUCommandEncoder when printing a GPUCommandBuffer.
  - The label of the source GPURenderBundleEncoder when printing a GPURenderBundle.

  NOTE: The label is a property of the GPUObjectBase. Two GPUObjectBase "wrapper" objects have completely separate label states, even if they refer to the same underlying object (for example, returned by getBindGroupLayout()). The label property will not change except by being set from JavaScript. This means one underlying object could be associated with multiple labels. This specification does not define how the label is propagated to the device timeline. How labels are used is completely implementation-defined: error messages could show the most recently set label, all known labels, or no labels at all.

  It is defined as a USVString because some user agents may supply it to the debug facilities of the underlying native APIs.
NOTE: Ideally, WebGPU interfaces should not prevent their parent objects, such as the [[device]] that owns them, from being garbage collected. This cannot be guaranteed, however, as holding a strong reference to a parent object may be required in some implementations.
As a result, developers should assume that a WebGPU interface may not be garbage collected until all child objects of that interface have also been garbage collected. This may cause some resources to remain allocated longer than anticipated.
Calling the destroy method on a WebGPU interface (such as GPUDevice.destroy() or GPUBuffer.destroy()) should be
favored over relying on garbage collection if predictable release of allocated resources is
needed.
3.1.3. Internal Objects
An internal object tracks state of WebGPU objects that may only be used on the device timeline, in device timeline slots, which may be mutable.
- device timeline slot
-
An internal slot which is only accessible from the device timeline.
All reads/writes to the mutable state of an internal object occur from steps executing on a single well-ordered device timeline. These steps may have been issued from a content timeline algorithm on any of multiple agents.
Note: An "agent" refers to a JavaScript "thread" (i.e. main thread, or Web Worker).
3.1.4. Object Descriptors
An object descriptor holds the information needed to create an object,
which is typically done via one of the create* methods of GPUDevice.
dictionary GPUObjectDescriptorBase { USVString label = ""; };
GPUObjectDescriptorBase has the following members:
label, of type USVString, defaulting to ""
  The initial value of GPUObjectBase.label.
3.2. Asynchrony
3.2.1. Invalid Internal Objects & Contagious Invalidity
Object creation operations in WebGPU don’t return promises, but nonetheless are internally
asynchronous. Returned objects refer to internal objects which are manipulated on a device timeline. Rather than fail with exceptions or rejections, most errors that occur on a device timeline are communicated through GPUErrors generated on the associated device.
Internal objects are either valid or invalid. An invalid object will never become valid at a later time, but some valid objects may become invalid.
Objects are invalid from creation if it wasn’t possible to create them.
This can happen, for example, if the object descriptor doesn’t describe a valid
object, or if there is not enough memory to allocate a resource.
It can also happen if an object is created with or from another invalid object
(for example, calling createView() on an invalid GPUTexture):
this case is referred to as contagious invalidity.
Internal objects of most types cannot become invalid after they are created, but still
may become unusable, e.g. if the owning device is lost or destroyed, or the object has a special internal state,
like buffer state "destroyed".
Internal objects of some types can become invalid after they are created; specifically, devices, adapters, GPUCommandBuffers, and command/pass/bundle encoders.
A GPUObjectBase object is valid to use with a targetObject if and only if the following requirements are met:
- object must be valid.
- object.[[device]] must be valid.
- object.[[device]] must equal targetObject.[[device]].
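The three requirements can be sketched as a predicate, with plain objects standing in for internal objects and their slots. The property names here ("valid", "device") are illustrative stand-ins for the spec's internal state, not API surface.

```javascript
// Sketch of the "valid to use with" check over illustrative objects.
function isValidToUseWith(object, targetObject) {
  return object.valid &&
         object.device.valid &&
         object.device === targetObject.device;
}

const deviceA = { valid: true };
const deviceB = { valid: true };
const buffer = { valid: true, device: deviceA };
const encoderSameDevice = { valid: true, device: deviceA };
const encoderOtherDevice = { valid: true, device: deviceB };

isValidToUseWith(buffer, encoderSameDevice);  // true
isValidToUseWith(buffer, encoderOtherDevice); // false: different devices
```

The last case is how cross-device usage errors arise: each object is individually valid, but they do not share a device.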
3.2.2. Promise Ordering
Several operations in WebGPU return promises.
WebGPU does not make any guarantees about the order in which these promises settle (resolve or reject), except for the following:
- For some GPUQueue q, if p1 = q.onSubmittedWorkDone() is called before p2 = q.onSubmittedWorkDone(), then p1 must settle before p2.
- For some GPUQueue q and GPUBuffer b on the same GPUDevice, if p1 = b.mapAsync() is called before p2 = q.onSubmittedWorkDone(), then p1 must settle before p2.
Applications must not rely on any other promise settlement ordering.
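The consequence for applications can be illustrated with plain promises: creation order says nothing about settlement order, so robust code awaits every promise it depends on rather than inferring one result from another. The timer-based helpers below are an analogy only, not WebGPU API calls.

```javascript
// Plain-JavaScript illustration: p1 is created first but settles
// later. Code must not assume "created first" implies "settled first"
// outside the guarantees listed above; await both explicitly.
function delayed(value, ms) {
  return new Promise(resolve => setTimeout(() => resolve(value), ms));
}

async function main() {
  const p1 = delayed("first-created", 20); // settles later
  const p2 = delayed("second-created", 5); // settles sooner
  const [r1, r2] = await Promise.all([p1, p2]); // order-independent
  return [r1, r2];
}
```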
3.3. Coordinate Systems
Rendering operations use the following coordinate systems:
- Normalized device coordinates (or NDC) have three dimensions, where:
  - -1.0 ≤ x ≤ 1.0
  - -1.0 ≤ y ≤ 1.0
  - 0.0 ≤ z ≤ 1.0
  - The bottom-left corner is at (-1.0, -1.0, z).
- Clip space coordinates have four dimensions: (x, y, z, w)
  - Clip space coordinates are used for the clip position of a vertex (i.e. the position output of a vertex shader), and for the clip volume.
  - Normalized device coordinates and clip space coordinates are related as follows: if point p = (p.x, p.y, p.z, p.w) is in the clip volume, then its normalized device coordinates are (p.x ÷ p.w, p.y ÷ p.w, p.z ÷ p.w).
- Framebuffer coordinates address the pixels in the framebuffer.
  - They have two dimensions.
  - Each pixel extends 1 unit in the x and y dimensions.
  - The top-left corner is at (0.0, 0.0).
  - x increases to the right.
  - y increases down.
  - See § 17 Render Passes and § 23.3.5 Rasterization.
- Viewport coordinates combine framebuffer coordinates in the x and y dimensions, with depth in z.
  - Normally 0.0 ≤ z ≤ 1.0, but this can be modified by setting [[viewport]].minDepth and maxDepth via setViewport().
- Fragment coordinates match viewport coordinates.
- UV coordinates are used to sample textures, and have two dimensions:
  - 0 ≤ u ≤ 1.0
  - 0 ≤ v ≤ 1.0
  - (0.0, 0.0) is in the first texel in texture memory address order.
  - (1.0, 1.0) is in the last texel in texture memory address order.
- Window coordinates, or present coordinates, match framebuffer coordinates, and are used when interacting with an external display or conceptually similar interface.
Note: WebGPU’s coordinate systems match DirectX’s coordinate systems in a graphics pipeline.
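The clip-space-to-NDC relation and the viewport depth range can be written out as small helpers. The function names are invented for this sketch; only the arithmetic comes from the relations above (the depth mapping assumes the linear minDepth/maxDepth interpolation implied by setViewport()).

```javascript
// Clip space -> normalized device coordinates: divide x, y, z by w.
function clipToNdc([x, y, z, w]) {
  return [x / w, y / w, z / w];
}

// NDC z -> viewport depth, using the viewport's minDepth/maxDepth
// range (defaults match the spec's normal 0.0..1.0 range).
function ndcDepthToViewport(z, minDepth = 0.0, maxDepth = 1.0) {
  return minDepth + z * (maxDepth - minDepth);
}

clipToNdc([2, -2, 1, 2]);        // → [1, -1, 0.5]
ndcDepthToViewport(0.5);         // → 0.5 (default depth range)
ndcDepthToViewport(0.5, 0, 0.5); // → 0.25 (narrowed depth range)
```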
3.4. Programming Model
3.4.1. Timelines
WebGPU’s behavior is described in terms of "timelines". Each operation (defined as algorithms) occurs on a timeline. Timelines clearly define both the order of operations, and which state is available to which operations.
Note: This "timeline" model describes the constraints of the multi-process models of browser engines (typically with a "content process" and "GPU process"), as well as the GPU itself as a separate execution unit in many implementations. Implementing WebGPU does not require timelines to execute in parallel, so does not require multiple processes, or even multiple threads.
- Content timeline
  Associated with the execution of the Web script. It includes calling all methods described by this specification.
  To issue steps to the content timeline from an operation on GPUDevice device, queue a global task for GPUDevice device with those steps.
- Device timeline
  Associated with the GPU device operations that are issued by the user agent. It includes creation of adapters, devices, and GPU resources and state objects, which are typically synchronous operations from the point of view of the user agent part that controls the GPU, but can live in a separate OS process.
- Queue timeline
  Associated with the execution of operations on the compute units of the GPU. It includes actual draw, copy, and compute jobs that run on the GPU.
- Immutable value example definition
  Can be used on any timeline.
- Content-timeline example definition
  Can only be used on the content timeline.
- Device-timeline example definition
  Can only be used on the device timeline.
- Queue-timeline example definition
  Can only be used on the queue timeline.
Immutable value example definition. Content-timeline example definition.
Immutable value example definition. Device-timeline example definition.
Immutable value example definition. Queue-timeline example definition.
In this specification, asynchronous operations are used when the return value depends on work that happens on any timeline other than the Content timeline. They are represented by promises and events in API.
GPUComputePassEncoder.dispatchWorkgroups():
1. User encodes a dispatchWorkgroups command by calling a method of the GPUComputePassEncoder, which happens on the Content timeline.
2. User issues GPUQueue.submit() that hands over the GPUCommandBuffer to the user agent, which processes it on the Device timeline by calling the OS driver to do a low-level submission.
3. The submit gets dispatched by the GPU invocation scheduler onto the actual compute units for execution, which happens on the Queue timeline.
GPUDevice.createBuffer():
1. User fills out a GPUBufferDescriptor and creates a GPUBuffer with it, which happens on the Content timeline.
2. User agent creates a low-level buffer on the Device timeline.
GPUBuffer.mapAsync():
1. The user requests to map a GPUBuffer on the Content timeline and gets a promise in return.
2. The user agent checks if the buffer is currently used by the GPU and makes a reminder to itself to check back when this usage is over.
3. After the GPU operating on the Queue timeline is done using the buffer, the user agent maps it to memory and resolves the promise.
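The deferral in the last two steps can be sketched with a toy model. The class and method names below are hypothetical, not part of the WebGPU API; this only illustrates how a map request waits for the queue timeline.

```javascript
// Minimal sketch (hypothetical names): a mapAsync() request is deferred
// while the GPU, on the queue timeline, is still using the buffer.
class MockMappableBuffer {
  constructor() {
    this.inUseByGPU = true;        // the GPU is still using the buffer
    this.pendingMapRequests = [];  // the "reminder to check back" from step 2
    this.mapped = false;
  }
  // Step 1: the user requests a mapping and gets a promise in return.
  mapAsync() {
    return new Promise(resolve => {
      if (!this.inUseByGPU) { this.mapped = true; resolve(); }
      else this.pendingMapRequests.push(resolve);
    });
  }
  // Step 3: once the queue timeline is done, map the buffer and resolve.
  onQueueWorkDone() {
    this.inUseByGPU = false;
    for (const resolve of this.pendingMapRequests) { this.mapped = true; resolve(); }
    this.pendingMapRequests = [];
  }
}
```

In a real implementation the "queue work done" signal comes from the driver, not from a method call.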
3.4.2. Memory Model
This section is non-normative.
Once a GPUDevice has been obtained during an application initialization routine,
we can describe the WebGPU platform as consisting of the following layers:
- The user agent implementing the specification.
- The operating system with low-level native API drivers for this device.
- The actual CPU and GPU hardware.
Each layer of the WebGPU platform may have different memory types that the user agent needs to consider when implementing the specification:
- Script-owned memory, such as an ArrayBuffer created by the script, is generally not accessible by a GPU driver.
- A user agent may have different processes responsible for running the content and for communicating with the GPU driver. In that case, it uses inter-process shared memory to transfer data.
- Dedicated GPUs have their own memory with high bandwidth, while integrated GPUs typically share memory with the system.
Most physical resources are allocated in the type of memory that is efficient for computation or rendering by the GPU. When the user needs to provide new data to the GPU, the data may first need to cross the process boundary in order to reach the user-agent part that communicates with the GPU driver. Then it may need to be made visible to the driver, which sometimes requires a copy into driver-allocated staging memory. Finally, it may need to be transferred to dedicated GPU memory, potentially changing the internal layout into one that is most efficient for GPUs to operate on.
All of these transitions are done by the WebGPU implementation of the user agent.
Note: This example describes the worst case, while in practice
the implementation may not need to cross the process boundary,
or may be able to expose the driver-managed memory directly to
the user behind an ArrayBuffer, thus avoiding any data copies.
3.4.3. Resource Usages
A physical resource can be used on the GPU with an internal usage:

- input: Buffer with input data for draw or dispatch calls. Preserves the contents. Allowed by buffer INDEX, buffer VERTEX, or buffer INDIRECT.
- constant: Resource bindings that are constant from the shader's point of view. Preserves the contents. Allowed by buffer UNIFORM or texture TEXTURE_BINDING.
- storage: Writable storage resource binding. Allowed by buffer STORAGE or texture STORAGE_BINDING.
- storage-read: Read-only storage resource binding. Preserves the contents. Allowed by buffer STORAGE or texture STORAGE_BINDING.
- attachment: Texture used as an output attachment in a render pass. Allowed by texture RENDER_ATTACHMENT.
- attachment-read: Texture used as a read-only attachment in a render pass. Preserves the contents. Allowed by texture RENDER_ATTACHMENT.
We define subresource to be either a whole buffer or a texture subresource.

A list U of internal usages is a compatible usage list if it satisfies any one of the following:
- Each usage in U is input, constant, storage-read, or attachment-read.
- Each usage in U is storage.
- U contains exactly one element: attachment.
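A check of these rules can be sketched as follows (the helper name is hypothetical, not part of the API):

```javascript
// Sketch: decides whether a list of internal usages is a compatible usage
// list, per the three rules above.
const READ_ONLY_USAGES = new Set(["input", "constant", "storage-read", "attachment-read"]);

function isCompatibleUsageList(usages) {
  if (usages.every(u => READ_ONLY_USAGES.has(u))) return true; // all read-only
  if (usages.every(u => u === "storage")) return true;         // all writable storage
  return usages.length === 1 && usages[0] === "attachment";    // a single attachment
}
```

For instance, combining storage with input for one subresource fails this check, while any number of storage usages together passes it.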
Enforcing that the usages are only combined into a compatible usage list allows the API to limit when data races can occur in working with memory. That property makes applications written against WebGPU more likely to run without modification on different platforms.
Generally, when an implementation processes an operation that uses a subresource in a way its current usage does not allow, it schedules a transition of the resource into the new state. In some cases, such as within an open GPURenderPassEncoder, such a transition is impossible due to hardware limitations. We define these places as usage scopes.
The main usage rule is, for any one subresource, its list of internal usages within one usage scope must be a compatible usage list.
For example, binding the same buffer for storage as well as for input within the same GPURenderPassEncoder would put the encoder
as well as the owning GPUCommandEncoder into the error state.
This combination of usages does not make a compatible usage list.
Note: Race conditions between multiple writable storage buffer/texture usages in a single usage scope are allowed.
The subresources of textures included in the views provided to GPURenderPassColorAttachment.view and GPURenderPassColorAttachment.resolveTarget are considered to be used as attachment for the usage scope of this render pass.
3.4.4. Synchronization
For each subresource of a physical resource, its set of internal usage flags is tracked on the Queue timeline.
On the Queue timeline, there is an ordered sequence of usage scopes. For the duration of each scope, the set of internal usage flags of any given subresource is constant. A subresource may transition to new usages at the boundaries between usage scopes.
This specification defines the following usage scopes:
- Outside of a pass (in GPUCommandEncoder), each (non-state-setting) command is one usage scope (e.g. copyBufferToTexture()).
- In a compute pass, each dispatch command (dispatchWorkgroups() or dispatchWorkgroupsIndirect()) is one usage scope. A subresource is "used" in the usage scope if it is potentially accessible by the command. Within a dispatch, for each bind group slot that is used by the current GPUComputePipeline's [[layout]], every subresource referenced by that bind group is "used" in the usage scope. State-setting compute pass commands, like setBindGroup(), do not contribute directly to a usage scope; they instead change the state that is checked in dispatch commands.
- One render pass is one usage scope. A subresource is "used" in the usage scope if it is referenced by any (state-setting or non-state-setting) command. For example, in setBindGroup(), every subresource in bindGroup is "used" in the render pass's usage scope.
The above should probably talk about GPU commands. But we don’t have a way to reference specific GPU commands (like dispatch) yet.
For example, all of the following are considered "used" in a render pass's usage scope:
- Subresources used in any setBindGroup() call, regardless of whether the currently bound pipeline's shader or layout actually depends on these bindings, or whether the bind group is shadowed by another 'set' call.
- A buffer used in any setVertexBuffer() call, regardless of whether any draw call depends on this buffer, or whether this buffer is shadowed by another 'set' call.
- A buffer used in any setIndexBuffer() call, regardless of whether any draw call depends on this buffer, or whether this buffer is shadowed by another 'set' call.
- A texture subresource used as a color attachment, resolve attachment, or depth/stencil attachment in the GPURenderPassDescriptor passed to beginRenderPass(), regardless of whether the shader actually depends on these attachments.
- Resources used in bind group entries with visibility 0, or visible only to the compute stage but used in a render pass (or vice versa).
During command encoding, every usage of a subresource is recorded in one of the usage scopes in the command buffer.
For each usage scope, the implementation performs usage scope validation by composing the list of all internal usage flags of each subresource used in the usage scope.
If any of those lists is not a compatible usage list, GPUCommandEncoder.finish() will generate a validation error.
3.5. Core Internal Objects
3.5.1. Adapters
An adapter identifies an implementation of WebGPU on the system: both an instance of compute/rendering functionality on the platform underlying a browser, and an instance of a browser’s implementation of WebGPU on top of that functionality.
Adapters do not uniquely represent underlying implementations:
calling requestAdapter() multiple times returns a different adapter object each time.
Each adapter object can only be used to create one device:
upon a successful requestDevice(), the adapter becomes invalid.
Additionally, adapter objects may expire at any time.
Note: This ensures applications use the latest system state for adapter selection when creating a device.
It also encourages robustness to more scenarios by making them look similar: first initialization,
reinitialization due to an unplugged adapter, reinitialization due to a test GPUDevice.destroy() call, etc.
An adapter may be considered a fallback adapter if it has significant performance caveats in exchange for some combination of wider compatibility, more predictable behavior, or improved privacy. It is not required that a fallback adapter is available on every system.
An adapter has the following internal slots:
- [[features]], of type ordered set<GPUFeatureName>, readonly: The features which can be used to create devices on this adapter.
- [[limits]], of type supported limits, readonly: The best limits which can be used to create devices on this adapter. Each adapter limit must be the same as or better than its default value in supported limits.
- [[fallback]], of type boolean: If set to true, indicates that the adapter is a fallback adapter.
Adapters are exposed via GPUAdapter.
3.5.2. Devices
A device is the logical instantiation of an adapter, through which internal objects are created. It can be shared across multiple agents (e.g. dedicated workers).
A device is the exclusive owner of all internal objects created from it:
when the device becomes invalid (is lost or destroyed),
it and all objects created on it (directly, e.g. createTexture(), or indirectly, e.g. createView()) become
implicitly unusable.
A device has the following internal slots:
- [[adapter]], of type adapter, readonly: The adapter from which this device was created.
- [[features]], of type ordered set<GPUFeatureName>, readonly: The features which can be used on this device. No additional features can be used, even if the underlying adapter can support them.
- [[limits]], of type supported limits, readonly: The limits which can be used on this device. No better limits can be used, even if the underlying adapter can support them.
Given an adapter adapter and a GPUDeviceDescriptor descriptor, a new device is initialized as follows:
1. Set device.[[adapter]] to adapter.
2. Set device.[[features]] to the set of values in descriptor.requiredFeatures.
3. Let device.[[limits]] be a supported limits object with the default values. For each (key, value) pair in descriptor.requiredLimits, set the member corresponding to key in device.[[limits]] to the better value of value or the default value in supported limits.
Any time the user agent needs to revoke access to a device, it calls lose the device(device, "unknown") on the device’s device timeline,
potentially ahead of other operations currently queued on that timeline.
If an operation fails with side effects that would observably change the state of objects on the device or potentially corrupt internal implementation/driver state, the device should be lost to prevent these changes from being observable.
Note: For all device losses not initiated by the application (via destroy()),
user agents should consider issuing developer-visible warnings unconditionally,
even if the lost promise is handled.
These scenarios should be rare, and the signal is vital to developers because most of the WebGPU
API tries to behave like nothing is wrong to avoid interrupting the runtime flow of the application:
no validation errors are raised, most promises resolve normally, etc.
To lose the device(device, reason):
1. Make device invalid.
2. Let gpuDevice be the content timeline GPUDevice corresponding to device.
3. Issue the following steps on the content timeline of gpuDevice:
   1. Resolve device.lost with a new GPUDeviceLostInfo with reason set to reason and message set to an implementation-defined value. Note: message should not disclose unnecessary user/system information and should never be parsed by applications.
   2. Complete any outstanding mapAsync() steps.
   3. Complete any outstanding onSubmittedWorkDone() steps.
Note: No errors are generated after device loss. See § 22 Errors & Debugging.
Devices are exposed via GPUDevice.
3.6. Optional Capabilities
WebGPU adapters and devices have capabilities, which describe WebGPU functionality that differs between different implementations, typically due to hardware or system software constraints. A capability is either a feature or a limit.
A user agent must not reveal more than 32 distinguishable configurations or buckets.
The capabilities of an adapter must conform to § 4.2.1 Adapter Capability Guarantees.
Only supported capabilities may be requested in requestDevice();
requesting unsupported capabilities results in failure.
The capabilities of a device are exactly the ones which were requested in requestDevice(). These capabilities are enforced regardless of the
capabilities of the adapter.
For privacy considerations, see § 2.2.1 Machine-specific features and limits.
3.6.1. Features
A feature is a set of optional WebGPU functionality that is not supported on all implementations, typically due to hardware or system software constraints.
Functionality that is part of a feature may only be used if the feature was requested at device
creation (in requiredFeatures).
Otherwise, using existing API surfaces in a new way typically results in a validation error,
and using optional API surfaces results in the following:
- Using a new method or enum value always throws a TypeError.
- Using a new dictionary member with a (correctly-typed) non-default value typically results in a validation error.
- Using a new WGSL enable directive always results in a createShaderModule() validation error.
A GPUFeatureName feature is enabled for a GPUObjectBase object if and only if object.[[device]].[[features]] contains feature. See the Feature Index for a description of the functionality each feature enables.
3.6.2. Limits
Each limit is a numeric limit on the usage of WebGPU on a device.
Each limit has a default value.
Every adapter is guaranteed to support the default value or better.
The default is used if a value is not explicitly specified in requiredLimits.
One limit value may be better than another. A better limit value always relaxes validation, enabling strictly more programs to be valid. For each limit class, "better" is defined.
Different limits have different limit classes:
- maximum: The limit enforces a maximum on some value passed into the API. Higher values are better. May only be set to values ≥ the default; lower values are clamped to the default.
- alignment: The limit enforces a minimum alignment on some value passed into the API; that is, the value must be a multiple of the limit. Lower values are better. May only be set to powers of 2 which are ≤ the default; values which are not powers of 2 are invalid, and higher powers of 2 are clamped to the default.
Note: Setting "better" limits may not necessarily be desirable, as they may have a performance impact. Because of this, and to improve portability across devices and implementations, applications should generally request the "worst" limits that work for their content (ideally, the default values).
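The clamping behavior of the two limit classes can be sketched as follows. This is a non-normative illustration over a small hypothetical subset of limits, not an implementation:

```javascript
// Sketch: resolve a requested limit value against its default, per limit class.
// "maximum": higher is better, so values below the default are clamped up.
// "alignment": lower is better, so powers of 2 above the default are clamped down.
const DEFAULTS = {
  maxBindGroups: { value: 4, class: "maximum" },
  minUniformBufferOffsetAlignment: { value: 256, class: "alignment" },
};

function resolveLimit(name, requested) {
  const { value: def, class: cls } = DEFAULTS[name];
  if (cls === "maximum") return Math.max(requested, def);
  // alignment-class values must be powers of 2
  if (requested <= 0 || (requested & (requested - 1)) !== 0) {
    throw new TypeError(`${name} must be a power of 2`);
  }
  return Math.min(requested, def);
}
```

For example, requesting maxBindGroups of 2 resolves to the default 4, while requesting 8 keeps 8 (if the adapter supports it).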
A supported limits object has a value for every limit defined by WebGPU:
| Limit name | Type | Limit class | Default | Description |
|---|---|---|---|---|
| maxTextureDimension1D | GPUSize32 | maximum | 8192 | The maximum allowed value for the size.width of a texture created with dimension "1d". |
| maxTextureDimension2D | GPUSize32 | maximum | 8192 | The maximum allowed value for the size.width and size.height of a texture created with dimension "2d". |
| maxTextureDimension3D | GPUSize32 | maximum | 2048 | The maximum allowed value for the size.width, size.height and size.depthOrArrayLayers of a texture created with dimension "3d". |
| maxTextureArrayLayers | GPUSize32 | maximum | 256 | The maximum allowed value for the size.depthOrArrayLayers of a texture created with dimension "2d". |
| maxBindGroups | GPUSize32 | maximum | 4 | The maximum number of GPUBindGroupLayouts allowed in bindGroupLayouts when creating a GPUPipelineLayout. |
| maxBindGroupsPlusVertexBuffers | GPUSize32 | maximum | 24 | The maximum number of bind group and vertex buffer slots used simultaneously, counting any empty slots below the highest index. Validated in createRenderPipeline() and in draw calls. |
| maxBindingsPerBindGroup | GPUSize32 | maximum | 1000 | The number of binding indices available when creating a GPUBindGroupLayout. (Note: this limit is normative, but arbitrary. With the default binding slot limits, it is impossible to use 1000 bindings in one bind group, but this allows GPUBindGroupLayoutEntry.binding values up to 999.) |
| maxDynamicUniformBuffersPerPipelineLayout | GPUSize32 | maximum | 8 | The maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout which are uniform buffers with dynamic offsets. See Exceeds the binding slot limits. |
| maxDynamicStorageBuffersPerPipelineLayout | GPUSize32 | maximum | 4 | The maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout which are storage buffers with dynamic offsets. See Exceeds the binding slot limits. |
| maxSampledTexturesPerShaderStage | GPUSize32 | maximum | 16 | For each possible GPUShaderStage stage, the maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout which are sampled textures. See Exceeds the binding slot limits. |
| maxSamplersPerShaderStage | GPUSize32 | maximum | 16 | For each possible GPUShaderStage stage, the maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout which are samplers. See Exceeds the binding slot limits. |
| maxStorageBuffersPerShaderStage | GPUSize32 | maximum | 8 | For each possible GPUShaderStage stage, the maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout which are storage buffers. See Exceeds the binding slot limits. |
| maxStorageTexturesPerShaderStage | GPUSize32 | maximum | 4 | For each possible GPUShaderStage stage, the maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout which are storage textures. See Exceeds the binding slot limits. |
| maxUniformBuffersPerShaderStage | GPUSize32 | maximum | 12 | For each possible GPUShaderStage stage, the maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout which are uniform buffers. See Exceeds the binding slot limits. |
| maxUniformBufferBindingSize | GPUSize64 | maximum | 65536 bytes | The maximum GPUBufferBinding.size for bindings with a GPUBindGroupLayoutEntry entry for which entry.buffer?.type is "uniform". |
| maxStorageBufferBindingSize | GPUSize64 | maximum | 134217728 bytes (128 MiB) | The maximum GPUBufferBinding.size for bindings with a GPUBindGroupLayoutEntry entry for which entry.buffer?.type is "storage" or "read-only-storage". |
| minUniformBufferOffsetAlignment | GPUSize32 | alignment | 256 bytes | The required alignment for GPUBufferBinding.offset and the dynamic offsets provided in setBindGroup(), for bindings with a GPUBindGroupLayoutEntry entry for which entry.buffer?.type is "uniform". |
| minStorageBufferOffsetAlignment | GPUSize32 | alignment | 256 bytes | The required alignment for GPUBufferBinding.offset and the dynamic offsets provided in setBindGroup(), for bindings with a GPUBindGroupLayoutEntry entry for which entry.buffer?.type is "storage" or "read-only-storage". |
| maxVertexBuffers | GPUSize32 | maximum | 8 | The maximum number of buffers when creating a GPURenderPipeline. |
| maxBufferSize | GPUSize64 | maximum | 268435456 bytes (256 MiB) | The maximum size of size when creating a GPUBuffer. |
| maxVertexAttributes | GPUSize32 | maximum | 16 | The maximum number of attributes in total across buffers when creating a GPURenderPipeline. |
| maxVertexBufferArrayStride | GPUSize32 | maximum | 2048 bytes | The maximum allowed arrayStride when creating a GPURenderPipeline. |
| maxInterStageShaderComponents | GPUSize32 | maximum | 60 | The maximum allowed number of components of input or output variables for inter-stage communication (like vertex outputs or fragment inputs). |
| maxInterStageShaderVariables | GPUSize32 | maximum | 16 | The maximum allowed number of input or output variables for inter-stage communication (like vertex outputs or fragment inputs). |
| maxColorAttachments | GPUSize32 | maximum | 8 | The maximum allowed number of color attachments in GPURenderPipelineDescriptor.fragment.targets, GPURenderPassDescriptor.colorAttachments, and GPURenderPassLayout.colorFormats. |
| maxColorAttachmentBytesPerSample | GPUSize32 | maximum | 32 | The maximum number of bytes necessary to hold one sample (pixel or subpixel) of render pipeline output data, across all color attachments. |
| maxComputeWorkgroupStorageSize | GPUSize32 | maximum | 16384 bytes | The maximum number of bytes of workgroup storage used for a compute stage GPUShaderModule entry-point. |
| maxComputeInvocationsPerWorkgroup | GPUSize32 | maximum | 256 | The maximum value of the product of the workgroup_size dimensions for a compute stage GPUShaderModule entry-point. |
| maxComputeWorkgroupSizeX | GPUSize32 | maximum | 256 | The maximum value of the workgroup_size X dimension for a compute stage GPUShaderModule entry-point. |
| maxComputeWorkgroupSizeY | GPUSize32 | maximum | 256 | The maximum value of the workgroup_size Y dimension for a compute stage GPUShaderModule entry-point. |
| maxComputeWorkgroupSizeZ | GPUSize32 | maximum | 64 | The maximum value of the workgroup_size Z dimension for a compute stage GPUShaderModule entry-point. |
| maxComputeWorkgroupsPerDimension | GPUSize32 | maximum | 65535 | The maximum value for the arguments of dispatchWorkgroups(workgroupCountX, workgroupCountY, workgroupCountZ). |
3.6.2.1. GPUSupportedLimits
GPUSupportedLimits exposes the limits supported by an adapter or device.
See GPUAdapter.limits and GPUDevice.limits.
[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUSupportedLimits {
    readonly attribute unsigned long maxTextureDimension1D;
    readonly attribute unsigned long maxTextureDimension2D;
    readonly attribute unsigned long maxTextureDimension3D;
    readonly attribute unsigned long maxTextureArrayLayers;
    readonly attribute unsigned long maxBindGroups;
    readonly attribute unsigned long maxBindGroupsPlusVertexBuffers;
    readonly attribute unsigned long maxBindingsPerBindGroup;
    readonly attribute unsigned long maxDynamicUniformBuffersPerPipelineLayout;
    readonly attribute unsigned long maxDynamicStorageBuffersPerPipelineLayout;
    readonly attribute unsigned long maxSampledTexturesPerShaderStage;
    readonly attribute unsigned long maxSamplersPerShaderStage;
    readonly attribute unsigned long maxStorageBuffersPerShaderStage;
    readonly attribute unsigned long maxStorageTexturesPerShaderStage;
    readonly attribute unsigned long maxUniformBuffersPerShaderStage;
    readonly attribute unsigned long long maxUniformBufferBindingSize;
    readonly attribute unsigned long long maxStorageBufferBindingSize;
    readonly attribute unsigned long minUniformBufferOffsetAlignment;
    readonly attribute unsigned long minStorageBufferOffsetAlignment;
    readonly attribute unsigned long maxVertexBuffers;
    readonly attribute unsigned long long maxBufferSize;
    readonly attribute unsigned long maxVertexAttributes;
    readonly attribute unsigned long maxVertexBufferArrayStride;
    readonly attribute unsigned long maxInterStageShaderComponents;
    readonly attribute unsigned long maxInterStageShaderVariables;
    readonly attribute unsigned long maxColorAttachments;
    readonly attribute unsigned long maxColorAttachmentBytesPerSample;
    readonly attribute unsigned long maxComputeWorkgroupStorageSize;
    readonly attribute unsigned long maxComputeInvocationsPerWorkgroup;
    readonly attribute unsigned long maxComputeWorkgroupSizeX;
    readonly attribute unsigned long maxComputeWorkgroupSizeY;
    readonly attribute unsigned long maxComputeWorkgroupSizeZ;
    readonly attribute unsigned long maxComputeWorkgroupsPerDimension;
};
3.6.2.2. GPUSupportedFeatures
GPUSupportedFeatures is a setlike interface. Its set entries are
the GPUFeatureName values of the features supported by an adapter or
device. It must only contain strings from the GPUFeatureName enum.
[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUSupportedFeatures {
    readonly setlike<DOMString>;
};
The type of GPUSupportedFeatures' set entries is DOMString to allow user
agents to gracefully handle valid GPUFeatureNames which are added in later revisions of the spec
but which the user agent has not been updated to recognize yet. If the set entries type was GPUFeatureName, the following code would throw a TypeError rather than reporting false:
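A sketch of that behavior, modeling the setlike interface with a plain Set (the exact feature names here are illustrative):

```javascript
// Because the set entries are plain strings, probing for a feature name this
// implementation does not recognize simply reports false instead of throwing.
const features = new Set(["depth-clip-control", "timestamp-query"]);

const supported = features.has("some-future-feature");
console.log(supported); // false
```

With a WebIDL-enumerated entry type, the unknown string would fail enum conversion and throw before the lookup could happen.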
3.6.2.3. WGSLLanguageFeatures
WGSLLanguageFeatures is the setlike interface of navigator.gpu.wgslLanguageFeatures.
Its set entries are the string names of the WGSL language extensions supported by the implementation (regardless of the adapter or device).
[Exposed=(Window, DedicatedWorker), SecureContext]
interface WGSLLanguageFeatures {
    readonly setlike<DOMString>;
};
3.6.2.4. GPUAdapterInfo
GPUAdapterInfo exposes various identifying information about an adapter.
None of the members in GPUAdapterInfo are guaranteed to be populated. It is at the user
agent’s discretion which values to reveal, and it is likely that on some devices none of the values
will be populated. As such, applications must be able to handle any possible GPUAdapterInfo values,
including the absence of those values.
For privacy considerations, see § 2.2.6 Adapter Identifiers.
[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUAdapterInfo {
    readonly attribute DOMString vendor;
    readonly attribute DOMString architecture;
    readonly attribute DOMString device;
    readonly attribute DOMString description;
};
GPUAdapterInfo has the following attributes:
- vendor, of type DOMString, readonly: The name of the vendor of the adapter, if available. Empty string otherwise.
- architecture, of type DOMString, readonly: The name of the family or class of GPUs the adapter belongs to, if available. Empty string otherwise.
- device, of type DOMString, readonly: A vendor-specific identifier for the adapter, if available. Empty string otherwise. Note: This is a value that represents the type of adapter. For example, it may be a PCI device ID. It does not uniquely identify a given piece of hardware like a serial number.
- description, of type DOMString, readonly: A human-readable string describing the adapter as reported by the driver, if available. Empty string otherwise. Note: Because no formatting is applied to description, attempting to parse this value is not recommended. Applications which change their behavior based on the GPUAdapterInfo, such as applying workarounds for known driver issues, should rely on the other fields when possible.
A new GPUAdapterInfo for an adapter adapter is populated as follows:
1. Let adapterInfo be a new GPUAdapterInfo.
2. If the vendor is known, set adapterInfo.vendor to the name of adapter's vendor as a normalized identifier string. To preserve privacy, the user agent may instead set adapterInfo.vendor to the empty string or a reasonable approximation of the vendor as a normalized identifier string.
3. If the architecture is known, set adapterInfo.architecture to a normalized identifier string representing the family or class of adapters to which adapter belongs. To preserve privacy, the user agent may instead set adapterInfo.architecture to the empty string or a reasonable approximation of the architecture as a normalized identifier string.
4. If the device is known, set adapterInfo.device to a normalized identifier string representing a vendor-specific identifier for adapter. To preserve privacy, the user agent may instead set adapterInfo.device to the empty string or a reasonable approximation of a vendor-specific identifier as a normalized identifier string.
5. If a description is known, set adapterInfo.description to a description of the adapter as reported by the driver. To preserve privacy, the user agent may instead set adapterInfo.description to the empty string or a reasonable approximation of a description.
6. Return adapterInfo.
A normalized identifier string follows the regular expression pattern [a-z0-9]+(-[a-z0-9]+)*.
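A sketch of validating candidate strings against this pattern:

```javascript
// The normalized identifier string pattern above: lowercase alphanumeric
// segments separated by single hyphens, with no leading or trailing hyphen.
const normalizedIdentifier = /^[a-z0-9]+(-[a-z0-9]+)*$/;

console.log(normalizedIdentifier.test("xyz-8086")); // true
console.log(normalizedIdentifier.test("Gen12LP"));  // false: uppercase is not allowed
console.log(normalizedIdentifier.test("-leading")); // false: segments cannot be empty
```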
3.7. Extension Documents
"Extension Documents" are additional documents which describe new functionality which is
non-normative and not part of the WebGPU/WGSL specifications.
They describe functionality that builds upon these specifications, often including one or more new
API feature flags and/or WGSL enable directives, or interactions with other draft
web specifications.
WebGPU implementations must not expose extension functionality; doing so is a spec violation. New functionality does not become part of the WebGPU standard until it is integrated into the WebGPU specification (this document) and/or WGSL specification.
3.8. Origin Restrictions
WebGPU allows accessing image data stored in images, videos, and canvases. Restrictions are imposed on the use of cross-domain media, because shaders can be used to indirectly deduce the contents of textures which have been uploaded to the GPU.
WebGPU disallows uploading an image source if it is not origin-clean.
This also implies that the origin-clean flag for a
canvas rendered using WebGPU will never be set to false.
For more information on issuing CORS requests for image and video elements, consult:
3.9. Task Sources
3.9.1. WebGPU Task Source
WebGPU defines a new task source called the WebGPU task source.
It is used for the uncapturederror event and GPUDevice.lost.
To queue a global task for GPUDevice device, with a series of steps steps:
1. Queue a global task on the WebGPU task source, with the global object that was used to create device, and the steps steps.
3.9.2. Automatic Expiry Task Source
WebGPU defines a new task source called the automatic expiry task source. It is used for the automatic, timed expiry (destruction) of certain objects:
- GPUTextures returned by getCurrentTexture()
- GPUExternalTextures created from HTMLVideoElements
To queue an automatic expiry task for GPUDevice device, with a series of steps steps:
1. Queue a global task on the automatic expiry task source, with the global object that was used to create device, and the steps steps.
Tasks from the automatic expiry task source should be processed with high priority; in particular, once queued, they should run before user-defined (JavaScript) tasks.
Implementation note: It is valid to implement a high-priority expiry "task" by instead inserting additional steps at a fixed point inside the event loop processing model rather than running an actual task.
3.10. Color Spaces and Encoding
WebGPU does not provide color management. All values within WebGPU (such as texture elements) are raw numeric values, not color-managed color values.
WebGPU does interface with color-managed outputs (via GPUCanvasConfiguration) and inputs
(via copyExternalImageToTexture() and importExternalTexture()).
Thus, color conversion must be performed between the WebGPU numeric values and the external color values.
Each such interface point locally defines an encoding (color space, transfer function, and alpha
premultiplication) in which the WebGPU numeric values are to be interpreted.
WebGPU allows all of the color spaces in the PredefinedColorSpace enum.
Note, each color space is defined over an extended range, as defined by the referenced CSS definitions,
to represent color values outside of its space (in both chrominance and luminance).
An out-of-gamut premultiplied RGBA value is one where any of the R/G/B channel values
exceeds the alpha channel value. For example, the premultiplied sRGB RGBA value [1.0, 0, 0, 0.5]
represents the (unpremultiplied) color [2, 0, 0] with 50% alpha, written rgb(srgb 2 0 0 / 50%) in CSS.
Just like any color value outside the sRGB color gamut, this is a well defined point in the extended color space
(except when alpha is 0, in which case there is no color).
However, when such values are output to a visible canvas, the result is undefined
(see GPUCanvasAlphaMode "premultiplied").
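The relationship between premultiplied and unpremultiplied values in the example above can be sketched with a hypothetical helper:

```javascript
// Sketch: recover the unpremultiplied color from a premultiplied RGBA value
// by dividing the color channels by alpha.
function unpremultiply([r, g, b, a]) {
  if (a === 0) return null; // alpha 0 carries no color information
  return [r / a, g / a, b / a, a];
}

// The premultiplied sRGB value [1.0, 0, 0, 0.5] represents the
// (unpremultiplied) color [2, 0, 0] at 50% alpha.
console.log(unpremultiply([1.0, 0, 0, 0.5])); // [2, 0, 0, 0.5]
```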
3.10.1. Color Space Conversions
A color is converted between spaces by translating its representation in one space to a representation in another according to the definitions above.
If the source value has fewer than 4 RGBA channels, the missing green/blue/alpha channels are set to 0, 0, 1, respectively, before converting for color space/encoding and alpha premultiplication.
After conversion, if the destination needs fewer than 4 channels, the additional channels
are ignored.
Note: Grayscale images generally represent RGB values (V, V, V), or RGBA values (V, V, V, A) in their color space.
Colors are not lossily clamped during conversion: converting from one color space to another will result in values outside the range [0, 1] if the source color values were outside the range of the destination color space’s gamut. For an sRGB destination, for example, this can occur if the source is rgba16float, in a wider color space like Display-P3, or is premultiplied and contains out-of-gamut values.
Similarly, if the source value has a high bit depth (e.g. PNG with 16 bits per component) or
extended range (e.g. canvas with float16 storage), these colors are preserved through color space
conversion, with intermediate computations having at least the precision of the source.
3.10.2. Color Space Conversion Elision
If the source and destination of a color space/encoding conversion are the same, then conversion is not necessary. In general, if any given step of the conversion is an identity function (no-op), implementations should elide it, for performance.
For optimal performance, applications should set their color space and encoding
options so that the number of necessary conversions is minimized throughout the process.
For various image sources of GPUImageCopyExternalImage:

- ImageBitmap:
  - Premultiplication is controlled via premultiplyAlpha.
  - Color space is controlled via colorSpaceConversion.
- 2d canvas:
  - Color space is controlled via the colorSpace context creation attribute.
- WebGL canvas:
  - Premultiplication is controlled via the premultipliedAlpha option in WebGLContextAttributes.
  - Color space is controlled via the WebGLRenderingContext's drawingBufferColorSpace state.
Note: Check browser implementation support for these features before relying on them.
3.11. Numeric conversions from JavaScript to WGSL
Several parts of the WebGPU API (pipeline-overridable constants and
render pass clear values) take numeric values from WebIDL (double or float) and convert
them to WGSL values (bool, i32, u32, f32, f16).
To convert an IDL value idlValue of type double or float to WGSL type T, possibly throwing a TypeError:

Note: This TypeError is generated in the device timeline and never surfaced to JavaScript.

1. Assert idlValue is a finite value, since it is not unrestricted double or unrestricted float.
2. Let v be the ECMAScript Number resulting from ! converting idlValue to an ECMAScript value.
3. If T is bool:
   Return the WGSL bool value corresponding to the result of ! converting v to an IDL value of type boolean.
   Note: This algorithm is called after the conversion from an ECMAScript value to an IDL double or float value. If the original ECMAScript value was a non-numeric, non-boolean value like [] or {}, then the WGSL bool result may be different than if the ECMAScript value had been converted to IDL boolean directly.

   If T is i32:
   Return the WGSL i32 value corresponding to the result of ? converting v to an IDL value of type [EnforceRange] long.

   If T is u32:
   Return the WGSL u32 value corresponding to the result of ? converting v to an IDL value of type [EnforceRange] unsigned long.

   If T is f32:
   Return the WGSL f32 value corresponding to the result of ? converting v to an IDL value of type float.

   If T is f16:
   1. Let wgslF32 be the WGSL f32 value corresponding to the result of ? converting v to an IDL value of type float.
   2. Return f16(wgslF32), the result of ! converting the WGSL f32 value to f16 as defined in WGSL floating point conversion.
   Note: As long as the value is in-range of f32, no error is thrown, even if the value is out-of-range of f16.

To convert a GPUColor color to a texel value of texture format format, possibly throwing a TypeError:

Note: This TypeError is generated in the device timeline and never surfaced to JavaScript.

1. If the components of format (assert they all have the same type) are:
   - floating-point types or normalized types: Let T be f32.
   - signed integer types: Let T be i32.
   - unsigned integer types: Let T be u32.
2. Let wgslColor be a WGSL value of type vec4<T>, where the 4 components are the RGBA channels of color, each ? converted to WGSL type T.
3. Convert wgslColor to format using the same conversion rules as the § 23.3.7 Output Merging step, and return the result.

Note: For non-integer types, the exact choice of value is implementation-defined. For normalized types, the value is clamped to the range of the type.

Note: In other words, the value written will be as if it was written by a WGSL shader that outputs the value represented as a vec4 of f32, i32, or u32.
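The scalar cases of this conversion can be sketched in plain JavaScript. The helper names below are ours, the input is assumed to be an already-finite double, and the f16 case is omitted:

```javascript
// Sketch of how WebGPU converts JavaScript numbers to WGSL scalar values.
// [EnforceRange] integer conversion: truncate toward zero, then throw if
// the result falls outside the type's range.
function enforceRange(v, lo, hi) {
  const t = Math.trunc(v);
  if (!Number.isFinite(t) || t < lo || t > hi) {
    throw new TypeError(`value ${v} out of range [${lo}, ${hi}]`);
  }
  return t;
}

function toWGSL(v, type) {
  switch (type) {
    case "bool": return Boolean(v);                       // IDL boolean
    case "i32":  return enforceRange(v, -(2 ** 31), 2 ** 31 - 1);
    case "u32":  return enforceRange(v, 0, 2 ** 32 - 1);
    case "f32": {
      const f = Math.fround(v);                           // nearest f32
      if (!Number.isFinite(f)) throw new TypeError("out of range for f32");
      return f;
    }
    default: throw new TypeError(`unknown type ${type}`);
  }
}

toWGSL(3.7, "i32"); // → 3 (truncated)
toWGSL(0, "bool");  // → false
```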
4. Initialization
4.1. navigator.gpu
A GPU object is available in the Window and DedicatedWorkerGlobalScope contexts through the Navigator and WorkerNavigator interfaces respectively and is exposed via navigator.gpu:
interface mixin NavigatorGPU {
    [SameObject, SecureContext] readonly attribute GPU gpu;
};
Navigator includes NavigatorGPU;
WorkerNavigator includes NavigatorGPU;
NavigatorGPU has the following attributes:
gpu, of type GPU, readonly
    A global singleton providing top-level entry points like requestAdapter().
4.2. GPU
GPU is the entry point to WebGPU.
[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPU {
    Promise<GPUAdapter?> requestAdapter(optional GPURequestAdapterOptions options = {});
    GPUTextureFormat getPreferredCanvasFormat();
    [SameObject] readonly attribute WGSLLanguageFeatures wgslLanguageFeatures;
};
GPU has the following methods and attributes:
requestAdapter(options)
    Requests an adapter from the user agent. The user agent chooses whether to return an adapter, and, if so, chooses according to the provided options.

    Called on: GPU this.

    Arguments for the GPU.requestAdapter(options) method:
        options : GPURequestAdapterOptions (optional) — Criteria used to select the adapter.

    Returns: Promise<GPUAdapter?>

    Content timeline steps:
    1. Let contentTimeline be the current Content timeline.
    2. Let promise be a new promise.
    3. Issue the initialization steps on the Device timeline of this.
    4. Return promise.

    Device timeline initialization steps:
    1. Let adapter be null.
    2. If the user agent chooses to return an adapter, it should:
       1. Set adapter to a valid adapter, chosen according to the rules in § 4.2.2 Adapter Selection and the criteria in options, adhering to § 4.2.1 Adapter Capability Guarantees. The supported limits of the adapter must adhere to the requirements defined in § 3.6.2 Limits.
       2. If adapter meets the criteria of a fallback adapter, set adapter.[[fallback]] to true.
    3. Issue the subsequent steps on contentTimeline.

    Content timeline steps:
    1. If adapter is not null, resolve promise with a new GPUAdapter encapsulating adapter.
    2. Otherwise, resolve promise with null.
getPreferredCanvasFormat()
    Returns an optimal GPUTextureFormat for displaying 8-bit depth, standard dynamic range content on this system. Must only return "rgba8unorm" or "bgra8unorm".

    The returned value can be passed as the format to configure() calls on a GPUCanvasContext to ensure the associated canvas is able to display its contents efficiently.

    Note: Canvases which are not displayed to the screen may or may not benefit from using this format.

    Called on: GPU this.

    Returns: GPUTextureFormat

    Content timeline steps:
    1. Return either "rgba8unorm" or "bgra8unorm", depending on which format is optimal for displaying WebGPU canvases on this system.

wgslLanguageFeatures, of type WGSLLanguageFeatures, readonly
    The names of supported WGSL language extensions. Supported language extensions are automatically enabled.
Adapters may become invalid ("expire") at any time.
Upon any change in the system’s state that could affect the result of any requestAdapter() call, the user agent should expire all previously-returned adapters. For example:
- A physical adapter is added/removed (via plug/unplug, driver update, hang recovery, etc.)
- The system's power configuration has changed (laptop unplugged, power settings changed, etc.)
Note: User agents may choose to expire adapters often, even when there has been no system
state change (e.g. seconds or minutes after the adapter was created).
This can help obfuscate real system state changes, and make developers more aware that calling requestAdapter() again is always necessary before calling requestDevice().
If an application does encounter this situation, standard device-loss recovery
handling should allow it to recover.
4.2.1. Adapter Capability Guarantees
Any GPUAdapter returned by requestAdapter() must provide the following guarantees:

- At least one of the following must be true:
  - "texture-compression-bc" is supported.
  - Both "texture-compression-etc2" and "texture-compression-astc" are supported.
- All supported limits must be either the default value or better.
- All alignment-class limits must be powers of 2.
- maxBindingsPerBindGroup must be ≥ (max bindings per shader stage × max shader stages per pipeline), where:
  - max bindings per shader stage is (maxSampledTexturesPerShaderStage + maxSamplersPerShaderStage + maxStorageBuffersPerShaderStage + maxStorageTexturesPerShaderStage + maxUniformBuffersPerShaderStage).
  - max shader stages per pipeline is 2, because a GPURenderPipeline supports both a vertex and fragment shader.
  Note: maxBindingsPerBindGroup does not reflect a fundamental limit; implementations should raise it to conform to this requirement, rather than lowering the other limits.
- maxBindGroups must be ≤ maxBindGroupsPlusVertexBuffers.
- maxVertexBuffers must be ≤ maxBindGroupsPlusVertexBuffers.
- minUniformBufferOffsetAlignment and minStorageBufferOffsetAlignment must both be ≥ 32 bytes.
  Note: 32 bytes would be the alignment of vec4<f64>. See WebGPU Shading Language § 13.4.1 Alignment and Size.
- maxUniformBufferBindingSize must be ≤ maxBufferSize.
- maxStorageBufferBindingSize must be ≤ maxBufferSize.
- maxStorageBufferBindingSize must be a multiple of 4 bytes.
- maxVertexBufferArrayStride must be a multiple of 4 bytes.
- maxComputeWorkgroupSizeX must be ≤ maxComputeInvocationsPerWorkgroup.
- maxComputeWorkgroupSizeY must be ≤ maxComputeInvocationsPerWorkgroup.
- maxComputeWorkgroupSizeZ must be ≤ maxComputeInvocationsPerWorkgroup.
- maxComputeInvocationsPerWorkgroup must be ≤ maxComputeWorkgroupSizeX × maxComputeWorkgroupSizeY × maxComputeWorkgroupSizeZ.
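The compute-limit arithmetic in the last four guarantees can be expressed as a simple checker. This sketch (the helper is ours, not part of the API) validates a plain limits object, such as one read from adapter.limits:

```javascript
// Sketch: check the compute-related Adapter Capability Guarantees against a
// plain object of limits.
function checkComputeLimits(limits) {
  const {
    maxComputeWorkgroupSizeX: x,
    maxComputeWorkgroupSizeY: y,
    maxComputeWorkgroupSizeZ: z,
    maxComputeInvocationsPerWorkgroup: inv,
  } = limits;
  return x <= inv && y <= inv && z <= inv && inv <= x * y * z;
}

// The spec's default limit values satisfy the guarantees:
checkComputeLimits({
  maxComputeWorkgroupSizeX: 256,
  maxComputeWorkgroupSizeY: 256,
  maxComputeWorkgroupSizeZ: 64,
  maxComputeInvocationsPerWorkgroup: 256,
}); // → true
```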
4.2.2. Adapter Selection
GPURequestAdapterOptions provides hints to the user agent indicating what
configuration is suitable for the application.
dictionary GPURequestAdapterOptions {
    GPUPowerPreference powerPreference;
    boolean forceFallbackAdapter = false;
};

enum GPUPowerPreference {
    "low-power",
    "high-performance",
};
GPURequestAdapterOptions has the following members:
powerPreference, of type GPUPowerPreference
    Optionally provides a hint indicating what class of adapter should be selected from the system's available adapters.

    The value of this hint may influence which adapter is chosen, but it must not influence whether an adapter is returned or not.

    Note: The primary utility of this hint is to influence which GPU is used in a multi-GPU system. For instance, some laptops have a low-power integrated GPU and a high-performance discrete GPU. This hint may also affect the power configuration of the selected GPU to match the requested power preference.

    Note: Depending on the exact hardware configuration, such as battery status and attached displays or removable GPUs, the user agent may select different adapters given the same power preference. Typically, given the same hardware configuration and state and powerPreference, the user agent is likely to select the same adapter.

    It must be one of the following values:

    undefined (or not present)
        Provides no hint to the user agent.

    "low-power"
        Indicates a request to prioritize power savings over performance.

        Note: Generally, content should use this if it is unlikely to be constrained by drawing performance; for example, if it renders only one frame per second, draws only relatively simple geometry with simple shaders, or uses a small HTML canvas element. Developers are encouraged to use this value if their content allows, since it may significantly improve battery life on portable devices.

    "high-performance"
        Indicates a request to prioritize performance over power consumption.

        Note: By choosing this value, developers should be aware that, for devices created on the resulting adapter, user agents are more likely to force device loss, in order to save power by switching to a lower-power adapter. Developers are encouraged to only specify this value if they believe it is absolutely necessary, since it may significantly decrease battery life on portable devices.
forceFallbackAdapter, of type boolean, defaulting to false
    When set to true, indicates that only a fallback adapter may be returned. If the user agent does not support a fallback adapter, this will cause requestAdapter() to resolve to null.

    Note: requestAdapter() may still return a fallback adapter if forceFallbackAdapter is set to false and either no other appropriate adapter is available or the user agent chooses to return a fallback adapter. Developers that wish to prevent their applications from running on fallback adapters should check the GPUAdapter.isFallbackAdapter attribute prior to requesting a GPUDevice.
Requesting a "high-performance" GPUAdapter:

const gpuAdapter = await navigator.gpu.requestAdapter({
    powerPreference: 'high-performance'
});
4.3. GPUAdapter
A GPUAdapter encapsulates an adapter,
and describes its capabilities (features and limits).
To get a GPUAdapter, use requestAdapter().
[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUAdapter {
    [SameObject] readonly attribute GPUSupportedFeatures features;
    [SameObject] readonly attribute GPUSupportedLimits limits;
    readonly attribute boolean isFallbackAdapter;
    Promise<GPUDevice> requestDevice(optional GPUDeviceDescriptor descriptor = {});
    Promise<GPUAdapterInfo> requestAdapterInfo();
};
GPUAdapter has the following attributes:
features, of type GPUSupportedFeatures, readonly
    The set of values in this.[[adapter]].[[features]].

limits, of type GPUSupportedLimits, readonly
    The limits in this.[[adapter]].[[limits]].

isFallbackAdapter, of type boolean, readonly
    Returns the value of [[adapter]].[[fallback]].
GPUAdapter has the following internal slots:
[[adapter]], of type adapter, readonly
    The adapter to which this GPUAdapter refers.
GPUAdapter has the following methods:
requestDevice(descriptor)
    Requests a device from the adapter.

    This is a one-time action: if a device is returned successfully, the adapter becomes invalid.

    Called on: GPUAdapter this.

    Arguments for the GPUAdapter.requestDevice(descriptor) method:
        descriptor : GPUDeviceDescriptor (optional) — Description of the GPUDevice to request.

    Content timeline steps:
    1. Let contentTimeline be the current Content timeline.
    2. Let promise be a new promise.
    3. Let adapter be this.[[adapter]].
    4. Issue the initialization steps on the Device timeline of this.
    5. Return promise.

    Device timeline initialization steps:
    1. If any of the following requirements are unmet:
       - The set of values in descriptor.requiredFeatures must be a subset of those in adapter.[[features]].

       Then issue the following steps on contentTimeline and return:
       Content timeline steps:
       1. Reject promise with a TypeError.

       Note: This is the same error that is produced if a feature name isn’t known by the browser at all (in its GPUFeatureName definition). This converges the behavior when the browser doesn’t support a feature with the behavior when a particular adapter doesn’t support a feature.
    2. If any of the following requirements are unmet:
       - Each key in descriptor.requiredLimits must be the name of a member of supported limits.
       - For each limit name key in the keys of supported limits: Let value be descriptor.requiredLimits[key].
         - value must be no better than the value of that limit in adapter.[[limits]].
         - If the limit’s class is alignment, value must be a power of 2 less than 2^32.

       Then issue the following steps on contentTimeline and return:
       Content timeline steps:
       1. Reject promise with an OperationError.
    3. If adapter is invalid, or the user agent otherwise cannot fulfill the request:
       1. Let device be a new device.
       2. Lose the device(device, "unknown").
          Note: This makes adapter invalid, if it wasn’t already.
          Note: User agents should consider issuing developer-visible warnings in most or all cases when this occurs. Applications should perform reinitialization logic starting with requestAdapter().

       Otherwise:
       1. Let device be a new device with the capabilities described by descriptor.
       2. Make adapter.[[adapter]] invalid.
    4. Issue the subsequent steps on contentTimeline.
       Content timeline steps:
       1. Resolve promise with a new GPUDevice encapsulating device.
requestAdapterInfo()
    Requests the GPUAdapterInfo for this GPUAdapter.

    Note: Adapter info values are returned with a Promise to give user agents an opportunity to perform potentially long-running checks in the future.

    Called on: GPUAdapter this.

    Returns: Promise<GPUAdapterInfo>

    Content timeline steps:
    1. Let promise be a new promise.
    2. Let adapter be this.[[adapter]].
    3. Run the following steps in parallel:
       1. Resolve promise with a new adapter info for adapter.
    4. Return promise.
Requesting a GPUDevice with default features and limits:

const gpuAdapter = await navigator.gpu.requestAdapter();
const gpuDevice = await gpuAdapter.requestDevice();
4.3.1. GPUDeviceDescriptor
GPUDeviceDescriptor describes a device request.
dictionary GPUDeviceDescriptor : GPUObjectDescriptorBase {
    sequence<GPUFeatureName> requiredFeatures = [];
    record<DOMString, GPUSize64> requiredLimits = {};
    GPUQueueDescriptor defaultQueue = {};
};
GPUDeviceDescriptor has the following members:
requiredFeatures, of type sequence<GPUFeatureName>, defaulting to []
    Specifies the features that are required by the device request. The request will fail if the adapter cannot provide these features.

    Exactly the specified set of features, and no more or less, will be allowed in validation of API calls on the resulting device.

requiredLimits, of type record<DOMString, GPUSize64>, defaulting to {}
    Specifies the limits that are required by the device request. The request will fail if the adapter cannot provide these limits.

    Each key must be the name of a member of supported limits. Exactly the specified limits, and no better or worse, will be allowed in validation of API calls on the resulting device.

defaultQueue, of type GPUQueueDescriptor, defaulting to {}
    The descriptor for the default GPUQueue.
Requesting a GPUDevice with the "texture-compression-astc" feature if supported:

const gpuAdapter = await navigator.gpu.requestAdapter();

const requiredFeatures = [];
if (gpuAdapter.features.has('texture-compression-astc')) {
    requiredFeatures.push('texture-compression-astc');
}

const gpuDevice = await gpuAdapter.requestDevice({ requiredFeatures });
Requesting a GPUDevice with a higher maxColorAttachmentBytesPerSample limit:

const gpuAdapter = await navigator.gpu.requestAdapter();

if (gpuAdapter.limits.maxColorAttachmentBytesPerSample < 64) {
    // When the desired limit isn't supported, take action to either fall back to a code
    // path that does not require the higher limit or notify the user that their device
    // does not meet minimum requirements.
}

// Request higher limit of max color attachments bytes per sample.
const gpuDevice = await gpuAdapter.requestDevice({
    requiredLimits: { maxColorAttachmentBytesPerSample: 64 },
});
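When an application can benefit from a higher limit but does not strictly require it, a common pattern is to request the best supported value rather than a fixed one. A sketch (the helper name is ours), given the adapter's limits as a plain object:

```javascript
// Sketch: build a requiredLimits record that asks for up to `desired` of a
// "maximum"-class limit, but never more than the adapter supports.
function bestEffortLimit(adapterLimits, name, desired) {
  return { [name]: Math.min(desired, adapterLimits[name]) };
}

// e.g. with an adapter supporting only 32 bytes per sample:
bestEffortLimit({ maxColorAttachmentBytesPerSample: 32 },
                "maxColorAttachmentBytesPerSample", 64);
// → { maxColorAttachmentBytesPerSample: 32 }
```

Because the requested value never exceeds the adapter's supported value, requestDevice() will not reject for this limit.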
4.3.1.1. GPUFeatureName
Each GPUFeatureName identifies a set of functionality which, if available,
allows additional usages of WebGPU that would have otherwise been invalid.
enum GPUFeatureName {
    "depth-clip-control",
    "depth32float-stencil8",
    "texture-compression-bc",
    "texture-compression-etc2",
    "texture-compression-astc",
    "timestamp-query",
    "indirect-first-instance",
    "shader-f16",
    "rg11b10ufloat-renderable",
    "bgra8unorm-storage",
    "float32-filterable",
};
4.4. GPUDevice
A GPUDevice encapsulates a device and exposes
the functionality of that device.
GPUDevice is the top-level interface through which WebGPU interfaces are created.
To get a GPUDevice, use requestDevice().
[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUDevice : EventTarget {
    [SameObject] readonly attribute GPUSupportedFeatures features;
    [SameObject] readonly attribute GPUSupportedLimits limits;
    [SameObject] readonly attribute GPUQueue queue;

    undefined destroy();

    GPUBuffer createBuffer(GPUBufferDescriptor descriptor);
    GPUTexture createTexture(GPUTextureDescriptor descriptor);
    GPUSampler createSampler(optional GPUSamplerDescriptor descriptor = {});
    GPUExternalTexture importExternalTexture(GPUExternalTextureDescriptor descriptor);

    GPUBindGroupLayout createBindGroupLayout(GPUBindGroupLayoutDescriptor descriptor);
    GPUPipelineLayout createPipelineLayout(GPUPipelineLayoutDescriptor descriptor);
    GPUBindGroup createBindGroup(GPUBindGroupDescriptor descriptor);

    GPUShaderModule createShaderModule(GPUShaderModuleDescriptor descriptor);
    GPUComputePipeline createComputePipeline(GPUComputePipelineDescriptor descriptor);
    GPURenderPipeline createRenderPipeline(GPURenderPipelineDescriptor descriptor);
    Promise<GPUComputePipeline> createComputePipelineAsync(GPUComputePipelineDescriptor descriptor);
    Promise<GPURenderPipeline> createRenderPipelineAsync(GPURenderPipelineDescriptor descriptor);

    GPUCommandEncoder createCommandEncoder(optional GPUCommandEncoderDescriptor descriptor = {});
    GPURenderBundleEncoder createRenderBundleEncoder(GPURenderBundleEncoderDescriptor descriptor);

    GPUQuerySet createQuerySet(GPUQuerySetDescriptor descriptor);
};
GPUDevice includes GPUObjectBase;
GPUDevice has the following attributes:
features, of type GPUSupportedFeatures, readonly
    A set containing the GPUFeatureName values of the features supported by the device (i.e. the ones with which it was created).

limits, of type GPUSupportedLimits, readonly
    Exposes the limits supported by the device (which are exactly the ones with which it was created).

queue, of type GPUQueue, readonly
    The primary GPUQueue for this device.
The [[device]] for a GPUDevice is the device that the GPUDevice refers
to.
GPUDevice has the methods listed in its WebIDL definition above.
Those not defined here are defined elsewhere in this document.
destroy()
    Destroys the device, preventing further operations on it. Outstanding asynchronous operations will fail.

    Note: It is valid to destroy a device multiple times.

    Called on: GPUDevice this.

    Content timeline steps:
    1. Issue the subsequent steps on the Device timeline of this.

    Device timeline steps:
    1. Once all currently-enqueued operations on any queue on this device are completed, issue the subsequent steps on the current timeline.
    2. Lose the device(this.[[device]], "destroyed").

    Note: Since no further operations can be enqueued on this device, implementations can abort outstanding asynchronous operations immediately and free resource allocations, including mapped memory that was just unmapped.
GPUDevice's allowed buffer usages are:

- Always allowed: MAP_READ, MAP_WRITE, COPY_SRC, COPY_DST, INDEX, VERTEX, UNIFORM, STORAGE, INDIRECT, QUERY_RESOLVE

GPUDevice's allowed texture usages are:

- Always allowed: COPY_SRC, COPY_DST, TEXTURE_BINDING, STORAGE_BINDING, RENDER_ATTACHMENT
4.5. Example
Requesting a GPUAdapter and GPUDevice with error handling:

let gpuDevice = null;

async function initializeWebGPU() {
    // Check to ensure the user agent supports WebGPU.
    if (!('gpu' in navigator)) {
        console.error("User agent doesn't support WebGPU.");
        return false;
    }

    // Request an adapter.
    const gpuAdapter = await navigator.gpu.requestAdapter();

    // requestAdapter may resolve with null if no suitable adapters are found.
    if (!gpuAdapter) {
        console.error('No WebGPU adapters found.');
        return false;
    }

    // Request a device.
    // Note that the promise will reject if invalid options are passed to the optional
    // dictionary. To avoid the promise rejecting always check any features and limits
    // against the adapters features and limits prior to calling requestDevice().
    gpuDevice = await gpuAdapter.requestDevice();

    // requestDevice will never return null, but if a valid device request can't be
    // fulfilled for some reason it may resolve to a device which has already been lost.
    // Additionally, devices can be lost at any time after creation for a variety of reasons
    // (ie: browser resource management, driver updates), so it's a good idea to always
    // handle lost devices gracefully.
    gpuDevice.lost.then((info) => {
        console.error(`WebGPU device was lost: ${info.message}`);

        gpuDevice = null;

        // Many causes for lost devices are transient, so applications should try getting a
        // new device once a previous one has been lost unless the loss was caused by the
        // application intentionally destroying the device. Note that any WebGPU resources
        // created with the previous device (buffers, textures, etc) will need to be
        // re-created with the new one.
        if (info.reason != 'destroyed') {
            initializeWebGPU();
        }
    });

    onWebGPUInitialized();

    return true;
}

function onWebGPUInitialized() {
    // Begin creating WebGPU resources here...
}

initializeWebGPU();
5. Buffers
5.1. GPUBuffer
A GPUBuffer represents a block of memory that can be used in GPU operations.
Data is stored in linear layout, meaning that each byte of the allocation can be
addressed by its offset from the start of the GPUBuffer, subject to alignment
restrictions depending on the operation. Some GPUBuffers can be
mapped which makes the block of memory accessible via an ArrayBuffer called
its mapping.
GPUBuffers are created via createBuffer().
Buffers may be mappedAtCreation.
[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUBuffer {
    readonly attribute GPUSize64Out size;
    readonly attribute GPUFlagsConstant usage;

    readonly attribute GPUBufferMapState mapState;

    Promise<undefined> mapAsync(GPUMapModeFlags mode, optional GPUSize64 offset = 0, optional GPUSize64 size);
    ArrayBuffer getMappedRange(optional GPUSize64 offset = 0, optional GPUSize64 size);
    undefined unmap();

    undefined destroy();
};
GPUBuffer includes GPUObjectBase;

enum GPUBufferMapState {
    "unmapped",
    "pending",
    "mapped",
};
GPUBuffer has the following immutable properties:
size, of type GPUSize64Out, readonly
    The length of the GPUBuffer allocation in bytes.

usage, of type GPUFlagsConstant, readonly
    The allowed usages for this GPUBuffer.

[[internals]], of type buffer internals, readonly, override
GPUBuffer has the following content timeline properties:
mapState, of type GPUBufferMapState, readonly
    The current GPUBufferMapState of the buffer:

    "unmapped"
        The buffer is not mapped for use by this.getMappedRange().
    "pending"
        A mapping of the buffer has been requested, but is pending. It may succeed, or fail validation in mapAsync().
    "mapped"
        The buffer is mapped and this.getMappedRange() may be used.

    The getter steps are:

    Content timeline steps:
    1. If this.[[mapping]] is not null, return "mapped".
    2. If this.[[pending_map]] is not null, return "pending".
    3. Return "unmapped".
[[pending_map]], of type Promise<void> or null, initially null
    The Promise returned by the currently-pending mapAsync() call.

    There is never more than one pending map, because mapAsync() will refuse immediately if a request is already in flight.

[[mapping]], of type active buffer mapping or null, initially null
    Set if and only if the buffer is currently mapped for use by getMappedRange(). Null otherwise (even if there is a [[pending_map]]).

An active buffer mapping is a structure with the following fields:

data, of type Data Block
    The mapping for this GPUBuffer. This data is accessed through ArrayBuffers which are views onto this data, returned by getMappedRange() and stored in views.

mode, of type GPUMapModeFlags
    The GPUMapModeFlags of the map, as specified in the corresponding call to mapAsync() or createBuffer().

range, of type tuple [unsigned long long, unsigned long long]
    The range of this GPUBuffer that is mapped.

views, of type list<ArrayBuffer>
    The ArrayBuffers returned via getMappedRange() to the application. They are tracked so they can be detached when unmap() is called.

To initialize an active buffer mapping with mode mode and range range:

1. Let size be range[1] - range[0].
2. Let data be ? CreateByteDataBlock(size).
   Note: This may result in a RangeError being thrown. For consistency and predictability:
   - For any size at which new ArrayBuffer() would succeed at a given moment, this allocation should succeed at that moment.
   - For any size at which new ArrayBuffer() deterministically throws a RangeError, this allocation should as well.
3. Return an active buffer mapping with:
   - data set to data
   - mode set to mode
   - range set to range
   - views set to []
GPUBuffer's internal object is buffer internals, which extends internal object with the following device timeline slots:

state
    The current internal state of the buffer:

    "available"
        The buffer may be used in queue operations (unless it is invalid).
    "unavailable"
        The buffer may not be used in queue operations due to being mapped.
    "destroyed"
        The buffer may not be used in any operations due to being destroyed.
5.1.1. GPUBufferDescriptor
dictionary GPUBufferDescriptor : GPUObjectDescriptorBase {
    required GPUSize64 size;
    required GPUBufferUsageFlags usage;
    boolean mappedAtCreation = false;
};
GPUBufferDescriptor has the following members:
size, of type GPUSize64
    The size of the buffer in bytes.

usage, of type GPUBufferUsageFlags
    The allowed usages for the buffer.

mappedAtCreation, of type boolean, defaulting to false
    If true, creates the buffer in an already mapped state, allowing getMappedRange() to be called immediately. It is valid to set mappedAtCreation to true even if usage does not contain MAP_READ or MAP_WRITE. This can be used to set the buffer's initial data.

    Guarantees that even if the buffer creation eventually fails, it will still appear as if the mapped range can be written/read to until it is unmapped.
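The mappedAtCreation path for uploading initial data can be sketched as follows. createVertexBuffer is our own helper name; in a real page, device comes from requestDevice():

```javascript
// Sketch: create a buffer with its initial contents set at creation time.
// mappedAtCreation: true lets us fill the buffer before any GPU use, even
// though the usage flags do not include MAP_WRITE.
function createVertexBuffer(device, data) {
  const buffer = device.createBuffer({
    size: data.byteLength,
    usage: GPUBufferUsage.VERTEX | GPUBufferUsage.COPY_DST,
    mappedAtCreation: true,
  });
  new Float32Array(buffer.getMappedRange()).set(data);
  buffer.unmap(); // hand the contents over to the GPU
  return buffer;
}
```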
5.1.2. Buffer Usages
typedef [EnforceRange] unsigned long GPUBufferUsageFlags;

[Exposed=(Window, DedicatedWorker), SecureContext]
namespace GPUBufferUsage {
    const GPUFlagsConstant MAP_READ      = 0x0001;
    const GPUFlagsConstant MAP_WRITE     = 0x0002;
    const GPUFlagsConstant COPY_SRC      = 0x0004;
    const GPUFlagsConstant COPY_DST      = 0x0008;
    const GPUFlagsConstant INDEX         = 0x0010;
    const GPUFlagsConstant VERTEX        = 0x0020;
    const GPUFlagsConstant UNIFORM       = 0x0040;
    const GPUFlagsConstant STORAGE       = 0x0080;
    const GPUFlagsConstant INDIRECT      = 0x0100;
    const GPUFlagsConstant QUERY_RESOLVE = 0x0200;
};
The GPUBufferUsage flags determine how a GPUBuffer may be used after its creation:
MAP_READ
    The buffer can be mapped for reading. (Example: calling mapAsync() with GPUMapMode.READ.)

    May only be combined with COPY_DST.

MAP_WRITE
    The buffer can be mapped for writing. (Example: calling mapAsync() with GPUMapMode.WRITE.)

    May only be combined with COPY_SRC.

COPY_SRC
    The buffer can be used as the source of a copy operation. (Examples: as the source argument of a copyBufferToBuffer() or copyBufferToTexture() call.)

COPY_DST
    The buffer can be used as the destination of a copy or write operation. (Examples: as the destination argument of a copyBufferToBuffer() or copyTextureToBuffer() call, or as the target of a writeBuffer() call.)

INDEX
    The buffer can be used as an index buffer. (Example: passed to setIndexBuffer().)

VERTEX
    The buffer can be used as a vertex buffer. (Example: passed to setVertexBuffer().)

UNIFORM
    The buffer can be used as a uniform buffer. (Example: as a bind group entry for a GPUBufferBindingLayout with a buffer.type of "uniform".)

STORAGE
    The buffer can be used as a storage buffer. (Example: as a bind group entry for a GPUBufferBindingLayout with a buffer.type of "storage" or "read-only-storage".)

INDIRECT
    The buffer can be used to store indirect command arguments. (Examples: as the indirectBuffer argument of a drawIndirect() or dispatchWorkgroupsIndirect() call.)

QUERY_RESOLVE
    The buffer can be used to capture query results. (Example: as the destination argument of a resolveQuerySet() call.)
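The MAP_READ/MAP_WRITE combination rules above can be expressed as a small validity check. This is an illustrative helper (ours, not part of the API), with the flag values from the GPUBufferUsage namespace:

```javascript
// Flag values mirror the GPUBufferUsage namespace.
const MAP_READ = 0x0001, MAP_WRITE = 0x0002, COPY_SRC = 0x0004, COPY_DST = 0x0008;

// Sketch: check the mapping-related usage restrictions:
// MAP_READ may only be combined with COPY_DST, MAP_WRITE only with COPY_SRC.
function isValidBufferUsage(usage) {
  if (usage === 0) return false; // usage must not be 0
  if (usage & MAP_READ && usage & ~(MAP_READ | COPY_DST)) return false;
  if (usage & MAP_WRITE && usage & ~(MAP_WRITE | COPY_SRC)) return false;
  return true;
}

isValidBufferUsage(MAP_READ | COPY_DST); // → true
isValidBufferUsage(MAP_READ | COPY_SRC); // → false
```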
5.1.3. Buffer Creation
createBuffer(descriptor)
Creates a GPUBuffer.
Called on: GPUDevice this.
Arguments:
Arguments for the GPUDevice.createBuffer(descriptor) method:
descriptor: GPUBufferDescriptor, not nullable, required. Description of the GPUBuffer to create.
Returns: GPUBuffer
Content timeline steps:
- Let [b, bi] be ! create a new WebGPU object(this, GPUBuffer, descriptor).
- If descriptor.mappedAtCreation is true:
  - Set b.[[mapping]] to ? initialize an active buffer mapping with mode WRITE and range [0, descriptor.size].
- Issue the initialization steps on the Device timeline of this.
- Return b.
Device timeline initialization steps:
- If any of the following requirements are unmet, generate a validation error, make bi invalid, and stop.
  - device must be valid.
  - descriptor.usage must not be 0.
  - descriptor.usage must be a subset of device’s allowed buffer usages.
  - descriptor.size must be ≤ device.[[limits]].maxBufferSize.
  - If descriptor.mappedAtCreation is true:
    - descriptor.size must be a multiple of 4.
- Note: If buffer creation fails, and descriptor.mappedAtCreation is false, any calls to mapAsync() will reject, so any resources allocated to enable mapping can and may be discarded or recycled.
- If descriptor.mappedAtCreation is true:
  - Set bi.state to "unavailable".
  Else:
  - Set bi.state to "available".
- Create a device allocation for bi where each byte is zero.
  If the allocation fails without side-effects, generate an out-of-memory error, make bi invalid, and return.
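The device-timeline validation above can be mirrored in plain JavaScript. This is a sketch for illustration, not the implementation: `limits.maxBufferSize` stands in for device.[[limits]].maxBufferSize, and the subset-of-allowed-usages check is omitted.

```javascript
// Illustrative mirror of createBuffer() device-timeline validation.
// Returns null if the descriptor passes, or a string naming the failure.
function validateBufferDescriptor(descriptor, limits) {
  if (descriptor.usage === 0) return "usage must not be 0";
  if (descriptor.size > limits.maxBufferSize) return "size exceeds maxBufferSize";
  if (descriptor.mappedAtCreation && descriptor.size % 4 !== 0) {
    return "mappedAtCreation requires size to be a multiple of 4";
  }
  return null; // valid
}
```

For example, a 128-byte UNIFORM buffer passes, while a 6-byte buffer with mappedAtCreation set fails the multiple-of-4 rule.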
const buffer = gpuDevice.createBuffer({
  size: 128,
  usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST,
});
5.1.4. Buffer Destruction
An application that no longer requires a GPUBuffer can choose to lose
access to it before garbage collection by calling destroy(). Destroying a buffer also
unmaps it, freeing any memory allocated for the mapping.
Note: This allows the user agent to reclaim the GPU memory associated with the GPUBuffer once all previously submitted operations using it are complete.
destroy()
Destroys the GPUBuffer.
Note: It is valid to destroy a buffer multiple times.
Called on: GPUBuffer this.
Returns: undefined
Content timeline steps:
- Call this.unmap().
- Issue the subsequent steps on the Device timeline of this.[[device]].
Device timeline steps:
- Set this.[[internals]].state to "destroyed".
Note: Since no further operations can be enqueued using this buffer, implementations can free resource allocations, including mapped memory that was just unmapped.
-
5.2. Buffer Mapping
An application can request to map a GPUBuffer so that it can access the buffer’s
content via ArrayBuffers that represent parts of the GPUBuffer's
allocations. Mapping a GPUBuffer is requested asynchronously with mapAsync() so that the user agent can ensure the GPU
has finished using the GPUBuffer before the application can access its content.
A mapped GPUBuffer cannot be used by the GPU and must be unmapped using unmap() before
work using it can be submitted to the Queue timeline.
Once the GPUBuffer is mapped, the application can synchronously ask for access
to ranges of its content with getMappedRange().
The returned ArrayBuffer can only be detached by unmap() (directly, or via GPUBuffer.destroy() or GPUDevice.destroy()),
and cannot be transferred.
A TypeError is thrown by any other operation that attempts to do so.
typedef [EnforceRange] unsigned long GPUMapModeFlags;

[Exposed=(Window, DedicatedWorker), SecureContext]
namespace GPUMapMode {
    const GPUFlagsConstant READ  = 0x0001;
    const GPUFlagsConstant WRITE = 0x0002;
};
The GPUMapMode flags determine how a GPUBuffer is mapped when calling mapAsync():
READ
Only valid with buffers created with the MAP_READ usage.
Once the buffer is mapped, calls to getMappedRange() will return an ArrayBuffer containing the buffer’s current values. Changes to the returned ArrayBuffer will be discarded after unmap() is called.
WRITE
Only valid with buffers created with the MAP_WRITE usage.
Once the buffer is mapped, calls to getMappedRange() will return an ArrayBuffer containing the buffer’s current values. Changes to the returned ArrayBuffer will be stored in the GPUBuffer after unmap() is called.
Note: Since the MAP_WRITE buffer usage may only be combined with the COPY_SRC buffer usage, mapping for writing can never return values produced by the GPU, and the returned ArrayBuffer will only ever contain the default initialized data (zeros) or data written by the webpage during a previous mapping.
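The typical way these pieces fit together for reading GPU results back to JavaScript is a staging-buffer round trip. The sketch below assumes `device` is a valid GPUDevice and `srcBuffer` was created with COPY_SRC usage; `readbackBuffer` is an illustrative helper name, and the code runs only in a WebGPU-capable browser.

```javascript
// Sketch: reading GPU-produced data back to JavaScript via a staging buffer.
async function readbackBuffer(device, srcBuffer, size) {
  // A mappable staging buffer: MAP_READ may only be combined with COPY_DST.
  const staging = device.createBuffer({
    size,
    usage: GPUBufferUsage.MAP_READ | GPUBufferUsage.COPY_DST,
  });

  // Copy the GPU data into the staging buffer.
  const encoder = device.createCommandEncoder();
  encoder.copyBufferToBuffer(srcBuffer, 0, staging, 0, size);
  device.queue.submit([encoder.finish()]);

  // Resolves once the GPU is done with the buffer and it is mapped.
  await staging.mapAsync(GPUMapMode.READ);
  // Copy the bytes out, since the ArrayBuffer is detached by unmap().
  const data = new Uint8Array(staging.getMappedRange()).slice();
  staging.unmap();
  staging.destroy();
  return data;
}
```

The `.slice()` copy matters: after unmap() the ArrayBuffer returned by getMappedRange() is detached and its contents are no longer accessible.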
mapAsync(mode, offset, size)
Maps the given range of the GPUBuffer and resolves the returned Promise when the GPUBuffer's content is ready to be accessed with getMappedRange().
The resolution of the returned Promise only indicates that the buffer has been mapped. It does not guarantee the completion of any other operations visible to the content timeline, and in particular does not imply that any other Promise returned from onSubmittedWorkDone() or mapAsync() on other GPUBuffers has resolved.
The resolution of the Promise returned from onSubmittedWorkDone() does imply the completion of mapAsync() calls made prior to that call, on GPUBuffers last used exclusively on that queue.
Called on: GPUBuffer this.
Arguments:
Arguments for the GPUBuffer.mapAsync(mode, offset, size) method:
mode: GPUMapModeFlags, not nullable, required. Whether the buffer should be mapped for reading or writing.
offset: GPUSize64, not nullable, optional. Offset in bytes into the buffer to the start of the range to map.
size: GPUSize64, not nullable, optional. Size in bytes of the range to map.
Content timeline steps:
- Let contentTimeline be the current Content timeline.
- If this.[[pending_map]] is not null, return a promise rejected with an OperationError.
- Let p be a new Promise.
- Set this.[[pending_map]] to p.
- Issue the validation steps on the Device timeline of this.[[device]].
- Return p.
Device timeline validation steps:
- If size is undefined:
  - Let rangeSize be max(0, this.size − offset).
  Otherwise:
  - Let rangeSize be size.
- If any of the following conditions are unsatisfied:
  - this is valid.
  - this.[[internals]].state is "available".
  - mode contains exactly one of READ or WRITE.
  - If mode contains READ, this.usage contains MAP_READ.
  - If mode contains WRITE, this.usage contains MAP_WRITE.
  - offset is a multiple of 8.
  - rangeSize is a multiple of 4.
  - offset + rangeSize is ≤ this.size.
  Then:
  - Issue the map failure steps on contentTimeline.
  - Return.
- Set this.[[internals]].state to "unavailable".
  Note: Since the buffer is mapped, its contents cannot change between this completion and unmap().
- If this.[[device]] is lost, or when it becomes lost:
  - Issue the map failure steps on contentTimeline.
  Otherwise, at an unspecified point:
  - after the completion of currently-enqueued operations that use this,
  - and no later than the next device timeline operation after the device timeline becomes informed of the completion of all currently-enqueued operations (regardless of whether they use this),
  run the following steps:
  - Let internalStateAtCompletion be this.[[internals]].state.
    Note: If, and only if, at this point the buffer has become "available" again due to an unmap() call, then [[pending_map]] != p below, so mapping will not succeed in the steps below.
  - Let dataForMappedRegion be the contents of this starting at offset offset, for rangeSize bytes.
  - Issue the map success steps on the contentTimeline.
Content timeline map success steps:
- If this.[[pending_map]] != p:
  Note: The map has been cancelled by unmap().
  - Assert p is rejected.
  - Return.
- Assert p is pending.
- Assert internalStateAtCompletion is "unavailable".
- Let mapping be initialize an active buffer mapping with mode mode and range [offset, offset + rangeSize].
  If this allocation fails:
  - Set this.[[pending_map]] to null, and reject p with a RangeError.
  - Return.
- Set the content of mapping.data to dataForMappedRegion.
- Set this.[[mapping]] to mapping.
- Set this.[[pending_map]] to null, and resolve p.
Content timeline map failure steps:
- If this.[[pending_map]] != p:
  Note: The map has been cancelled by unmap().
  - Assert p is already rejected.
  - Return.
- Assert p is still pending.
- Set this.[[pending_map]] to null, and reject p with an OperationError.
getMappedRange(offset, size)
Returns an ArrayBuffer with the contents of the GPUBuffer in the given mapped range.
Called on: GPUBuffer this.
Arguments:
Arguments for the GPUBuffer.getMappedRange(offset, size) method:
offset: GPUSize64, not nullable, optional. Offset in bytes into the buffer to return buffer contents from.
size: GPUSize64, not nullable, optional. Size in bytes of the ArrayBuffer to return.
Returns: ArrayBuffer
Content timeline steps:
- If size is missing:
  - Let rangeSize be max(0, this.size − offset).
  Otherwise, let rangeSize be size.
- If any of the following conditions are unsatisfied, throw an OperationError and stop.
  - this.[[mapping]] is not null.
  - offset is a multiple of 8.
  - rangeSize is a multiple of 4.
  - offset ≥ this.[[mapping]].range[0].
  - offset + rangeSize ≤ this.[[mapping]].range[1].
  - [offset, offset + rangeSize) does not overlap another range in this.[[mapping]].views.
  Note: It is always valid to get mapped ranges of a GPUBuffer that is mappedAtCreation, even if it is invalid, because the Content timeline might not know it is invalid.
- Let data be this.[[mapping]].data.
- Let view be ! create an ArrayBuffer of size rangeSize, but with its pointer mutably referencing the content of data at offset (offset − [[mapping]].range[0]).
  Note: A RangeError may not be thrown here, because the data has already been allocated during mapAsync() or createBuffer().
- Set view.[[ArrayBufferDetachKey]] to "WebGPUBufferMapping".
  Note: This causes a TypeError to be thrown if an attempt is made to DetachArrayBuffer, except by unmap().
- Append view to this.[[mapping]].views.
- Return view.
Note: User agents should consider issuing a developer-visible warning if getMappedRange() succeeds without having checked the status of the map, by waiting for mapAsync() to succeed, querying a mapState of "mapped", or waiting for a later onSubmittedWorkDone() call to succeed.
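The alignment and range conditions above can be expressed compactly as a helper. This is an illustrative sketch, not the implementation; `mapping.range` plays the role of [[mapping]].range, and overlap with existing views is not checked.

```javascript
// Illustrative mirror of the getMappedRange() validation rules.
// Returns null if the request is valid, or a string naming the failure.
function validateGetMappedRange(mapping, offset, rangeSize) {
  if (mapping === null) return "buffer is not mapped";
  if (offset % 8 !== 0) return "offset must be a multiple of 8";
  if (rangeSize % 4 !== 0) return "size must be a multiple of 4";
  if (offset < mapping.range[0]) return "offset before mapped range";
  if (offset + rangeSize > mapping.range[1]) return "range extends past mapped range";
  return null; // valid (overlap with existing views not checked here)
}
```

For example, with a mapping over [0, 256], requesting offset 8 and size 16 is valid, while offset 4 fails the multiple-of-8 rule.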
unmap()
Unmaps the mapped range of the GPUBuffer and makes its contents available for use by the GPU again.
Called on: GPUBuffer this.
Returns: undefined
Content timeline steps:
- If this.[[pending_map]] is not null:
  - Reject this.[[pending_map]] with an AbortError.
  - Set this.[[pending_map]] to null.
- If this.[[mapping]] is null:
  - Return.
- For each ArrayBuffer ab in this.[[mapping]].views:
  - Perform DetachArrayBuffer(ab, "WebGPUBufferMapping").
- Let bufferUpdate be null.
- If this.[[mapping]].mode contains WRITE:
  - Set bufferUpdate to { data: this.[[mapping]].data, offset: this.[[mapping]].range[0] }.
  Note: When a buffer is mapped without the WRITE mode, then unmapped, any local modifications done by the application to the mapped ranges' ArrayBuffers are discarded and will not affect the content of later mappings.
- Set this.[[mapping]] to null.
- Issue the subsequent steps on the Device timeline of this.[[device]].
Device timeline steps:
- If this.[[device]] is invalid, return.
- If bufferUpdate is not null:
  - Issue the following steps on the Queue timeline of this.[[device]].queue:
    Queue timeline steps:
    - Update the contents of this at offset bufferUpdate.offset with the data bufferUpdate.data.
- Set this.[[internals]].state to "available".
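The complementary write direction, uploading initial data, is simplest with mappedAtCreation, which skips the mapAsync() step entirely. The sketch below assumes `device` is a valid GPUDevice and `data.byteLength` is a multiple of 4 (as mappedAtCreation requires); `createInitializedBuffer` is an illustrative name, and the code runs only in a WebGPU-capable browser.

```javascript
// Sketch: uploading initial data with mappedAtCreation.
function createInitializedBuffer(device, data /* Uint8Array, length % 4 === 0 */) {
  const buffer = device.createBuffer({
    size: data.byteLength,
    usage: GPUBufferUsage.VERTEX | GPUBufferUsage.COPY_DST,
    mappedAtCreation: true, // the buffer starts out mapped for writing
  });
  // getMappedRange() is valid immediately; no mapAsync() call is needed.
  new Uint8Array(buffer.getMappedRange()).set(data);
  buffer.unmap(); // commits the written bytes and detaches the ArrayBuffer
  return buffer;
}
```

For later updates of an already-created buffer, device.queue.writeBuffer() is usually more convenient than mapping.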
6. Textures and Texture Views
6.1. GPUTexture
One texture consists of one or more texture subresources,
each uniquely identified by a mipmap level and,
for 2d textures only, array layer and aspect.
A texture subresource is a subresource: each can be used in different internal usages within a single usage scope.
Each subresource in a mipmap level is approximately half the size,
in each spatial dimension, of the corresponding resource in the lesser level
(see logical miplevel-specific texture extent).
The subresource in level 0 has the dimensions of the texture itself.
These are typically used to represent levels of detail of a texture. GPUSampler and WGSL provide facilities for selecting and interpolating between levels of
detail, explicitly or automatically.
A "2d" texture may be an array of array layers.
Each subresource in a layer is the same size as the corresponding resources in other layers.
For non-2d textures, all subresources have an array layer index of 0.
Each subresource has an aspect.
Color textures have just one aspect: color. Depth-or-stencil format textures may have multiple aspects:
a depth aspect,
a stencil aspect, or both, and may be used in special ways, such as in depthStencilAttachment and in "depth" bindings.
A "3d" texture may have multiple slices, each being the
two-dimensional image at a particular z value in the texture.
Slices are not separate subresources.
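The halving relationship between mip levels described above can be made concrete with a small sketch. This is illustrative only (the name `mipChainExtents` is not from the spec); it assumes a "2d" texture, where array layers are not halved per level.

```javascript
// Sketch: extents of the full mipmap chain of a "2d" texture.
// Each level halves width and height (clamped to 1); layers are unchanged.
function mipChainExtents(width, height, arrayLayers) {
  const levels = Math.floor(Math.log2(Math.max(width, height))) + 1;
  const chain = [];
  for (let level = 0; level < levels; level++) {
    chain.push({
      width: Math.max(1, width >> level),
      height: Math.max(1, height >> level),
      depthOrArrayLayers: arrayLayers, // same for every level of a "2d" texture
    });
  }
  return chain;
}
```

A 16×8 texture with 6 layers has 5 levels, from 16×8×6 at level 0 down to 1×1×6 at level 4.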
[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUTexture {
    GPUTextureView createView(optional GPUTextureViewDescriptor descriptor = {});
    undefined destroy();

    readonly attribute GPUIntegerCoordinateOut width;
    readonly attribute GPUIntegerCoordinateOut height;
    readonly attribute GPUIntegerCoordinateOut depthOrArrayLayers;
    readonly attribute GPUIntegerCoordinateOut mipLevelCount;
    readonly attribute GPUSize32Out sampleCount;
    readonly attribute GPUTextureDimension dimension;
    readonly attribute GPUTextureFormat format;
    readonly attribute GPUFlagsConstant usage;
};
GPUTexture includes GPUObjectBase;
GPUTexture has the following attributes:
width, of type GPUIntegerCoordinateOut, readonly
The width of this GPUTexture.
height, of type GPUIntegerCoordinateOut, readonly
The height of this GPUTexture.
depthOrArrayLayers, of type GPUIntegerCoordinateOut, readonly
The depth or layer count of this GPUTexture.
mipLevelCount, of type GPUIntegerCoordinateOut, readonly
The number of mip levels of this GPUTexture.
sampleCount, of type GPUSize32Out, readonly
The sample count of this GPUTexture.
dimension, of type GPUTextureDimension, readonly
The dimension of the set of texels for each of this GPUTexture's subresources.
format, of type GPUTextureFormat, readonly
The format of this GPUTexture.
usage, of type GPUFlagsConstant, readonly
The allowed usages for this GPUTexture.
GPUTexture has the following internal slots:
[[size]], of type GPUExtent3D
The size of the texture (same as the width, height, and depthOrArrayLayers attributes).
[[viewFormats]], of type sequence<GPUTextureFormat>
The set of GPUTextureFormats that can be used as GPUTextureViewDescriptor.format when creating views on this GPUTexture.
[[destroyed]], of type boolean, initially false
If the texture is destroyed, it can no longer be used in any operation, and its underlying memory can be freed.
compute render extent(baseSize, mipLevel)
Arguments:
- GPUExtent3D baseSize
- GPUSize32 mipLevel
Returns: GPUExtent3DDict
- Let extent be a new GPUExtent3DDict object.
- Set extent.width to max(1, baseSize.width ≫ mipLevel).
- Set extent.height to max(1, baseSize.height ≫ mipLevel).
- Set extent.depthOrArrayLayers to 1.
- Return extent.
The logical miplevel-specific texture extent of a texture is the size of the texture in texels at a specific miplevel. It is calculated by this procedure:
Arguments:
- GPUTextureDescriptor descriptor
- GPUSize32 mipLevel
Returns: GPUExtent3DDict
- Let extent be a new GPUExtent3DDict object.
- If descriptor.dimension is:
  "1d"
  - Set extent.width to max(1, descriptor.size.width ≫ mipLevel).
  - Set extent.height to 1.
  - Set extent.depthOrArrayLayers to 1.
  "2d"
  - Set extent.width to max(1, descriptor.size.width ≫ mipLevel).
  - Set extent.height to max(1, descriptor.size.height ≫ mipLevel).
  - Set extent.depthOrArrayLayers to descriptor.size.depthOrArrayLayers.
  "3d"
  - Set extent.width to max(1, descriptor.size.width ≫ mipLevel).
  - Set extent.height to max(1, descriptor.size.height ≫ mipLevel).
  - Set extent.depthOrArrayLayers to max(1, descriptor.size.depthOrArrayLayers ≫ mipLevel).
- Return extent.
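The procedure above can be transcribed directly into JavaScript. This is an illustrative sketch; `descriptor` is a plain object shaped like a GPUTextureDescriptor.

```javascript
// Sketch: logical miplevel-specific texture extent.
function logicalMiplevelExtent(descriptor, mipLevel) {
  const { dimension = "2d", size } = descriptor;
  const extent = {
    width: Math.max(1, size.width >> mipLevel),
    height: 1,
    depthOrArrayLayers: 1,
  };
  if (dimension === "1d") return extent;
  extent.height = Math.max(1, size.height >> mipLevel);
  if (dimension === "2d") {
    extent.depthOrArrayLayers = size.depthOrArrayLayers; // layers are not mipmapped
  } else { // "3d"
    extent.depthOrArrayLayers = Math.max(1, size.depthOrArrayLayers >> mipLevel);
  }
  return extent;
}
```

Note the "2d" vs "3d" asymmetry: array layers of a "2d" texture keep their count at every level, while the depth of a "3d" texture halves like the other dimensions.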
The physical miplevel-specific texture extent of a texture is the size of the texture in texels at a specific miplevel that includes the possible extra padding to form complete texel blocks in the texture. It is calculated by this procedure:
Arguments:
- GPUTextureDescriptor descriptor
- GPUSize32 mipLevel
Returns: GPUExtent3DDict
- Let extent be a new GPUExtent3DDict object.
- Let logicalExtent be logical miplevel-specific texture extent(descriptor, mipLevel).
- If descriptor.dimension is:
  "1d"
  - Set extent.width to logicalExtent.width rounded up to the nearest multiple of descriptor’s texel block width.
  - Set extent.height to 1.
  - Set extent.depthOrArrayLayers to 1.
  "2d"
  - Set extent.width to logicalExtent.width rounded up to the nearest multiple of descriptor’s texel block width.
  - Set extent.height to logicalExtent.height rounded up to the nearest multiple of descriptor’s texel block height.
  - Set extent.depthOrArrayLayers to logicalExtent.depthOrArrayLayers.
  "3d"
  - Set extent.width to logicalExtent.width rounded up to the nearest multiple of descriptor’s texel block width.
  - Set extent.height to logicalExtent.height rounded up to the nearest multiple of descriptor’s texel block height.
  - Set extent.depthOrArrayLayers to logicalExtent.depthOrArrayLayers.
- Return extent.
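For a "2d" texture the physical extent is just the logical extent rounded up to whole texel blocks. The sketch below is illustrative; `blockWidth`/`blockHeight` are the format's texel block dimensions (1×1 for pixel-based formats, 4×4 for BC compressed formats, for example).

```javascript
// Sketch: physical miplevel-specific texture extent for a "2d" texture.
function physicalMiplevelExtent2D(size, mipLevel, blockWidth, blockHeight) {
  const logicalWidth = Math.max(1, size.width >> mipLevel);
  const logicalHeight = Math.max(1, size.height >> mipLevel);
  return {
    // Round each dimension up to the nearest whole texel block.
    width: Math.ceil(logicalWidth / blockWidth) * blockWidth,
    height: Math.ceil(logicalHeight / blockHeight) * blockHeight,
    depthOrArrayLayers: size.depthOrArrayLayers,
  };
}
```

For instance, mip 2 of a 20×12 BC-compressed texture is logically 5×3 texels but physically 8×4, padded to complete 4×4 blocks.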
6.1.1. GPUTextureDescriptor
dictionary GPUTextureDescriptor : GPUObjectDescriptorBase {
    required GPUExtent3D size;
    GPUIntegerCoordinate mipLevelCount = 1;
    GPUSize32 sampleCount = 1;
    GPUTextureDimension dimension = "2d";
    required GPUTextureFormat format;
    required GPUTextureUsageFlags usage;
    sequence<GPUTextureFormat> viewFormats = [];
};
GPUTextureDescriptor has the following members:
size, of type GPUExtent3D
The width, height, and depth or layer count of the texture.
mipLevelCount, of type GPUIntegerCoordinate, defaulting to 1
The number of mip levels the texture will contain.
sampleCount, of type GPUSize32, defaulting to 1
The sample count of the texture. A sampleCount > 1 indicates a multisampled texture.
dimension, of type GPUTextureDimension, defaulting to "2d"
Whether the texture is one-dimensional, an array of two-dimensional layers, or three-dimensional.
format, of type GPUTextureFormat
The format of the texture.
usage, of type GPUTextureUsageFlags
The allowed usages for the texture.
viewFormats, of type sequence<GPUTextureFormat>, defaulting to []
Specifies what view format values will be allowed when calling createView() on this texture (in addition to the texture’s actual format).
NOTE: Adding a format to this list may have a significant performance impact, so it is best to avoid adding formats unnecessarily. The actual performance impact is highly dependent on the target system; developers must test various systems to find out the impact on their particular application. For example, on some systems any texture with a format or viewFormats entry including "rgba8unorm-srgb" will perform less optimally than a "rgba8unorm" texture which does not. Similar caveats exist for other formats and pairs of formats on other systems.
Formats in this list must be texture view format compatible with the texture format.
Two GPUTextureFormats format and viewFormat are texture view format compatible if:
- format equals viewFormat, or
- format and viewFormat differ only in whether they are srgb formats (have the -srgb suffix).
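The compatibility rule reduces to a suffix comparison, which can be sketched as follows (illustrative helper, not part of the API):

```javascript
// Sketch: "texture view format compatible" check, i.e. equal formats or
// formats differing only in the -srgb suffix.
function textureViewFormatCompatible(format, viewFormat) {
  if (format === viewFormat) return true;
  const strip = (f) => (f.endsWith("-srgb") ? f.slice(0, -"-srgb".length) : f);
  return strip(format) === strip(viewFormat);
}
```

So "rgba8unorm" and "rgba8unorm-srgb" are compatible, while "rgba8unorm" and "bgra8unorm" are not.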
enum GPUTextureDimension {
    "1d",
    "2d",
    "3d",
};
"1d"
Specifies a texture that has one dimension, width.
"2d"
Specifies a texture that has a width and height, and may have layers. Only "2d" textures may have mipmaps, be multisampled, use a compressed or depth/stencil format, and be used as a render attachment.
"3d"
Specifies a texture that has a width, height, and depth.
6.1.2. Texture Usages
typedef [EnforceRange] unsigned long GPUTextureUsageFlags;

[Exposed=(Window, DedicatedWorker), SecureContext]
namespace GPUTextureUsage {
    const GPUFlagsConstant COPY_SRC          = 0x01;
    const GPUFlagsConstant COPY_DST          = 0x02;
    const GPUFlagsConstant TEXTURE_BINDING   = 0x04;
    const GPUFlagsConstant STORAGE_BINDING   = 0x08;
    const GPUFlagsConstant RENDER_ATTACHMENT = 0x10;
};

The GPUTextureUsage flags determine how a GPUTexture may be used after its creation:
COPY_SRC
The texture can be used as the source of a copy operation. (Examples: as the source argument of a copyTextureToTexture() or copyTextureToBuffer() call.)
COPY_DST
The texture can be used as the destination of a copy or write operation. (Examples: as the destination argument of a copyTextureToTexture() or copyBufferToTexture() call, or as the target of a writeTexture() call.)
TEXTURE_BINDING
The texture can be bound for use as a sampled texture in a shader. (Example: as a bind group entry for a GPUTextureBindingLayout.)
STORAGE_BINDING
The texture can be bound for use as a storage texture in a shader. (Example: as a bind group entry for a GPUStorageTextureBindingLayout.)
RENDER_ATTACHMENT
The texture can be used as a color or depth/stencil attachment in a render pass. (Example: as a GPURenderPassColorAttachment.view or GPURenderPassDepthStencilAttachment.view.)
6.1.3. Texture Creation
createTexture(descriptor)
Creates a GPUTexture.
Called on: GPUDevice this.
Arguments:
Arguments for the GPUDevice.createTexture(descriptor) method:
descriptor: GPUTextureDescriptor, not nullable, required. Description of the GPUTexture to create.
Returns: GPUTexture
Content timeline steps:
- ? validate GPUExtent3D shape(descriptor.size).
- ? Validate texture format required features of descriptor.format with this.[[device]].
- ? Validate texture format required features of each element of descriptor.viewFormats with this.[[device]].
- Let t be a new GPUTexture object.
- Set t.depthOrArrayLayers to descriptor.size.depthOrArrayLayers.
- Set t.mipLevelCount to descriptor.mipLevelCount.
- Set t.sampleCount to descriptor.sampleCount.
- Issue the initialization steps on the Device timeline of this.
- Return t.
Device timeline initialization steps:
- If any of the following conditions are unsatisfied, generate a validation error, make t invalid, and stop.
  - validating GPUTextureDescriptor(this, descriptor) returns true.
- Set t.[[viewFormats]] to descriptor.viewFormats.

validating GPUTextureDescriptor(GPUDevice this, GPUTextureDescriptor descriptor):
Return true if all of the following requirements are met, and false otherwise:
- descriptor.usage must not be 0.
- descriptor.usage must contain only bits present in this’s allowed texture usages.
- descriptor.size.width, descriptor.size.height, and descriptor.size.depthOrArrayLayers must be > zero.
- descriptor.mipLevelCount must be > zero.
- descriptor.sampleCount must be either 1 or 4.
- If descriptor.dimension is:
  "1d"
  - descriptor.size.width must be ≤ this.limits.maxTextureDimension1D.
  - descriptor.size.depthOrArrayLayers must be 1.
  - descriptor.sampleCount must be 1.
  - descriptor.format must not be a compressed format or depth-or-stencil format.
  "2d"
  - descriptor.size.width must be ≤ this.limits.maxTextureDimension2D.
  - descriptor.size.height must be ≤ this.limits.maxTextureDimension2D.
  - descriptor.size.depthOrArrayLayers must be ≤ this.limits.maxTextureArrayLayers.
  "3d"
  - descriptor.size.width must be ≤ this.limits.maxTextureDimension3D.
  - descriptor.size.height must be ≤ this.limits.maxTextureDimension3D.
  - descriptor.size.depthOrArrayLayers must be ≤ this.limits.maxTextureDimension3D.
  - descriptor.sampleCount must be 1.
  - descriptor.format must not be a compressed format or depth-or-stencil format.
- descriptor.size.width must be a multiple of the texel block width.
- descriptor.size.height must be a multiple of the texel block height.
- If descriptor.sampleCount > 1:
  - descriptor.mipLevelCount must be 1.
  - descriptor.size.depthOrArrayLayers must be 1.
  - descriptor.usage must not include the STORAGE_BINDING bit.
  - descriptor.usage must include the RENDER_ATTACHMENT bit.
  - descriptor.format must support multisampling according to § 26.1 Texture Format Capabilities.
- descriptor.mipLevelCount must be ≤ maximum mipLevel count(descriptor.dimension, descriptor.size).
- If descriptor.usage includes the RENDER_ATTACHMENT bit:
  - descriptor.format must be a renderable format.
- If descriptor.usage includes the STORAGE_BINDING bit:
  - descriptor.format must be listed in § 26.1.1 Plain color formats table with STORAGE_BINDING capability for the appropriate access mode.
- For each viewFormat in descriptor.viewFormats, descriptor.format and viewFormat must be texture view format compatible.
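A representative subset of these requirements for "2d" textures can be mirrored in plain JavaScript. This sketch is illustrative only: `limits` stands in for the device's supported limits, and format-specific rules (texel blocks, renderability, storage capability) are omitted.

```javascript
// Illustrative subset of "validating GPUTextureDescriptor" for "2d" textures.
function validate2DTextureDescriptor(descriptor, limits) {
  const { size, mipLevelCount = 1, sampleCount = 1, usage } = descriptor;
  if (usage === 0) return false;
  if (size.width <= 0 || size.height <= 0 || size.depthOrArrayLayers <= 0) return false;
  if (sampleCount !== 1 && sampleCount !== 4) return false;
  if (size.width > limits.maxTextureDimension2D) return false;
  if (size.height > limits.maxTextureDimension2D) return false;
  if (size.depthOrArrayLayers > limits.maxTextureArrayLayers) return false;
  // mipLevelCount must fit the full mip chain of the larger dimension.
  const maxMips = Math.floor(Math.log2(Math.max(size.width, size.height))) + 1;
  if (mipLevelCount <= 0 || mipLevelCount > maxMips) return false;
  if (sampleCount > 1 && (mipLevelCount !== 1 || size.depthOrArrayLayers !== 1)) return false;
  return true;
}
```

For example, a 16×16 texture admits at most 5 mip levels, so mipLevelCount: 6 fails.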
const texture = gpuDevice.createTexture({
  size: { width: 16, height: 16 },
  format: 'rgba8unorm',
  usage: GPUTextureUsage.TEXTURE_BINDING,
});
6.1.4. Texture Destruction
An application that no longer requires a GPUTexture can choose to lose access to it before
garbage collection by calling destroy().
Note: This allows the user agent to reclaim the GPU memory associated with the GPUTexture once
all previously submitted operations using it are complete.
destroy()
Destroys the GPUTexture.
Called on: GPUTexture this.
Returns: undefined
Content timeline steps:
- Set this.[[destroyed]] to true.
6.2. GPUTextureView
A GPUTextureView is a view onto some subset of the texture subresources defined by
a particular GPUTexture.
[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUTextureView {
};
GPUTextureView includes GPUObjectBase;
GPUTextureView has the following internal slots:
[[texture]]
The GPUTexture into which this is a view.
[[descriptor]]
The GPUTextureViewDescriptor describing this texture view.
All optional fields of GPUTextureViewDescriptor are defined.
[[renderExtent]]
For renderable views, this is the effective GPUExtent3DDict for rendering.
Note: this extent depends on the baseMipLevel.
The set of subresources of a texture view view, with [[descriptor]] desc,
is the subset of the subresources of view.[[texture]] for which each subresource s satisfies the following:
- The mipmap level of s is ≥ desc.baseMipLevel and < desc.baseMipLevel + desc.mipLevelCount.
- The array layer of s is ≥ desc.baseArrayLayer and < desc.baseArrayLayer + desc.arrayLayerCount.
- The aspect of s is in the set of aspects of desc.aspect.
Two GPUTextureView objects are texture-view-aliasing if and only if
their sets of subresources intersect.
6.2.1. Texture View Creation
dictionary GPUTextureViewDescriptor : GPUObjectDescriptorBase {
    GPUTextureFormat format;
    GPUTextureViewDimension dimension;
    GPUTextureAspect aspect = "all";
    GPUIntegerCoordinate baseMipLevel = 0;
    GPUIntegerCoordinate mipLevelCount;
    GPUIntegerCoordinate baseArrayLayer = 0;
    GPUIntegerCoordinate arrayLayerCount;
};
GPUTextureViewDescriptor has the following members:
format, of type GPUTextureFormat
The format of the texture view. Must be either the format of the texture or one of the viewFormats specified during its creation.
dimension, of type GPUTextureViewDimension
The dimension to view the texture as.
aspect, of type GPUTextureAspect, defaulting to "all"
Which aspect(s) of the texture are accessible to the texture view.
baseMipLevel, of type GPUIntegerCoordinate, defaulting to 0
The first (most detailed) mipmap level accessible to the texture view.
mipLevelCount, of type GPUIntegerCoordinate
How many mipmap levels, starting with baseMipLevel, are accessible to the texture view.
baseArrayLayer, of type GPUIntegerCoordinate, defaulting to 0
The index of the first array layer accessible to the texture view.
arrayLayerCount, of type GPUIntegerCoordinate
How many array layers, starting with baseArrayLayer, are accessible to the texture view.
enum GPUTextureViewDimension {
    "1d",
    "2d",
    "2d-array",
    "cube",
    "cube-array",
    "3d",
};
"1d"-
The texture is viewed as a 1-dimensional image.
Corresponding WGSL types:
-
texture_1d -
texture_storage_1d
-
"2d"-
The texture is viewed as a single 2-dimensional image.
Corresponding WGSL types:
-
texture_2d -
texture_storage_2d -
texture_multisampled_2d -
texture_depth_2d -
texture_depth_multisampled_2d
-
"2d-array"-
The texture is viewed as an array of 2-dimensional images.
Corresponding WGSL types:
-
texture_2d_array -
texture_storage_2d_array -
texture_depth_2d_array
-
"cube"-
The texture is viewed as a cubemap. The view has 6 array layers, corresponding to the [+X, -X, +Y, -Y, +Z, -Z] faces of the cube. Sampling is done seamlessly across the faces of the cubemap.
Corresponding WGSL types:
-
texture_cube -
texture_depth_cube
-
"cube-array"-
The texture is viewed as a packed array of
ncubemaps, each with 6 array layers corresponding to the [+X, -X, +Y, -Y, +Z, -Z] faces of the cube. Sampling is done seamlessly across the faces of the cubemaps.Corresponding WGSL types:
-
texture_cube_array -
texture_depth_cube_array
-
"3d"-
The texture is viewed as a 3-dimensional image.
Corresponding WGSL types:
-
texture_3d -
texture_storage_3d
-
Each GPUTextureAspect value corresponds to a set of aspects.
The set of aspects are defined for each value below.
enum GPUTextureAspect {"all" ,"stencil-only" ,"depth-only" , };
"all"-
All available aspects of the texture format will be accessible to the texture view. For color formats the color aspect will be accessible. For combined depth-stencil formats both the depth and stencil aspects will be accessible. Depth-or-stencil formats with a single aspect will only make that aspect accessible.
The set of aspects is [color, depth, stencil].
"stencil-only"-
Only the stencil aspect of a depth-or-stencil format will be accessible to the texture view.
The set of aspects is [stencil].
"depth-only"-
Only the depth aspect of a depth-or-stencil format will be accessible to the texture view.
The set of aspects is [depth].
createView(descriptor)
Creates a GPUTextureView.
NOTE: By default createView() will create a view with a dimension that can represent the entire texture. For example, calling createView() without specifying a dimension on a "2d" texture with more than one layer will create a "2d-array" GPUTextureView, even if an arrayLayerCount of 1 is specified.
For textures created from sources where the layer count is unknown at the time of development, it is recommended that calls to createView() are provided with an explicit dimension to ensure shader compatibility.
Called on: GPUTexture this.
Arguments:
Arguments for the GPUTexture.createView(descriptor) method:
descriptor: GPUTextureViewDescriptor, not nullable, optional. Description of the GPUTextureView to create.
Returns: view, of type GPUTextureView.
Content timeline steps:
- ? Validate texture format required features of descriptor.format with this.[[device]].
- Let view be a new GPUTextureView object.
- Issue the initialization steps on the Device timeline of this.
- Return view.
Device timeline initialization steps:
- Set descriptor to the result of resolving GPUTextureViewDescriptor defaults for this with descriptor.
- If any of the following conditions are unsatisfied, generate a validation error, make view invalid, and stop.
  - this is valid.
  - If descriptor.aspect is "all":
    - descriptor.format must equal either this.format or one of the formats in this.[[viewFormats]].
    Otherwise:
    - descriptor.format must equal the result of resolving GPUTextureAspect(this.format, descriptor.aspect).
  - descriptor.mipLevelCount must be > 0.
  - descriptor.baseMipLevel + descriptor.mipLevelCount must be ≤ this.mipLevelCount.
  - descriptor.arrayLayerCount must be > 0.
  - descriptor.baseArrayLayer + descriptor.arrayLayerCount must be ≤ the array layer count of this.
  - If this.sampleCount > 1, descriptor.dimension must be "2d".
  - If descriptor.dimension is:
    "1d"
    - descriptor.arrayLayerCount must be 1.
    "2d"
    - descriptor.arrayLayerCount must be 1.
    "2d-array"
    - No additional requirements.
    "cube"
    - descriptor.arrayLayerCount must be 6.
    "cube-array"
    - descriptor.arrayLayerCount must be a multiple of 6.
    "3d"
    - descriptor.arrayLayerCount must be 1.
- Set view.[[texture]] to this.
- Set view.[[descriptor]] to descriptor.
- If this.usage contains RENDER_ATTACHMENT:
  - Let renderExtent be compute render extent(this.[[size]], descriptor.baseMipLevel).
  - Set view.[[renderExtent]] to renderExtent.
To resolve GPUTextureViewDescriptor defaults for a GPUTexture texture with a GPUTextureViewDescriptor descriptor, run the following steps:
- Let resolved be a copy of descriptor.
- If resolved.mipLevelCount is not provided: set resolved.mipLevelCount to texture.mipLevelCount − resolved.baseMipLevel.
- If resolved.dimension is not provided and texture.dimension is:
  "1d"
  Set resolved.dimension to "1d".
  "2d"
  If the array layer count of texture is 1, set resolved.dimension to "2d". Otherwise, set resolved.dimension to "2d-array".
  "3d"
  Set resolved.dimension to "3d".
- If resolved.arrayLayerCount is not provided and resolved.dimension is:
  "1d", "2d", or "3d"
  Set resolved.arrayLayerCount to 1.
  "cube"
  Set resolved.arrayLayerCount to 6.
  "2d-array" or "cube-array"
  Set resolved.arrayLayerCount to the array layer count of texture − resolved.baseArrayLayer.
- Return resolved.
To determine the array layer count of a GPUTexture texture, run the
following steps:
- If texture.dimension is:
  "1d" or "3d"
  Return 1.
  "2d"
  Return texture.depthOrArrayLayers.
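The defaulting steps above can be sketched directly. This is an illustrative transcription, not the implementation; `texture` is a plain object carrying dimension, mipLevelCount, and depthOrArrayLayers.

```javascript
// Sketch: array layer count of a texture ("2d" textures only have layers).
function arrayLayerCountOf(texture) {
  return texture.dimension === "2d" ? texture.depthOrArrayLayers : 1;
}

// Sketch: resolving GPUTextureViewDescriptor defaults.
function resolveViewDefaults(texture, descriptor = {}) {
  const resolved = { baseMipLevel: 0, baseArrayLayer: 0, aspect: "all", ...descriptor };
  if (resolved.mipLevelCount === undefined) {
    resolved.mipLevelCount = texture.mipLevelCount - resolved.baseMipLevel;
  }
  if (resolved.dimension === undefined) {
    // A "2d" texture defaults to "2d" or "2d-array" based on its layer count.
    resolved.dimension =
      texture.dimension === "2d" && arrayLayerCountOf(texture) > 1
        ? "2d-array"
        : texture.dimension;
  }
  if (resolved.arrayLayerCount === undefined) {
    resolved.arrayLayerCount =
      resolved.dimension === "cube" ? 6 :
      resolved.dimension === "2d-array" || resolved.dimension === "cube-array"
        ? arrayLayerCountOf(texture) - resolved.baseArrayLayer
        : 1;
  }
  return resolved;
}
```

This reproduces the note from createView(): a "2d" texture with 8 layers and no explicit dimension resolves to a "2d-array" view covering all 8 layers.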
6.3. Texture Formats
The name of the format specifies the order of components, bits per component, and data type for the component.
-
r,g,b,a= red, green, blue, alpha -
unorm= unsigned normalized -
snorm= signed normalized -
uint= unsigned int -
sint= signed int -
float= floating point
If the format has the -srgb suffix, then sRGB conversions from gamma to linear
and vice versa are applied during the reading and writing of color values in the
shader. Compressed texture formats are provided by features. Their naming
should follow the convention here, with the texture name as a prefix, e.g. etc2-rgba8unorm.
The texel block is a single addressable element of the textures in pixel-based GPUTextureFormats,
and a single compressed block of the textures in block-based compressed GPUTextureFormats.
The texel block width and texel block height specify the dimensions of one texel block.
- For pixel-based GPUTextureFormats, the texel block width and texel block height are always 1.
- For block-based compressed GPUTextureFormats, the texel block width is the number of texels in each row of one texel block, and the texel block height is the number of texel rows in one texel block.

See § 26.1 Texture Format Capabilities for an exhaustive list of values for every texture format.
The texel block copy footprint of an aspect of a GPUTextureFormat is the number of
bytes one texel block occupies during an image copy, if applicable.
Note: The texel block memory cost of a GPUTextureFormat is the number of
bytes needed to store one texel block. It is not fully defined for all formats. This value is informative and non-normative.
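The texel block copy footprint can be put to work when laying out an image copy. The sketch below assumes the block dimensions and copy footprint of "bc1-rgba-unorm" (4×4 texel blocks, 8 bytes per block), as listed in § 26.1 Texture Format Capabilities; blocksPerRow is a hypothetical helper, not part of the API.

```javascript
// For block-based compressed formats, the bytes in one row of an image copy
// are derived from the number of texel blocks that row spans.
function blocksPerRow(widthInTexels, texelBlockWidth) {
  return Math.ceil(widthInTexels / texelBlockWidth);
}

// "bc1-rgba-unorm": 4×4 texel blocks, 8-byte texel block copy footprint.
const bc1 = { blockWidth: 4, copyFootprint: 8 };

const width = 256; // copy width in texels
const bytesPerRow = blocksPerRow(width, bc1.blockWidth) * bc1.copyFootprint;
// 256 / 4 = 64 blocks per row; 64 × 8 = 512 bytes per row
```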
enum GPUTextureFormat {
    // 8-bit formats
    "r8unorm",
    "r8snorm",
    "r8uint",
    "r8sint",

    // 16-bit formats
    "r16uint",
    "r16sint",
    "r16float",
    "rg8unorm",
    "rg8snorm",
    "rg8uint",
    "rg8sint",

    // 32-bit formats
    "r32uint",
    "r32sint",
    "r32float",
    "rg16uint",
    "rg16sint",
    "rg16float",
    "rgba8unorm",
    "rgba8unorm-srgb",
    "rgba8snorm",
    "rgba8uint",
    "rgba8sint",
    "bgra8unorm",
    "bgra8unorm-srgb",
    // Packed 32-bit formats
    "rgb9e5ufloat",
    "rgb10a2uint",
    "rgb10a2unorm",
    "rg11b10ufloat",

    // 64-bit formats
    "rg32uint",
    "rg32sint",
    "rg32float",
    "rgba16uint",
    "rgba16sint",
    "rgba16float",

    // 128-bit formats
    "rgba32uint",
    "rgba32sint",
    "rgba32float",

    // Depth/stencil formats
    "stencil8",
    "depth16unorm",
    "depth24plus",
    "depth24plus-stencil8",
    "depth32float",

    // "depth32float-stencil8" feature
    "depth32float-stencil8",

    // BC compressed formats usable if "texture-compression-bc" is both
    // supported by the device/user agent and enabled in requestDevice.
    "bc1-rgba-unorm",
    "bc1-rgba-unorm-srgb",
    "bc2-rgba-unorm",
    "bc2-rgba-unorm-srgb",
    "bc3-rgba-unorm",
    "bc3-rgba-unorm-srgb",
    "bc4-r-unorm",
    "bc4-r-snorm",
    "bc5-rg-unorm",
    "bc5-rg-snorm",
    "bc6h-rgb-ufloat",
    "bc6h-rgb-float",
    "bc7-rgba-unorm",
    "bc7-rgba-unorm-srgb",

    // ETC2 compressed formats usable if "texture-compression-etc2" is both
    // supported by the device/user agent and enabled in requestDevice.
    "etc2-rgb8unorm",
    "etc2-rgb8unorm-srgb",
    "etc2-rgb8a1unorm",
    "etc2-rgb8a1unorm-srgb",
    "etc2-rgba8unorm",
    "etc2-rgba8unorm-srgb",
    "eac-r11unorm",
    "eac-r11snorm",
    "eac-rg11unorm",
    "eac-rg11snorm",

    // ASTC compressed formats usable if "texture-compression-astc" is both
    // supported by the device/user agent and enabled in requestDevice.
    "astc-4x4-unorm",
    "astc-4x4-unorm-srgb",
    "astc-5x4-unorm",
    "astc-5x4-unorm-srgb",
    "astc-5x5-unorm",
    "astc-5x5-unorm-srgb",
    "astc-6x5-unorm",
    "astc-6x5-unorm-srgb",
    "astc-6x6-unorm",
    "astc-6x6-unorm-srgb",
    "astc-8x5-unorm",
    "astc-8x5-unorm-srgb",
    "astc-8x6-unorm",
    "astc-8x6-unorm-srgb",
    "astc-8x8-unorm",
    "astc-8x8-unorm-srgb",
    "astc-10x5-unorm",
    "astc-10x5-unorm-srgb",
    "astc-10x6-unorm",
    "astc-10x6-unorm-srgb",
    "astc-10x8-unorm",
    "astc-10x8-unorm-srgb",
    "astc-10x10-unorm",
    "astc-10x10-unorm-srgb",
    "astc-12x10-unorm",
    "astc-12x10-unorm-srgb",
    "astc-12x12-unorm",
    "astc-12x12-unorm-srgb",
};
The depth component of the "depth24plus" and "depth24plus-stencil8" formats may be implemented as either a 24-bit depth value or a "depth32float" value.
The "stencil8" format may be implemented as either a real "stencil8" format, or "depth24stencil8", where the depth aspect is hidden and inaccessible.
- For 24-bit depth, 1 ULP has a constant value of 1 / (2^24 − 1).
- For depth32float, 1 ULP has a variable value no greater than 1 / (2^24).
A format is renderable if it is either a color renderable format, or a depth-or-stencil format.
If a format is listed in § 26.1.1 Plain color formats with RENDER_ATTACHMENT capability, it is a
color renderable format. Any other format is not a color renderable format.
All depth-or-stencil formats are renderable.
A renderable format is also blendable if it can be used with render pipeline blending. See § 26.1 Texture Format Capabilities.
A format is filterable if it supports the GPUTextureSampleType "float" (not just "unfilterable-float");
that is, it can be used with "filtering" GPUSamplers.
See § 26.1 Texture Format Capabilities.
Arguments:

- GPUTextureFormat format
- GPUTextureAspect aspect

Returns: GPUTextureFormat or null

- If aspect is:
  "all"
    Return format.
  "depth-only"
  "stencil-only"
    If format is a depth-stencil-format: Return the aspect-specific format of format according to § 26.1.2 Depth-stencil formats, or null if the aspect is not present in format.
- Return null.
Use of some texture formats requires a feature to be enabled on the GPUDevice. Because new
formats can be added to the specification, those enum values may not be known by the implementation.
In order to normalize behavior across implementations, attempting to use a format that requires a
feature will throw an exception if the associated feature is not enabled on the device. This makes
the behavior the same as when the format is unknown to the implementation.
See § 26.1 Texture Format Capabilities for information about which GPUTextureFormats require features.
To validate texture format required features of a GPUTextureFormat format with logical device device, run the following steps:

- If format requires a feature and device.[[features]] does not contain the feature:
  - Throw a TypeError.
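A defensive pattern that follows from the above: check the relevant feature on the device before using a feature-gated format, since using such a format without the feature throws. selectTextureFormat is a hypothetical helper, not part of the API.

```javascript
// Prefer a feature-gated compressed format when available; otherwise fall
// back to an uncompressed format that every device supports.
function selectTextureFormat(device) {
  if (device.features.has('texture-compression-bc')) {
    return 'bc7-rgba-unorm'; // requires "texture-compression-bc"
  }
  return 'rgba8unorm'; // always supported
}
```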
6.4. GPUExternalTexture
A GPUExternalTexture is a sampleable 2D texture wrapping an external video object.
The contents of a GPUExternalTexture object are a snapshot and may not change, either from inside WebGPU
(it is only sampleable) or from outside WebGPU (e.g. due to video frame advancement).
They are bound into bind group layouts using the externalTexture bind group layout entry member.
External textures use several binding slots: see Exceeds the binding slot limits.
The underlying representation of an external texture is unobservable (except for sampling behavior) but typically may include:

- Up to three 2D planes of data (e.g. RGBA, Y+UV, Y+U+V).
- Metadata for converting coordinates before reading from those planes (crop and rotation).
- Metadata for converting values into the specified output color space (matrices, gammas, 3D LUT).
The configuration used may not be stable across time, systems, user agents, media sources, or frames within a single video source. In order to account for many possible representations, the binding conservatively uses the following, for each external texture:

- three sampled texture bindings (for up to 3 planes),
- one sampled texture binding for a 3D LUT,
- one sampler binding to sample the 3D LUT, and
- one uniform buffer binding for metadata.
[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUExternalTexture {
};
GPUExternalTexture includes GPUObjectBase;
GPUExternalTexture has the following internal slots:
[[expired]], of type boolean
  Indicates whether the object has expired (can no longer be used). Initially set to false.
  Note: Unlike similar [[destroyed]] slots, this can change from true back to false.
[[descriptor]], of type GPUExternalTextureDescriptor
  The descriptor with which the texture was created.
6.4.1. Importing External Textures
An external texture is created from an external video object
using importExternalTexture().
An external texture created from an HTMLVideoElement expires (is destroyed) automatically in a
task after it is imported, instead of manually or upon garbage collection like other resources.
When an external texture expires, its [[expired]] slot changes to true.
An external texture created from a VideoFrame expires (is destroyed) when, and only when,
the source VideoFrame is closed,
either explicitly by close(), or by other means.
Note: As noted in decode(), authors should call close() on output VideoFrames to avoid decoder stalls.
If an imported VideoFrame is dropped without being closed, the imported GPUExternalTexture object will keep it alive until it is also dropped.
The VideoFrame cannot be garbage collected until both objects are dropped.
Garbage collection is unpredictable, so this may still stall the video decoder.
Once the GPUExternalTexture expires, importExternalTexture() must be called again.
However, the user agent may un-expire and return the same GPUExternalTexture again, instead of
creating a new one. This will commonly happen unless the execution of the application is scheduled
to match the video’s frame rate (e.g. using requestVideoFrameCallback()).
If the same object is returned again, it will compare equal, and GPUBindGroups, GPURenderBundles, etc. referencing the previous object can still be used.
dictionary GPUExternalTextureDescriptor : GPUObjectDescriptorBase {
    required (HTMLVideoElement or VideoFrame) source;
    PredefinedColorSpace colorSpace = "srgb";
};
importExternalTexture(descriptor)
  Creates a GPUExternalTexture wrapping the provided image source.
  Called on: GPUDevice this.

  Arguments for the GPUDevice.importExternalTexture(descriptor) method.
  | Parameter | Type | Nullable | Optional | Description |
  |---|---|---|---|---|
  | descriptor | GPUExternalTextureDescriptor | ✘ | ✘ | Provides the external image source object (and any creation options). |

  Returns: GPUExternalTexture

  Content timeline steps:
  - Let source be descriptor.source.
  - If the current image contents of source are the same as on the most recent importExternalTexture() call with the same descriptor (ignoring label), and the user agent chooses to reuse it:
    - Let previousResult be the GPUExternalTexture returned previously.
    - Set previousResult.[[expired]] to false, renewing ownership of the underlying resource.
    - Let result be previousResult.
    Note: This allows the application to detect duplicate imports and avoid re-creating dependent objects (such as GPUBindGroups). Implementations still need to be able to handle a single frame being wrapped by multiple GPUExternalTextures, since import metadata like colorSpace can change even for the same frame.
  - Otherwise:
    - If source is not origin-clean, throw a SecurityError and stop.
    - Let usability be ? check the usability of the image argument(source).
    - If usability is not good:
      - Return an invalid GPUExternalTexture.
    - Let data be the result of converting the current image contents of source into the color space descriptor.colorSpace with unpremultiplied alpha.
      This may result in values outside of the range [0, 1]. If clamping is desired, it may be performed after sampling.
      Note: This is described like a copy, but may be implemented as a reference to read-only underlying data plus appropriate metadata to perform conversion later.
    - Let result be a new GPUExternalTexture object wrapping data.
  - If source is an HTMLVideoElement, queue an automatic expiry task with device this and the following steps:
    - Set result.[[expired]] to true, releasing ownership of the underlying resource.
    Note: An HTMLVideoElement should be imported in the same task that samples the texture (which should generally be scheduled using requestVideoFrameCallback() or requestAnimationFrame() depending on the application). Otherwise, a texture could get destroyed by these steps before the application is finished using it.
  - If source is a VideoFrame, then when source is closed, run the following steps:
    - Set result.[[expired]] to true.
  - Return result.
const videoElement = document.createElement('video');
// ... set up videoElement, wait for it to be ready...

function frame() {
  requestAnimationFrame(frame);

  // Always re-import the video on every animation frame, because the
  // import is likely to have expired.
  // The browser may cache and reuse a past frame, and if it does it
  // may return the same GPUExternalTexture object again.
  // In this case, old bind groups are still valid.
  const externalTexture = gpuDevice.importExternalTexture({
    source: videoElement,
  });

  // ... render using externalTexture...
}
requestAnimationFrame(frame);
The same pattern, where requestVideoFrameCallback() is available:
const videoElement = document.createElement('video');
// ... set up videoElement...

function frame() {
  videoElement.requestVideoFrameCallback(frame);

  // Always re-import, because we know the video frame has advanced
  const externalTexture = gpuDevice.importExternalTexture({
    source: videoElement,
  });

  // ... render using externalTexture...
}
videoElement.requestVideoFrameCallback(frame);
6.4.2. Sampling External Textures
External textures are represented in WGSL with texture_external and may be read using textureLoad and textureSampleBaseClampToEdge.
The sampler provided to textureSampleBaseClampToEdge is used to sample the underlying textures.
The result is in the color space set by colorSpace.
It is implementation-dependent whether, for any given external texture, the sampler (and filtering)
is applied before or after conversion from underlying values into the specified color space.
Note: If the internal representation is an RGBA plane, sampling behaves as on a regular 2D texture. If there are several underlying planes (e.g. Y+UV), the sampler is used to sample each underlying texture separately, prior to conversion from YUV to the specified color space.
7. Samplers
7.1. GPUSampler
A GPUSampler encodes transformations and filtering information that can
be used in a shader to interpret texture resource data.
GPUSamplers are created via createSampler().
[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUSampler {
};
GPUSampler includes GPUObjectBase;
GPUSampler has the following internal slots:
[[descriptor]], of type GPUSamplerDescriptor, readonly
  The GPUSamplerDescriptor with which the GPUSampler was created.
[[isComparison]], of type boolean
  Whether the GPUSampler is used as a comparison sampler.
[[isFiltering]], of type boolean
  Whether the GPUSampler weights multiple samples of a texture.
7.1.1. GPUSamplerDescriptor
A GPUSamplerDescriptor specifies the options to use to create a GPUSampler.
dictionary GPUSamplerDescriptor : GPUObjectDescriptorBase {
    GPUAddressMode addressModeU = "clamp-to-edge";
    GPUAddressMode addressModeV = "clamp-to-edge";
    GPUAddressMode addressModeW = "clamp-to-edge";
    GPUFilterMode magFilter = "nearest";
    GPUFilterMode minFilter = "nearest";
    GPUMipmapFilterMode mipmapFilter = "nearest";
    float lodMinClamp = 0;
    float lodMaxClamp = 32;
    GPUCompareFunction compare;
    [Clamp] unsigned short maxAnisotropy = 1;
};
addressModeU, of type GPUAddressMode, defaulting to "clamp-to-edge"
addressModeV, of type GPUAddressMode, defaulting to "clamp-to-edge"
addressModeW, of type GPUAddressMode, defaulting to "clamp-to-edge"
  Specifies the address modes for the texture width, height, and depth coordinates, respectively.
magFilter, of type GPUFilterMode, defaulting to "nearest"
  Specifies the sampling behavior when the sample footprint is smaller than or equal to one texel.
minFilter, of type GPUFilterMode, defaulting to "nearest"
  Specifies the sampling behavior when the sample footprint is larger than one texel.
mipmapFilter, of type GPUMipmapFilterMode, defaulting to "nearest"
  Specifies behavior for sampling between mipmap levels.
lodMinClamp, of type float, defaulting to 0
lodMaxClamp, of type float, defaulting to 32
  Specifies the minimum and maximum levels of detail, respectively, used internally when sampling a texture.
compare, of type GPUCompareFunction
  When provided the sampler will be a comparison sampler with the specified GPUCompareFunction.
  Note: Comparison samplers may use filtering, but the sampling results will be implementation-dependent and may differ from the normal filtering rules.
maxAnisotropy, of type unsigned short, defaulting to 1
  Specifies the maximum anisotropy value clamp used by the sampler.
  Note: Most implementations support maxAnisotropy values in range between 1 and 16, inclusive. The used value of maxAnisotropy will be clamped to the maximum value that the platform supports.
Issue: Explain how LOD is calculated and whether there are differences here between platforms.

Issue: Explain what anisotropic sampling is.
GPUAddressMode describes the behavior of the sampler if the sample footprint extends beyond
the bounds of the sampled texture.
Issue: Describe a "sample footprint" in greater detail.
enum GPUAddressMode {
    "clamp-to-edge",
    "repeat",
    "mirror-repeat",
};
"clamp-to-edge"-
Texture coordinates are clamped between 0.0 and 1.0, inclusive.
"repeat"-
Texture coordinates wrap to the other side of the texture.
"mirror-repeat"-
Texture coordinates wrap to the other side of the texture, but the texture is flipped when the integer part of the coordinate is odd.
GPUFilterMode and GPUMipmapFilterMode describe the behavior of the sampler if the sample footprint does not exactly
match one texel.
enum GPUFilterMode {
    "nearest",
    "linear",
};

enum GPUMipmapFilterMode {
    "nearest",
    "linear",
};
"nearest"-
Return the value of the texel nearest to the texture coordinates.
"linear"-
Select two texels in each dimension and return a linear interpolation between their values.
GPUCompareFunction specifies the behavior of a comparison sampler. If a comparison sampler is
used in a shader, an input value is compared to the sampled texture value, and the result of this
comparison test (0.0f for fail, or 1.0f for pass) is used in the filtering operation.
Issue: Describe how filtering interacts with comparison sampling.
enum GPUCompareFunction {
    "never",
    "less",
    "equal",
    "less-equal",
    "greater",
    "not-equal",
    "greater-equal",
    "always",
};
"never"-
Comparison tests never pass.
"less"-
A provided value passes the comparison test if it is less than the sampled value.
"equal"-
A provided value passes the comparison test if it is equal to the sampled value.
"less-equal"-
A provided value passes the comparison test if it is less than or equal to the sampled value.
"greater"-
A provided value passes the comparison test if it is greater than the sampled value.
"not-equal"-
A provided value passes the comparison test if it is not equal to the sampled value.
"greater-equal"-
A provided value passes the comparison test if it is greater than or equal to the sampled value.
"always"-
Comparison tests always pass.
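A common use of a comparison function is a shadow-map sampler. The sketch below assumes the createSampler() API described in § 7.1.2; createShadowSampler is a hypothetical helper.

```javascript
// Providing `compare` makes the sampler a comparison sampler. With "less",
// the comparison passes when the provided reference depth is less than the
// sampled depth, i.e. the fragment is closer to the light than the occluder.
function createShadowSampler(device) {
  return device.createSampler({
    compare: 'less',
    magFilter: 'linear',
    minFilter: 'linear',
  });
}
```

In WGSL such a sampler is bound through a "comparison" GPUSamplerBindingLayout and used with a depth texture.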
7.1.2. Sampler Creation
createSampler(descriptor)
  Creates a GPUSampler.
  Called on: GPUDevice this.

  Arguments for the GPUDevice.createSampler(descriptor) method.
  | Parameter | Type | Nullable | Optional | Description |
  |---|---|---|---|---|
  | descriptor | GPUSamplerDescriptor | ✘ | ✔ | Description of the GPUSampler to create. |

  Returns: GPUSampler

  Content timeline steps:
- Let s be a new GPUSampler object.
- Issue the initialization steps on the Device timeline of this.
- Return s.

Device timeline initialization steps:

- If any of the following conditions are unsatisfied generate a validation error, make s invalid, and stop.
  - this is valid.
  - descriptor.lodMinClamp ≥ 0.
  - descriptor.lodMaxClamp ≥ descriptor.lodMinClamp.
  - descriptor.maxAnisotropy ≥ 1.
    Note: Most implementations support maxAnisotropy values in range between 1 and 16, inclusive. The provided maxAnisotropy value will be clamped to the maximum value that the platform supports.
  - If descriptor.maxAnisotropy > 1:
    - descriptor.magFilter, descriptor.minFilter, and descriptor.mipmapFilter must be "linear".
- Set s.[[descriptor]] to descriptor.
- Set s.[[isComparison]] to false if the compare attribute of s.[[descriptor]] is null or undefined. Otherwise, set it to true.
- Set s.[[isFiltering]] to false if none of minFilter, magFilter, or mipmapFilter has the value of "linear". Otherwise, set it to true.
Creating a GPUSampler that does trilinear filtering and repeats texture coordinates:

const sampler = gpuDevice.createSampler({
  addressModeU: 'repeat',
  addressModeV: 'repeat',
  magFilter: 'linear',
  minFilter: 'linear',
  mipmapFilter: 'linear',
});
8. Resource Binding
8.1. GPUBindGroupLayout
A GPUBindGroupLayout defines the interface between a set of resources bound in a GPUBindGroup and their accessibility in shader stages.
[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUBindGroupLayout {
};
GPUBindGroupLayout includes GPUObjectBase;
GPUBindGroupLayout has the following internal slots:
[[descriptor]], of type GPUBindGroupLayoutDescriptor
8.1.1. Bind Group Layout Creation
A GPUBindGroupLayout is created via GPUDevice.createBindGroupLayout().
dictionary GPUBindGroupLayoutDescriptor : GPUObjectDescriptorBase {
    required sequence<GPUBindGroupLayoutEntry> entries;
};
A GPUBindGroupLayoutEntry describes a single shader resource binding to be included in a GPUBindGroupLayout.
dictionary GPUBindGroupLayoutEntry {
    required GPUIndex32 binding;
    required GPUShaderStageFlags visibility;
    GPUBufferBindingLayout buffer;
    GPUSamplerBindingLayout sampler;
    GPUTextureBindingLayout texture;
    GPUStorageTextureBindingLayout storageTexture;
    GPUExternalTextureBindingLayout externalTexture;
};
GPUBindGroupLayoutEntry dictionaries have the following members:
binding, of type GPUIndex32
  A unique identifier for a resource binding within the GPUBindGroupLayout, corresponding to a GPUBindGroupEntry.binding and a @binding attribute in the GPUShaderModule.
visibility, of type GPUShaderStageFlags
  A bitset of the members of GPUShaderStage. Each set bit indicates that a GPUBindGroupLayoutEntry's resource will be accessible from the associated shader stage.
buffer, of type GPUBufferBindingLayout
  When provided, indicates the binding resource type for this GPUBindGroupLayoutEntry is GPUBufferBinding.
sampler, of type GPUSamplerBindingLayout
  When provided, indicates the binding resource type for this GPUBindGroupLayoutEntry is GPUSampler.
texture, of type GPUTextureBindingLayout
  When provided, indicates the binding resource type for this GPUBindGroupLayoutEntry is GPUTextureView.
storageTexture, of type GPUStorageTextureBindingLayout
  When provided, indicates the binding resource type for this GPUBindGroupLayoutEntry is GPUTextureView.
externalTexture, of type GPUExternalTextureBindingLayout
  When provided, indicates the binding resource type for this GPUBindGroupLayoutEntry is GPUExternalTexture.
typedef [EnforceRange] unsigned long GPUShaderStageFlags;

[Exposed=(Window, DedicatedWorker), SecureContext]
namespace GPUShaderStage {
    const GPUFlagsConstant VERTEX = 0x1;
    const GPUFlagsConstant FRAGMENT = 0x2;
    const GPUFlagsConstant COMPUTE = 0x4;
};
GPUShaderStage contains the following flags, which describe which shader stages a
corresponding GPUBindGroupEntry for this GPUBindGroupLayoutEntry will be visible to:
VERTEX
  The bind group entry will be accessible to vertex shaders.
FRAGMENT
  The bind group entry will be accessible to fragment shaders.
COMPUTE
  The bind group entry will be accessible to compute shaders.
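Since visibility is a bitset, flags are combined with bitwise OR. A minimal sketch, with the flag values mirroring the namespace definition above (STAGE is a local stand-in, not the real GPUShaderStage global):

```javascript
// Flag values as defined in the GPUShaderStage namespace.
const STAGE = { VERTEX: 0x1, FRAGMENT: 0x2, COMPUTE: 0x4 };

// A uniform buffer visible to both the vertex and fragment stages:
const layoutEntry = {
  binding: 0,
  visibility: STAGE.VERTEX | STAGE.FRAGMENT, // 0x1 | 0x2 === 0x3
  buffer: { type: 'uniform' },
};
```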
The binding type of a GPUBindGroupLayoutEntry is determined by which member of the GPUBindGroupLayoutEntry is defined: buffer, sampler, texture, storageTexture, or externalTexture.
Only one may be defined for any given GPUBindGroupLayoutEntry.
Each member has an associated GPUBindingResource type and each binding type has an associated internal usage, given by this table:
| Binding member | Resource type | Binding type | Binding usage |
|---|---|---|---|
| buffer | GPUBufferBinding | "uniform" | constant |
| | | "storage" | storage |
| | | "read-only-storage" | storage-read |
| sampler | GPUSampler | "filtering" | constant |
| | | "non-filtering" | constant |
| | | "comparison" | constant |
| texture | GPUTextureView | "float" | constant |
| | | "unfilterable-float" | constant |
| | | "depth" | constant |
| | | "sint" | constant |
| | | "uint" | constant |
| storageTexture | GPUTextureView | "write-only" | storage |
| | | "read-write" | storage |
| | | "read-only" | storage-read |
| externalTexture | GPUExternalTexture | — | constant |
A list of GPUBindGroupLayoutEntry values entries exceeds the binding slot limits of supported limits limits if the number of slots used toward a limit exceeds the supported value in limits.
Each entry may use multiple slots toward multiple limits.

- For each entry in entries, if:
  entry.buffer?.type is "uniform" and entry.buffer?.hasDynamicOffset is true
    Consider 1 maxDynamicUniformBuffersPerPipelineLayout slot to be used.
  entry.buffer?.type is "storage" and entry.buffer?.hasDynamicOffset is true
    Consider 1 maxDynamicStorageBuffersPerPipelineLayout slot to be used.
- For each shader stage stage in « VERTEX, FRAGMENT, COMPUTE »:
  - For each entry in entries for which entry.visibility contains stage, if:
    entry.buffer?.type is "uniform"
      Consider 1 maxUniformBuffersPerShaderStage slot to be used.
    entry.buffer?.type is "storage" or "read-only-storage"
      Consider 1 maxStorageBuffersPerShaderStage slot to be used.
    entry.sampler is provided
      Consider 1 maxSamplersPerShaderStage slot to be used.
    entry.texture is provided
      Consider 1 maxSampledTexturesPerShaderStage slot to be used.
    entry.storageTexture is provided
      Consider 1 maxStorageTexturesPerShaderStage slot to be used.
    entry.externalTexture is provided
      Consider 4 maxSampledTexturesPerShaderStage slots, 1 maxSamplersPerShaderStage slot, and 1 maxUniformBuffersPerShaderStage slot to be used.
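A simplified sketch of the per-stage accounting above, covering only the sampled-texture limit; as described, an externalTexture entry conservatively counts as 4 sampled texture slots. sampledTextureSlots is a hypothetical helper, not part of the API.

```javascript
// Count sampled-texture slots used toward maxSampledTexturesPerShaderStage
// for one shader stage (stageFlag is a GPUShaderStage bit, e.g. 0x2).
function sampledTextureSlots(entries, stageFlag) {
  let slots = 0;
  for (const entry of entries) {
    if ((entry.visibility & stageFlag) === 0) continue; // not visible here
    if (entry.texture) slots += 1;          // one slot per sampled texture
    if (entry.externalTexture) slots += 4;  // 3 planes + 3D LUT
  }
  return slots;
}
```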
enum GPUBufferBindingType {
    "uniform",
    "storage",
    "read-only-storage",
};

dictionary GPUBufferBindingLayout {
    GPUBufferBindingType type = "uniform";
    boolean hasDynamicOffset = false;
    GPUSize64 minBindingSize = 0;
};
GPUBufferBindingLayout dictionaries have the following members:
type, of type GPUBufferBindingType, defaulting to "uniform"
  Indicates the type required for buffers bound to this binding.
hasDynamicOffset, of type boolean, defaulting to false
  Indicates whether this binding requires a dynamic offset.
minBindingSize, of type GPUSize64, defaulting to 0
  Indicates the minimum size of a buffer binding used with this bind point.
  Bindings are always validated against this size in createBindGroup().
  If this is not 0, pipeline creation additionally validates that this value ≥ the minimum buffer binding size of the variable.
  If this is 0, it is ignored by pipeline creation, and instead draw/dispatch commands validate that each binding in the GPUBindGroup satisfies the minimum buffer binding size of the variable.
  Note: Similar execution-time validation is theoretically possible for other binding-related fields specified for early validation, like sampleType and format, which currently can only be validated in pipeline creation. However, such execution-time validation could be costly or unnecessarily complex, so it is available only for minBindingSize which is expected to have the most ergonomic impact.
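A sketch of the trade-off above: declaring minBindingSize moves buffer size validation to createBindGroup() and pipeline creation, instead of paying for it at every draw/dispatch. The 64-byte size is an assumption matching a hypothetical 4×4 f32 matrix uniform.

```javascript
// A layout entry that opts into early size validation: bindings smaller
// than 64 bytes will be rejected at createBindGroup() time.
const layoutEntry = {
  binding: 0,
  visibility: 0x1, // GPUShaderStage.VERTEX
  buffer: {
    type: 'uniform',
    minBindingSize: 64, // 16 × f32, e.g. one 4×4 matrix
  },
};
```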
enum GPUSamplerBindingType {
    "filtering",
    "non-filtering",
    "comparison",
};

dictionary GPUSamplerBindingLayout {
    GPUSamplerBindingType type = "filtering";
};
GPUSamplerBindingLayout dictionaries have the following members:
type, of type GPUSamplerBindingType, defaulting to "filtering"
  Indicates the required type of a sampler bound to this binding.
enum GPUTextureSampleType {
    "float",
    "unfilterable-float",
    "depth",
    "sint",
    "uint",
};

dictionary GPUTextureBindingLayout {
    GPUTextureSampleType sampleType = "float";
    GPUTextureViewDimension viewDimension = "2d";
    boolean multisampled = false;
};
GPUTextureBindingLayout dictionaries have the following members:
sampleType, of type GPUTextureSampleType, defaulting to "float"
  Indicates the type required for texture views bound to this binding.
viewDimension, of type GPUTextureViewDimension, defaulting to "2d"
  Indicates the required dimension for texture views bound to this binding.
multisampled, of type boolean, defaulting to false
  Indicates whether or not texture views bound to this binding must be multisampled.
enum GPUStorageTextureAccess {
    "write-only",
    "read-only",
    "read-write",
};

dictionary GPUStorageTextureBindingLayout {
    GPUStorageTextureAccess access = "write-only";
    required GPUTextureFormat format;
    GPUTextureViewDimension viewDimension = "2d";
};
GPUStorageTextureBindingLayout dictionaries have the following members:
access, of type GPUStorageTextureAccess, defaulting to "write-only"
  The access mode for this binding, indicating readability and writability.
format, of type GPUTextureFormat
  The required format of texture views bound to this binding.
viewDimension, of type GPUTextureViewDimension, defaulting to "2d"
  Indicates the required dimension for texture views bound to this binding.
dictionary GPUExternalTextureBindingLayout {
};
A GPUBindGroupLayout object has the following internal slots:
[[entryMap]], of type ordered map<GPUSize32, GPUBindGroupLayoutEntry>
  The map of binding indices pointing to the GPUBindGroupLayoutEntrys, which this GPUBindGroupLayout describes.
[[dynamicOffsetCount]], of type GPUSize32
  The number of buffer bindings with dynamic offsets in this GPUBindGroupLayout.
[[exclusivePipeline]], of type GPUPipelineBase?, initially null
  The pipeline that created this GPUBindGroupLayout, if it was created as part of a default pipeline layout. If not null, GPUBindGroups created with this GPUBindGroupLayout can only be used with the specified GPUPipelineBase.
createBindGroupLayout(descriptor)
  Creates a GPUBindGroupLayout.
  Called on: GPUDevice this.

  Arguments for the GPUDevice.createBindGroupLayout(descriptor) method.
  | Parameter | Type | Nullable | Optional | Description |
  |---|---|---|---|---|
  | descriptor | GPUBindGroupLayoutDescriptor | ✘ | ✘ | Description of the GPUBindGroupLayout to create. |

  Returns: GPUBindGroupLayout

  Content timeline steps:
  - For each GPUBindGroupLayoutEntry entry in descriptor.entries:
    - If entry.storageTexture is provided:
      - ? Validate texture format required features for entry.storageTexture.format with this.[[device]].
  - Let layout be a new GPUBindGroupLayout object.
  - Issue the initialization steps on the Device timeline of this.
  - Return layout.
Device timeline initialization steps:

- If any of the following conditions are unsatisfied generate a validation error, make layout invalid, and stop.
  - this is valid.
  - Let limits be this.[[device]].[[limits]].
  - The binding of each entry in descriptor is unique.
  - The binding of each entry in descriptor must be < limits.maxBindingsPerBindGroup.
  - descriptor.entries must not exceed the binding slot limits of limits.
  - For each GPUBindGroupLayoutEntry entry in descriptor.entries:
    - Exactly one of entry.buffer, entry.sampler, entry.texture, and entry.storageTexture is provided.
    - entry.visibility contains only bits defined in GPUShaderStage.
    - If entry.visibility includes VERTEX:
      - entry.buffer?.type must not be "storage". Note that "read-only-storage" is allowed.
      - entry.storageTexture?.access must be "read-only".
    - If entry.texture?.multisampled is true:
      - entry.texture.viewDimension is "2d".
      - entry.texture.sampleType is not "float".
    - If entry.storageTexture is provided:
      - entry.storageTexture.viewDimension is not "cube" or "cube-array".
      - entry.storageTexture.format must be a format which can support storage usage for the given entry.storageTexture.access according to the § 26.1.1 Plain color formats table.
- Set layout.[[descriptor]] to descriptor.
- Set layout.[[dynamicOffsetCount]] to the number of entries in descriptor where buffer is provided and buffer.hasDynamicOffset is true.
- For each GPUBindGroupLayoutEntry entry in descriptor.entries:
  - Insert entry into layout.[[entryMap]] with the key of entry.binding.
8.1.2. Compatibility
GPUBindGroupLayout objects a and b are considered group-equivalent if and only if all of the following conditions are satisfied:

- for any binding number binding, one of the following conditions is satisfied:
  - it’s missing from both a.[[entryMap]] and b.[[entryMap]].
  - a.[[entryMap]][binding] == b.[[entryMap]][binding]

If bind group layouts are group-equivalent they can be used interchangeably in all contexts.
8.2. GPUBindGroup
A GPUBindGroup defines a set of resources to be bound together in a group
and how the resources are used in shader stages.
[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUBindGroup {
};
GPUBindGroup includes GPUObjectBase;
A GPUBindGroup object has the following internal slots:
[[layout]], of type GPUBindGroupLayout, readonly
  The GPUBindGroupLayout associated with this GPUBindGroup.
[[entries]], of type sequence<GPUBindGroupEntry>, readonly
  The set of GPUBindGroupEntrys this GPUBindGroup describes.
[[usedResources]], of type ordered map<subresource, list<internal usage>>, readonly
  The set of buffer and texture subresources used by this bind group, associated with lists of the internal usage flags.
The bound buffer ranges of a GPUBindGroup bindGroup, given list<GPUBufferDynamicOffset> dynamicOffsets, are computed as follows:
- Let result be a new set<(GPUBindGroupLayoutEntry, GPUBufferBinding)>.
- Let dynamicOffsetIndex be 0.
- For each GPUBindGroupEntry bindGroupEntry in bindGroup.[[entries]], sorted by bindGroupEntry.binding:
    - Let bindGroupLayoutEntry be bindGroup.[[layout]].[[entryMap]][bindGroupEntry.binding].
    - If bindGroupLayoutEntry.buffer is not provided, continue.
    - Let bound be a copy of bindGroupEntry.resource.
    - Assert: bound is a GPUBufferBinding.
    - If bindGroupLayoutEntry.buffer.hasDynamicOffset:
        - Increment bound.offset by dynamicOffsets[dynamicOffsetIndex].
        - Increment dynamicOffsetIndex by 1.
    - Append (bindGroupLayoutEntry, bound) to result.
- Return result.
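On the application side, dynamic offsets are typically produced by packing per-draw data into one large uniform buffer at a stride rounded up to the device's minUniformBufferOffsetAlignment (256 is the default limit). A sketch, with an illustrative helper name:

```javascript
// Compute dynamic offsets for N draws packed into one uniform buffer.
// Each draw's data occupies `bytesPerDraw` bytes, and each dynamic offset
// must be a multiple of the uniform buffer offset alignment (default 256).
function dynamicOffsetsForDraws(drawCount, bytesPerDraw, alignment = 256) {
  const stride = Math.ceil(bytesPerDraw / alignment) * alignment;
  return Array.from({ length: drawCount }, (_, i) => i * stride);
}
```

Each offset would then be passed in the dynamicOffsets argument of setBindGroup(), and is added to the GPUBufferBinding's base offset by the algorithm above.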
8.2.1. Bind Group Creation
A GPUBindGroup is created via GPUDevice.createBindGroup().
dictionary GPUBindGroupDescriptor : GPUObjectDescriptorBase {
    required GPUBindGroupLayout layout;
    required sequence<GPUBindGroupEntry> entries;
};
GPUBindGroupDescriptor dictionaries have the following members:
- layout, of type GPUBindGroupLayout
    The GPUBindGroupLayout the entries of this bind group will conform to.
- entries, of type sequence<GPUBindGroupEntry>
    A list of entries describing the resources to expose to the shader for each binding described by the layout.
typedef (GPUSampler or GPUTextureView or GPUBufferBinding or GPUExternalTexture) GPUBindingResource;

dictionary GPUBindGroupEntry {
    required GPUIndex32 binding;
    required GPUBindingResource resource;
};
A GPUBindGroupEntry describes a single resource to be bound in a GPUBindGroup, and has the
following members:
- binding, of type GPUIndex32
    A unique identifier for a resource binding within the GPUBindGroup, corresponding to a GPUBindGroupLayoutEntry.binding and a @binding attribute in the GPUShaderModule.
- resource, of type GPUBindingResource
    The resource to bind, which may be a GPUSampler, GPUTextureView, GPUExternalTexture, or GPUBufferBinding.
dictionary GPUBufferBinding {
    required GPUBuffer buffer;
    GPUSize64 offset = 0;
    GPUSize64 size;
};
A GPUBufferBinding describes a buffer and optional range to bind as a resource, and has the
following members:
- buffer, of type GPUBuffer
    The GPUBuffer to bind.
- offset, of type GPUSize64, defaulting to 0
    The offset, in bytes, from the beginning of buffer to the beginning of the range exposed to the shader by the buffer binding.
- size, of type GPUSize64
    The size, in bytes, of the buffer binding. If not provided, specifies the range starting at offset and ending at the end of buffer.
createBindGroup(descriptor)
    Creates a GPUBindGroup.
    Called on: GPUDevice this.
    Arguments:
        descriptor (GPUBindGroupDescriptor, required): Description of the GPUBindGroup to create.
    Returns: GPUBindGroup
    Content timeline steps:
    - Let bindGroup be a new GPUBindGroup object.
    - Issue the initialization steps on the Device timeline of this.
    - Return bindGroup.
Device timeline initialization steps:
- Let limits be this.[[device]].[[limits]].
- If any of the following conditions are unsatisfied, generate a validation error, make bindGroup invalid, and stop.
    - descriptor.layout is valid to use with this.
    - The number of entries of descriptor.layout is exactly equal to the number of descriptor.entries.
    - For each GPUBindGroupEntry bindingDescriptor in descriptor.entries:
        - Let resource be bindingDescriptor.resource.
        - There is exactly one GPUBindGroupLayoutEntry layoutBinding in descriptor.layout.entries such that layoutBinding.binding equals bindingDescriptor.binding.
        - If the defined binding member for layoutBinding is:
            sampler:
                - resource is a GPUSampler.
                - resource is valid to use with this.
                - If layoutBinding.sampler.type is:
                    "filtering":
                        - resource.[[isComparison]] is false.
                    "non-filtering":
                        - resource.[[isFiltering]] is false.
                        - resource.[[isComparison]] is false.
                    "comparison":
                        - resource.[[isComparison]] is true.
            texture:
                - resource is a GPUTextureView.
                - resource is valid to use with this.
                - Let texture be resource.[[texture]].
                - layoutBinding.texture.viewDimension is equal to resource’s dimension.
                - layoutBinding.texture.sampleType is compatible with resource’s format.
                - texture’s usage includes TEXTURE_BINDING.
                - If layoutBinding.texture.multisampled is true, texture’s sampleCount > 1. Otherwise, texture’s sampleCount is 1.
            storageTexture:
                - resource is a GPUTextureView.
                - resource is valid to use with this.
                - Let texture be resource.[[texture]].
                - layoutBinding.storageTexture.viewDimension is equal to resource’s dimension.
                - layoutBinding.storageTexture.format is equal to resource.[[descriptor]].format.
                - texture’s usage includes STORAGE_BINDING.
                - resource.[[descriptor]].mipLevelCount must be 1.
            buffer:
                - resource is a GPUBufferBinding.
                - resource.buffer is valid to use with this.
                - The bound part designated by resource.offset and resource.size resides inside the buffer and has non-zero size.
                - effective buffer binding size(resource) ≥ layoutBinding.buffer.minBindingSize.
                - If layoutBinding.buffer.type is:
                    "uniform":
                        - effective buffer binding size(resource) ≤ limits.maxUniformBufferBindingSize.
                        - resource.offset is a multiple of limits.minUniformBufferOffsetAlignment.
                    "storage" or "read-only-storage":
                        - effective buffer binding size(resource) ≤ limits.maxStorageBufferBindingSize.
                        - effective buffer binding size(resource) is a multiple of 4.
                        - resource.offset is a multiple of limits.minStorageBufferOffsetAlignment.
            externalTexture:
                - resource is a GPUExternalTexture.
                - resource is valid to use with this.
- Set bindGroup.[[layout]] to descriptor.layout.
- Set bindGroup.[[entries]] to descriptor.entries.
- Set bindGroup.[[usedResources]] to {}.
- For each GPUBindGroupEntry bindingDescriptor in descriptor.entries:
    - Let internalUsage be the binding usage for layoutBinding.
    - Each subresource seen by resource is added to [[usedResources]] as internalUsage.
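The offset-alignment rules in the validation above are commonly handled on the application side with a small rounding helper. This is a sketch; the default values of both minUniformBufferOffsetAlignment and minStorageBufferOffsetAlignment are 256, but the actual values should be read from the device's limits:

```javascript
// Round a byte offset up to the given alignment. Offsets of "uniform"
// buffer bindings must be multiples of minUniformBufferOffsetAlignment,
// and "storage"/"read-only-storage" offsets must be multiples of
// minStorageBufferOffsetAlignment (both default to 256).
function alignTo(offset, alignment) {
  return Math.ceil(offset / alignment) * alignment;
}
```

For example, a 300-byte structure placed in a buffer would start its next binding at alignTo(300, 256), i.e. byte 512.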
GPUBufferBinding objects a and b are considered buffer-binding-aliasing if and only if all of the following are true:
- a and b are bound to the same GPUBuffer.
- The range formed by a.offset and a.size intersects the range formed by b.offset and b.size, where if a size is unspecified, the range extends to the end of the buffer.
Note: When doing this calculation, any dynamic offsets have already been applied to the ranges.
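The range-intersection test can be sketched as a small JavaScript predicate (the helper name is illustrative, not a spec algorithm); an unspecified size means the range extends to the end of the buffer:

```javascript
// Returns true if two bindings into the same buffer alias, i.e. their
// [offset, end) ranges intersect. `bufferSize` supplies the end of a
// range whose `size` member is unspecified.
function bufferBindingsAlias(a, b, bufferSize) {
  const end = (x) => (x.size === undefined ? bufferSize : x.offset + x.size);
  return a.offset < end(b) && b.offset < end(a);
}
```

Two adjacent, non-overlapping ranges (such as [0, 256) and [256, 512)) do not alias, while any shared byte makes the pair aliasing.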
8.3. GPUPipelineLayout
A GPUPipelineLayout defines the mapping between resources of all GPUBindGroup objects set up during command encoding in setBindGroup(), and the shaders of the pipeline set by GPURenderCommandsMixin.setPipeline or GPUComputePassEncoder.setPipeline.
The full binding address of a resource can be defined as a trio of:
- shader stage mask, to which the resource is visible
- bind group index
- binding number
The components of this address can also be seen as the binding space of a pipeline. A GPUBindGroup (with the corresponding GPUBindGroupLayout) covers that space for a fixed bind group index. The contained bindings need to be a superset of the resources used by the shader at this bind group index.
[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUPipelineLayout {
};
GPUPipelineLayout includes GPUObjectBase;
GPUPipelineLayout has the following internal slots:
- [[bindGroupLayouts]], of type list<GPUBindGroupLayout>
    The GPUBindGroupLayout objects provided at creation in GPUPipelineLayoutDescriptor.bindGroupLayouts.
Note: using the same GPUPipelineLayout for many GPURenderPipeline or GPUComputePipeline pipelines guarantees that the user agent doesn’t need to rebind any resources internally when there is a switch between these pipelines.
GPUComputePipeline object X was created with GPUPipelineLayout.bindGroupLayouts A, B, C. GPUComputePipeline object Y was created with GPUPipelineLayout.bindGroupLayouts A, D, C. Suppose the command encoding sequence has two dispatches:
- setBindGroup(0, ...)
- setBindGroup(1, ...)
- setBindGroup(2, ...)
- setPipeline(X)
- setBindGroup(1, ...)
- setPipeline(Y)
In this scenario, the user agent would have to re-bind the group slot 2 for the second dispatch, even though neither the GPUBindGroupLayout at index 2 of GPUPipelineLayout.bindGroupLayouts nor the GPUBindGroup at slot 2 changed.
Note: the expected usage of the GPUPipelineLayout is placing the most common and the least frequently changing bind groups at the "bottom" of the layout, meaning lower bind group slot numbers, like 0 or 1. The more frequently a bind group needs to change between draw calls, the higher its index should be. This general guideline allows the user agent to minimize state changes between draw calls, and consequently lower the CPU overhead.
8.3.1. Pipeline Layout Creation
A GPUPipelineLayout is created via GPUDevice.createPipelineLayout().
dictionary GPUPipelineLayoutDescriptor : GPUObjectDescriptorBase {
    required sequence<GPUBindGroupLayout> bindGroupLayouts;
};
GPUPipelineLayoutDescriptor dictionaries define all the GPUBindGroupLayouts used by a
pipeline, and have the following members:
- bindGroupLayouts, of type sequence<GPUBindGroupLayout>
    A list of GPUBindGroupLayouts the pipeline will use. Each element corresponds to a @group attribute in the GPUShaderModule, with the Nth element corresponding to @group(N).
createPipelineLayout(descriptor)
    Creates a GPUPipelineLayout.
    Called on: GPUDevice this.
    Arguments:
        descriptor (GPUPipelineLayoutDescriptor, required): Description of the GPUPipelineLayout to create.
    Returns: GPUPipelineLayout
    Content timeline steps:
    - Let pl be a new GPUPipelineLayout object.
    - Issue the initialization steps on the Device timeline of this.
    - Return pl.
Device timeline initialization steps:
- Let limits be this.[[device]].[[limits]].
- Let allEntries be the result of concatenating bgl.[[descriptor]].entries for all bgl in descriptor.bindGroupLayouts.
- If any of the following conditions are unsatisfied, generate a validation error, make pl invalid, and stop.
    - Every GPUBindGroupLayout in descriptor.bindGroupLayouts must be valid to use with this and have a [[exclusivePipeline]] of null.
    - The size of descriptor.bindGroupLayouts must be ≤ limits.maxBindGroups.
    - allEntries must not exceed the binding slot limits of limits.
- Set pl.[[bindGroupLayouts]] to descriptor.bindGroupLayouts.
Note: two GPUPipelineLayout objects are considered equivalent for any usage if their internal [[bindGroupLayouts]] sequences contain GPUBindGroupLayout objects that are group-equivalent.
8.4. Example
Create a GPUBindGroupLayout that describes a binding with a uniform buffer, a texture, and a sampler.
Then create a GPUBindGroup and a GPUPipelineLayout using the GPUBindGroupLayout.
const bindGroupLayout = gpuDevice.createBindGroupLayout({
    entries: [{
        binding: 0,
        visibility: GPUShaderStage.VERTEX | GPUShaderStage.FRAGMENT,
        buffer: {}
    }, {
        binding: 1,
        visibility: GPUShaderStage.FRAGMENT,
        texture: {}
    }, {
        binding: 2,
        visibility: GPUShaderStage.FRAGMENT,
        sampler: {}
    }]
});

const bindGroup = gpuDevice.createBindGroup({
    layout: bindGroupLayout,
    entries: [{
        binding: 0,
        resource: { buffer: buffer },
    }, {
        binding: 1,
        resource: texture
    }, {
        binding: 2,
        resource: sampler
    }]
});

const pipelineLayout = gpuDevice.createPipelineLayout({
    bindGroupLayouts: [bindGroupLayout]
});
9. Shader Modules
9.1. GPUShaderModule
[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUShaderModule {
    Promise<GPUCompilationInfo> getCompilationInfo();
};
GPUShaderModule includes GPUObjectBase;
GPUShaderModule is a reference to an internal shader module object.
9.1.1. Shader Module Creation
dictionary GPUShaderModuleDescriptor : GPUObjectDescriptorBase {
    required USVString code;
    object sourceMap;
    sequence<GPUShaderModuleCompilationHint> compilationHints = [];
};
- code, of type USVString
    The WGSL source code for the shader module.
- sourceMap, of type object
    If defined, MAY be interpreted as a source-map-v3 format.
    Source maps are optional, but serve as a standardized way to support dev-tool integration such as source-language debugging [SourceMap]. WGSL names (identifiers) in source maps follow the rules defined in WGSL identifier comparison.
- compilationHints, of type sequence<GPUShaderModuleCompilationHint>, defaulting to []
    A list of GPUShaderModuleCompilationHints. Any hint provided by an application should contain information about one entry point of a pipeline that will eventually be created from the entry point.
    Implementations should use any information present in the GPUShaderModuleCompilationHint to perform as much compilation as is possible within createShaderModule(). Aside from type-checking, these hints are not validated in any way.

NOTE: Supplying information in compilationHints does not have any observable effect, other than performance. It may be detrimental to performance to provide hints for pipelines that never end up being created. Because a single shader module can hold multiple entry points, and multiple pipelines can be created from a single shader module, it can be more performant for an implementation to do as much compilation as possible once in createShaderModule() rather than multiple times in the multiple calls to createComputePipeline() or createRenderPipeline().

Note: Hints are not validated in an observable way, but user agents may surface identifiable errors (like unknown entry point names or incompatible pipeline layouts) to developers, for example in the browser developer console.
createShaderModule(descriptor)
    Creates a GPUShaderModule.
    Called on: GPUDevice this.
    Arguments:
        descriptor (GPUShaderModuleDescriptor, required): Description of the GPUShaderModule to create.
    Returns: GPUShaderModule
    Content timeline steps:
    - Let sm be a new GPUShaderModule object.
    - Issue the initialization steps on the Device timeline of this.
    - Return sm.
Device timeline initialization steps:
- Let result be the result of shader module creation with the WGSL source descriptor.code.
- If any of the following requirements are unmet, generate a validation error, make sm invalid, and return.
    - this must be valid.
    - result must not be a shader-creation program error.
    Note: Uncategorized errors cannot arise from shader module creation. Implementations which detect such errors during shader module creation must behave as if the shader module is valid, and defer surfacing the error until pipeline creation.
- Describe remaining createShaderModule() validation and algorithm steps.

NOTE: User agents should not include detailed compiler error messages or shader text in the message text of validation errors arising here: these details are accessible via getCompilationInfo(). User agents should surface human-readable, formatted error details to developers for easier debugging (for example as a warning in the browser developer console, expandable to show full shader source). As shader compilation errors should be rare in production applications, user agents could choose to surface them to developers regardless of error handling (GPU error scopes or uncapturederror event handlers), e.g. as an expandable warning. If not, they should provide and document another way for developers to access human-readable error details, for example by adding a checkbox to show errors unconditionally, or by showing human-readable details when logging a GPUCompilationInfo object to the console.
Create a GPUShaderModule from WGSL code:
// A simple vertex and fragment shader pair that will fill the viewport with red.
const shaderSource = `
    var<private> pos : array<vec2<f32>, 3> = array<vec2<f32>, 3>(
        vec2(-1.0, -1.0), vec2(-1.0, 3.0), vec2(3.0, -1.0));

    @vertex
    fn vertexMain(@builtin(vertex_index) vertexIndex : u32) -> @builtin(position) vec4<f32> {
        return vec4(pos[vertexIndex], 1.0, 1.0);
    }

    @fragment
    fn fragmentMain() -> @location(0) vec4<f32> {
        return vec4(1.0, 0.0, 0.0, 1.0);
    }
`;

const shaderModule = gpuDevice.createShaderModule({
    code: shaderSource,
});
9.1.1.1. Shader Module Compilation Hints
Shader module compilation hints are optional, additional information indicating how a given GPUShaderModule entry point is intended to be used in the future. For some implementations this
information may aid in compiling the shader module earlier, potentially increasing performance.
dictionary GPUShaderModuleCompilationHint {
    required USVString entryPoint;
    (GPUPipelineLayout or GPUAutoLayoutMode) layout;
};
- layout, of type (GPUPipelineLayout or GPUAutoLayoutMode)
    A GPUPipelineLayout that the GPUShaderModule may be used with in a future createComputePipeline() or createRenderPipeline() call. If set to "auto", the default pipeline layout for the entry point associated with this hint will be used.
Compilation work may be split between createShaderModule() and createComputePipeline() / createRenderPipeline(). If an application is unable to provide hint information at the time of calling createShaderModule(), it should usually not delay calling createShaderModule(), but instead just omit the unknown information from the compilationHints sequence or the individual members of GPUShaderModuleCompilationHint. Omitting this information may cause compilation to be deferred to createComputePipeline() / createRenderPipeline().
If an author is not confident that the hint information passed to createShaderModule() will match the information later passed to createComputePipeline() / createRenderPipeline() with that same module, they should avoid passing that information to createShaderModule(), as passing mismatched information to createShaderModule() may cause unnecessary compilations to occur.
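As a sketch (the helper name and entry point names are illustrative, not from the spec), an application that already knows its pipeline layout can attach one hint per entry point when building the shader module descriptor:

```javascript
// Build a GPUShaderModuleDescriptor that hints each entry point at its
// intended layout. `layout` may be a GPUPipelineLayout or "auto";
// entry point names must match function names in the WGSL source.
function shaderDescriptorWithHints(code, layout, entryPoints) {
  return {
    code,
    compilationHints: entryPoints.map((entryPoint) => ({ entryPoint, layout })),
  };
}

const desc = shaderDescriptorWithHints(
  "/* WGSL source elided */", "auto", ["vertexMain", "fragmentMain"]);
```

The resulting descriptor would then be passed to gpuDevice.createShaderModule(desc), allowing the implementation to compile both entry points eagerly.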
9.1.2. Shader Module Compilation Information
enum GPUCompilationMessageType {
    "error",
    "warning",
    "info",
};

[Exposed=(Window, DedicatedWorker), Serializable, SecureContext]
interface GPUCompilationMessage {
    readonly attribute DOMString message;
    readonly attribute GPUCompilationMessageType type;
    readonly attribute unsigned long long lineNum;
    readonly attribute unsigned long long linePos;
    readonly attribute unsigned long long offset;
    readonly attribute unsigned long long length;
};

[Exposed=(Window, DedicatedWorker), Serializable, SecureContext]
interface GPUCompilationInfo {
    readonly attribute FrozenArray<GPUCompilationMessage> messages;
};
A GPUCompilationMessage is an informational, warning, or error message generated by the GPUShaderModule compiler. The messages are intended to be human readable to help developers
diagnose issues with their shader code. Each message may correspond to
either a single point in the shader code, a substring of the shader code, or may not correspond to
any specific point in the code at all.
GPUCompilationMessage has the following attributes:
- message, of type DOMString, readonly
    The human-readable, localizable text for this compilation message.
    Note: The message should follow the best practices for language and direction information. This includes making use of any future standards which may emerge regarding the reporting of string language and direction metadata.
    Editorial note: At the time of this writing, no language/direction recommendation is available that provides compatibility and consistency with legacy APIs, but when there is, adopt it formally.
- type, of type GPUCompilationMessageType, readonly
    The severity level of the message.
    If the type is "error", it corresponds to a shader-creation error.
- lineNum, of type unsigned long long, readonly
    The line number in the shader code the message corresponds to. Value is one-based, such that a lineNum of 1 indicates the first line of the shader code. Lines are delimited by line breaks.
    If the message corresponds to a substring, this points to the line on which the substring begins. Must be 0 if the message does not correspond to any specific point in the shader code.
- linePos, of type unsigned long long, readonly
    The offset, in UTF-16 code units, from the beginning of line lineNum of the shader code to the point or beginning of the substring that the message corresponds to. Value is one-based, such that a linePos of 1 indicates the first code unit of the line.
    If message corresponds to a substring, this points to the first UTF-16 code unit of the substring. Must be 0 if the message does not correspond to any specific point in the shader code.
- offset, of type unsigned long long, readonly
    The offset from the beginning of the shader code in UTF-16 code units to the point or beginning of the substring that message corresponds to. Must reference the same position as lineNum and linePos. Must be 0 if the message does not correspond to any specific point in the shader code.
- length, of type unsigned long long, readonly
    The number of UTF-16 code units in the substring that message corresponds to. If the message does not correspond with a substring, then length must be 0.
Note: GPUCompilationMessage.lineNum and GPUCompilationMessage.linePos are one-based since the most common use
for them is expected to be printing human readable messages that can be correlated with the line and
column numbers shown in many text editors.
Note: GPUCompilationMessage.offset and GPUCompilationMessage.length are appropriate to pass to substr() in order to retrieve the substring of the shader code the message corresponds to.
getCompilationInfo()
    Returns any messages generated during the GPUShaderModule's compilation. The locations, order, and contents of messages are implementation-defined. In particular, messages may not be ordered by lineNum.
    Called on: GPUShaderModule this.
    Returns: Promise<GPUCompilationInfo>
    Content timeline steps:
    - Let contentTimeline be the current Content timeline.
    - Let promise be a new promise.
    - Issue the synchronization steps on the Device timeline of this.
    - Return promise.
Device timeline synchronization steps:
- When the device timeline becomes informed that shader module creation has completed for this:
    - Let messages be a list of any errors, warnings, or informational messages generated during shader module creation for this.
    - Issue the subsequent steps on contentTimeline.
Content timeline steps:
- Let info be a new GPUCompilationInfo.
- For each message in messages:
    - Let m be a new GPUCompilationMessage.
    - Set m.message to be the text of message.
    - If message is associated with a specific substring or position within the shader code:
        - Set m.lineNum to the one-based number of the first line that the message refers to.
        - Set m.linePos to the one-based number of the first UTF-16 code unit on m.lineNum that the message refers to, or 1 if the message refers to the entire line.
        - Set m.offset to the number of UTF-16 code units from the beginning of the shader to the beginning of the substring or position that message refers to.
        - Set m.length to the length of the substring in UTF-16 code units that message refers to, or 0 if message refers to a position.
    - Otherwise:
        - Set m.lineNum, m.linePos, m.offset, and m.length to 0.
    - Append m to info.messages.
- Resolve promise with info.
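As the earlier note suggests, the offset/length pair is suitable for excerpting the offending source. A sketch of a formatter for GPUCompilationMessage-shaped objects (the helper name is illustrative; a real message would come from getCompilationInfo()):

```javascript
// Format a compilation message as "type at line:pos: message [excerpt]",
// using `offset` and `length` to pull the relevant substring out of the code.
// A lineNum of 0 means the message has no specific location.
function formatCompilationMessage(msg, code) {
  const where = msg.lineNum ? `${msg.lineNum}:${msg.linePos}` : "?";
  const excerpt = msg.length ? ` [${code.substr(msg.offset, msg.length)}]` : "";
  return `${msg.type} at ${where}: ${msg.message}${excerpt}`;
}

// Example with a hand-built object shaped like a GPUCompilationMessage:
const code = "fn main() { retur 1; }";
const msg = {
  type: "error", message: "unknown statement", lineNum: 1, linePos: 13,
  offset: 12, length: 5,
};
```

In a real application, each message in (await shaderModule.getCompilationInfo()).messages could be passed through such a formatter before logging.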
10. Pipelines
A pipeline, be it GPUComputePipeline or GPURenderPipeline,
represents the complete function performed by a combination of the GPU hardware, the driver,
and the user agent, that processes the input data in the shape of bindings and vertex buffers,
and produces some output, like the colors in the output render targets.
Structurally, the pipeline consists of a sequence of programmable stages (shaders) and fixed-function states, such as the blending modes.
Note: Internally, depending on the target platform, the driver may convert some of the fixed-function states into shader code, and link it together with the shaders provided by the user. This linking is one of the reasons the object is created as a whole.
This combination state is created as a single object
(a GPUComputePipeline or GPURenderPipeline)
and switched using one command
(GPUComputePassEncoder.setPipeline() or GPURenderCommandsMixin.setPipeline() respectively).
There are two ways to create pipelines:
immediate pipeline creation
    createComputePipeline() and createRenderPipeline() return a pipeline object which can be used immediately in a pass encoder. When this fails, the pipeline object will be invalid and the call will generate either a validation error or an internal error.
    Note: A handle object is returned immediately, but actual pipeline creation is not synchronous. If pipeline creation takes a long time, this can incur a stall in the device timeline at some point between the creation call and execution of the submit() in which it is first used. The point is unspecified, but most likely to be one of: at creation, at the first usage of the pipeline in setPipeline(), at the corresponding finish() of that GPUCommandEncoder or GPURenderBundleEncoder, or at submit() of that GPUCommandBuffer.
async pipeline creation
    createComputePipelineAsync() and createRenderPipelineAsync() return a Promise which resolves to a pipeline object when creation of the pipeline has completed. When this fails, the Promise rejects with a GPUPipelineError.
GPUPipelineError describes a pipeline creation failure.
[Exposed=(Window, DedicatedWorker), SecureContext, Serializable]
interface GPUPipelineError : DOMException {
    constructor(optional DOMString message = "", GPUPipelineErrorInit options);
    readonly attribute GPUPipelineErrorReason reason;
};

dictionary GPUPipelineErrorInit {
    required GPUPipelineErrorReason reason;
};

enum GPUPipelineErrorReason {
    "validation",
    "internal",
};
GPUPipelineError constructor:
constructor(message, options)
    Arguments:
        message (DOMString, optional, defaulting to ""): Error message of the base DOMException.
        options (GPUPipelineErrorInit, required): Options specific to GPUPipelineError.

GPUPipelineError has the following attributes:
- reason, of type GPUPipelineErrorReason, readonly
    A read-only slot-backed attribute exposing the type of error encountered in pipeline creation as a GPUPipelineErrorReason:
    - "validation": A validation error.
    - "internal": An internal error.
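A typical way to consume these errors is to await the async creation call and branch on the rejected error's reason. The sketch below uses illustrative helper names; only createComputePipelineAsync() and the reason attribute come from the API:

```javascript
// Map a GPUPipelineErrorReason to a developer-facing explanation.
// (Illustrative helper, not part of the WebGPU API.)
function describeReason(reason) {
  return reason === "internal"
    ? "internal error: the pipeline may be too complex for this device"
    : "validation error: the descriptor or shader violates WebGPU rules";
}

// Sketch: async creation rejects with a GPUPipelineError on failure.
async function createComputePipelineOrReport(device, descriptor) {
  try {
    return await device.createComputePipelineAsync(descriptor);
  } catch (e) {
    console.error(`Pipeline creation failed (${describeReason(e.reason)})`);
    return null;
  }
}
```

Unlike immediate creation, which produces an invalid pipeline object and a device error, this path delivers the failure directly to the awaiting caller.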
GPUPipelineError objects are serializable objects.
Their serialization steps, given value and serialized, are:
- Run the DOMException serialization steps given value and serialized.
Their deserialization steps, given serialized and value, are:
- Run the DOMException deserialization steps given serialized and value.
10.1. Base pipelines
enum GPUAutoLayoutMode {
    "auto",
};

dictionary GPUPipelineDescriptorBase : GPUObjectDescriptorBase {
    required (GPUPipelineLayout or GPUAutoLayoutMode) layout;
};
- layout, of type (GPUPipelineLayout or GPUAutoLayoutMode)
    The GPUPipelineLayout for this pipeline, or "auto" to generate the pipeline layout automatically.
    Note: If "auto" is used the pipeline cannot share GPUBindGroups with any other pipelines.
interface mixin GPUPipelineBase {
    [NewObject] GPUBindGroupLayout getBindGroupLayout(unsigned long index);
};
GPUPipelineBase has the following internal slots:
- [[layout]], of type GPUPipelineLayout
    The definition of the layout of resources which can be used with this.
GPUPipelineBase has the following methods:
getBindGroupLayout(index)
    Gets a GPUBindGroupLayout that is compatible with the GPUPipelineBase's GPUBindGroupLayout at index.
    Called on: GPUPipelineBase this.
    Arguments:
        index (unsigned long, required): Index into the pipeline layout’s [[bindGroupLayouts]] sequence.
    Returns: GPUBindGroupLayout
    Content timeline steps:
    - Let layout be a new GPUBindGroupLayout object.
    - Issue the initialization steps on the Device timeline of this.
    - Return layout.
    Device timeline initialization steps:
    - If any of the following conditions are unsatisfied, generate a validation error, make layout invalid, and stop.
        - this is valid.
        - index < the size of this.[[layout]].[[bindGroupLayouts]].
    - Initialize layout so it is a copy of this.[[layout]].[[bindGroupLayouts]][index].
    Note: GPUBindGroupLayout is only ever used by-value, not by-reference, so this is equivalent to returning the same internal object in a new wrapper. A new GPUBindGroupLayout wrapper is returned each time to avoid a round-trip between the Content timeline and the Device timeline.
10.1.1. Default pipeline layout
A GPUPipelineBase object that was created with a layout set to "auto" has a default layout created and used instead.
Note: Default layouts are provided as a convenience for simple pipelines, but use of explicit layouts is recommended in most cases. Bind groups created from default layouts cannot be used with other pipelines, and the structure of the default layout may change when altering shaders, causing unexpected bind group creation errors.
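A sketch of the convenience path described above (not runnable without a GPUDevice; the wrapper function name and "main" entry point are illustrative): create a compute pipeline with layout "auto", then fetch the generated layout for group 0 to build a matching bind group.

```javascript
// Sketch: bind groups created against a default ("auto") layout can only
// be used with the pipeline that generated it.
function createAutoLayoutBindGroup(device, module, buffer) {
  const pipeline = device.createComputePipeline({
    layout: "auto",
    compute: { module, entryPoint: "main" },
  });
  const bindGroup = device.createBindGroup({
    // getBindGroupLayout(0) returns a layout group-equivalent to the
    // default layout generated for @group(0) in the shader.
    layout: pipeline.getBindGroupLayout(0),
    entries: [{ binding: 0, resource: { buffer } }],
  });
  return { pipeline, bindGroup };
}
```

If the shader later gains or loses bindings, the generated layout changes, which is why explicit layouts are recommended for anything beyond simple pipelines.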
To create a default pipeline layout for GPUPipelineBase pipeline,
run the following steps:
- Let groupCount be 0.
- Let groupDescs be a sequence of device.[[limits]].maxBindGroups new GPUBindGroupLayoutDescriptor objects.
- For each groupDesc in groupDescs:
    - Set groupDesc.entries to [].
- For each GPUProgrammableStage stageDesc in the descriptor used to create pipeline:
    - Let shaderStage be the GPUShaderStageFlags for stageDesc.entryPoint in stageDesc.module.
    - For each resource resource statically used by stageDesc:
        - Let group be resource’s "group" decoration.
        - Let binding be resource’s "binding" decoration.
        - Let entry be a new GPUBindGroupLayoutEntry.
        - Set entry.binding to binding.
        - Set entry.visibility to shaderStage.
        - If resource is for a sampler binding:
            - Let samplerLayout be a new GPUSamplerBindingLayout.
            - Set entry.sampler to samplerLayout.
        - If resource is for a comparison sampler binding:
            - Let samplerLayout be a new GPUSamplerBindingLayout.
            - Set samplerLayout.type to "comparison".
            - Set entry.sampler to samplerLayout.
        - If resource is for a buffer binding:
            - Let bufferLayout be a new GPUBufferBindingLayout.
            - Set bufferLayout.minBindingSize to resource’s minimum buffer binding size.
            - If resource is for a read-only storage buffer:
                - Set bufferLayout.type to "read-only-storage".
            - If resource is for a storage buffer:
                - Set bufferLayout.type to "storage".
            - Set entry.buffer to bufferLayout.
        - If resource is for a sampled texture binding:
            - Let textureLayout be a new GPUTextureBindingLayout.
            - If resource is a depth texture binding:
                - Set textureLayout.sampleType to "depth".
            - Else, if the sampled type of resource is:
                f32, and there exists a static use of resource with a textureSample* builtin:
                    Set textureLayout.sampleType to "float".
                f32, otherwise:
                    Set textureLayout.sampleType to "unfilterable-float".
                i32:
                    Set textureLayout.sampleType to "sint".
                u32:
                    Set textureLayout.sampleType to "uint".
            - Set textureLayout.viewDimension to resource’s dimension.
            - If resource is for a multisampled texture:
                - Set textureLayout.multisampled to true.
            - Set entry.texture to textureLayout.
        - If resource is for a storage texture binding:
            - Let storageTextureLayout be a new GPUStorageTextureBindingLayout.
            - Set storageTextureLayout.format to resource’s format.
            - Set storageTextureLayout.viewDimension to resource’s dimension.
            - If the access mode is:
                read:
                    Set storageTextureLayout.access to "read-only".
                write:
                    Set storageTextureLayout.access to "write-only".
                read_write:
                    Set storageTextureLayout.access to "read-write".
            - Set entry.storageTexture to storageTextureLayout.
        - Set groupCount to max(groupCount, group + 1).
        - If groupDescs[group] has an entry previousEntry with binding equal to binding:
            - If entry has different visibility than previousEntry:
                - Add the bits set in entry.visibility into previousEntry.visibility.
            - If resource is for a buffer binding and entry has greater buffer.minBindingSize than previousEntry:
                - Set previousEntry.buffer.minBindingSize to entry.buffer.minBindingSize.
            - If resource is a sampled texture binding and entry has different texture.sampleType than previousEntry, and both entry and previousEntry have texture.sampleType of either "float" or "unfilterable-float":
                - Set previousEntry.texture.sampleType to "float".
            - If any other property is unequal between entry and previousEntry:
                - Return null (which will cause the creation of the pipeline to fail).
            - If resource is a storage texture binding, entry.storageTexture.access is "read-write", previousEntry.storageTexture.access is "write-only", and previousEntry.storageTexture.format is compatible with STORAGE_BINDING and "read-write" according to the § 26.1.1 Plain color formats table:
                - Set previousEntry.storageTexture.access to "read-write".
        - Otherwise:
            - Append entry to groupDescs[group].
-
Let groupLayouts be a new list.
-
For each i from 0 to groupCount - 1, inclusive:
-
Let groupDesc be groupDescs[i].
-
Let bindGroupLayout be the result of calling device.
createBindGroupLayout()(groupDesc). -
Set bindGroupLayout.
[[exclusivePipeline]]to pipeline. -
Append bindGroupLayout to groupLayouts.
-
-
Let desc be a new
GPUPipelineLayoutDescriptor. -
Set desc.
bindGroupLayoutsto groupLayouts. -
Return device.
createPipelineLayout()(desc).
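The entry-merging rules above (union of visibility bits, largest minBindingSize, "float"/"unfilterable-float" collapsing to "float") can be sketched in plain JavaScript. This is a non-normative illustration over simplified entry records, not the normative algorithm:

```javascript
// Hypothetical sketch of how a default pipeline layout merges two
// GPUBindGroupLayoutEntry-like records that share the same binding index.
// Only the visibility, buffer, and texture members are modeled.
function mergeEntries(previousEntry, entry) {
  // Union the shader stage visibility bits (e.g. VERTEX | FRAGMENT).
  previousEntry.visibility |= entry.visibility;

  // For buffer bindings, keep the larger minBindingSize.
  if (previousEntry.buffer && entry.buffer) {
    previousEntry.buffer.minBindingSize = Math.max(
      previousEntry.buffer.minBindingSize,
      entry.buffer.minBindingSize);
  }

  // For sampled textures, differing "float"/"unfilterable-float"
  // sample types merge to "float".
  if (previousEntry.texture && entry.texture) {
    const floatish = ["float", "unfilterable-float"];
    if (previousEntry.texture.sampleType !== entry.texture.sampleType &&
        floatish.includes(previousEntry.texture.sampleType) &&
        floatish.includes(entry.texture.sampleType)) {
      previousEntry.texture.sampleType = "float";
    }
  }
  return previousEntry;
}
```

Any other mismatch between the two entries makes default layout creation fail, as the steps above describe.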
10.1.2. GPUProgrammableStage
A GPUProgrammableStage describes the entry point in the user-provided GPUShaderModule that controls one of the programmable stages of a pipeline.
Entry point names follow the rules defined in WGSL identifier comparison.
dictionary GPUProgrammableStage {
    required GPUShaderModule module;
    USVString entryPoint;
    record<USVString, GPUPipelineConstantValue> constants;
};

typedef double GPUPipelineConstantValue; // May represent WGSL’s bool, f32, i32, u32, and f16 if enabled.
GPUProgrammableStage has the following members:
module, of type GPUShaderModule
  The GPUShaderModule containing the code that this programmable stage will execute.

entryPoint, of type USVString
  The name of the function in module that this stage will use to perform its work.

constants, of type record<USVString, GPUPipelineConstantValue>
  Specifies the values of pipeline-overridable constants in the shader module module.

  Each such pipeline-overridable constant is uniquely identified by a single pipeline-overridable constant identifier string, representing the pipeline constant ID of the constant if its declaration specifies one, and otherwise the constant’s identifier name.

  The key of each key-value pair must equal the identifier string of one such constant, with the comparison performed according to the rules for WGSL identifier comparison. When the pipeline is executed, that constant will have the specified value.

  Values are specified as GPUPipelineConstantValue, which is a double. They are converted to the WGSL type of the pipeline-overridable constant (bool/i32/u32/f32/f16). If conversion fails, a validation error is generated.

  Pipeline-overridable constants defined in WGSL:

    @id(0)    override has_point_light: bool = true;  // Algorithmic control.
    @id(1200) override specular_param: f32 = 2.3;     // Numeric control.
    @id(1300) override gain: f32;                     // Must be overridden.
              override width: f32 = 0.0;              // Specified at the API level
                                                      // using the name "width".
              override depth: f32;                    // Specified at the API level
                                                      // using the name "depth".
                                                      // Must be overridden.
              override height = 2 * depth;            // The default value
                                                      // (if not set at the API level)
                                                      // depends on another
                                                      // overridable constant.

  Corresponding JavaScript code, providing only the overrides which are required (have no defaults):

    {
      // ...
      constants: {
        1300: 2.0,  // "gain"
        depth: -1,  // "depth"
      }
    }

  Corresponding JavaScript code, overriding all constants:

    {
      // ...
      constants: {
        0: false,     // "has_point_light"
        1200: 3.0,    // "specular_param"
        1300: 2.0,    // "gain"
        width: 20,    // "width"
        depth: -1,    // "depth"
        height: 15,   // "height"
      }
    }
Arguments: GPUShaderStage stage, GPUProgrammableStage descriptor

- If descriptor.entryPoint is provided:
  - Return the single entry point in descriptor.module with a name equalling descriptor.entryPoint.
- Otherwise:
  - Return the single entry point in descriptor.module with a shader stage equalling stage.
Arguments:
- GPUShaderStage stage
- GPUProgrammableStage descriptor
- GPUPipelineLayout layout

Return true if all of the following conditions are met, and false otherwise:
- descriptor.module must be a valid GPUShaderModule.
- If descriptor.entryPoint is provided:
  - descriptor.module must contain exactly one entry point with a name equalling descriptor.entryPoint, and its shader stage must equal stage.
  Otherwise:
  - descriptor.module must contain exactly one entry point for shader stage stage.
- For each binding that is statically used by descriptor:
  - validating shader binding(binding, layout) must return true.
- For each texture and sampler statically used together in a texture sampling call in descriptor:
  - Let texture be the GPUBindGroupLayoutEntry corresponding to the sampled texture in the call.
  - Let sampler be the GPUBindGroupLayoutEntry corresponding to the used sampler in the call.
  - If sampler.type is "filtering", then texture.sampleType must be "float".
  Note: "comparison" samplers can also only be used with "depth" textures, because they are the only texture type that can be bound to WGSL texture_depth_* bindings.
- For each key → value in descriptor.constants:
  - key must equal the pipeline-overridable constant identifier string of some pipeline-overridable constant defined in the shader module descriptor.module, by the rules defined in WGSL identifier comparison. Let the type of that constant be T.
  - Converting the IDL value value to WGSL type T must not throw a TypeError.
- For each pipeline-overridable constant identifier string key which is statically used by descriptor:
  - If the pipeline-overridable constant identified by key does not have a default value, descriptor.constants must contain key.
- Pipeline-creation program errors must not result from the rules of the [WGSL] specification.
Arguments:
- shader binding declaration variable, a module-scope variable declaration reflected from a shader module
- GPUPipelineLayout layout

Let bindGroup be the bind group index, and bindIndex be the binding index, of the shader binding declaration variable.

Return true if all of the following conditions are satisfied:
- layout.[[bindGroupLayouts]][bindGroup] contains a GPUBindGroupLayoutEntry entry whose entry.binding == bindIndex.
- If the defined binding member for entry is:
  buffer
    If entry.buffer.type is:
    "uniform"
      variable is declared with address space uniform.
    "storage"
      variable is declared with address space storage and access mode read_write.
    "read-only-storage"
      variable is declared with address space storage and access mode read.

    If entry.buffer.minBindingSize is not 0, then it must be at least the minimum buffer binding size for the associated buffer binding variable in the shader.
  sampler
    If entry.sampler.type is:
    "filtering" or "non-filtering"
      variable has type sampler.
    "comparison"
      variable has type sampler_comparison.
  texture
    If, and only if, entry.texture.multisampled is true, variable has type texture_multisampled_2d<T> or texture_depth_multisampled_2d.

    If entry.texture.sampleType is:
    "float", "unfilterable-float", "sint", or "uint"
      variable has one of the types:
      - texture_1d<T>
      - texture_2d<T>
      - texture_2d_array<T>
      - texture_cube<T>
      - texture_cube_array<T>
      - texture_3d<T>
      - texture_multisampled_2d<T>

      If entry.texture.sampleType is:
      "float" or "unfilterable-float"
        The sampled type T is f32.
      "sint"
        The sampled type T is i32.
      "uint"
        The sampled type T is u32.
    "depth"
      variable has one of the types:
      - texture_2d<T>
      - texture_2d_array<T>
      - texture_cube<T>
      - texture_cube_array<T>
      - texture_multisampled_2d<T>
      - texture_depth_2d
      - texture_depth_2d_array
      - texture_depth_cube
      - texture_depth_cube_array
      - texture_depth_multisampled_2d

      where the sampled type T is f32.

    If entry.texture.viewDimension is:
    "1d"
      variable has type texture_1d<T>.
    "2d"
      variable has type texture_2d<T> or texture_multisampled_2d<T>.
    "2d-array"
      variable has type texture_2d_array<T>.
    "cube"
      variable has type texture_cube<T>.
    "cube-array"
      variable has type texture_cube_array<T>.
    "3d"
      variable has type texture_3d<T>.
  storageTexture
    If entry.storageTexture.viewDimension is:
    "1d"
      variable has type texture_storage_1d<T, A>.
    "2d"
      variable has type texture_storage_2d<T, A>.
    "2d-array"
      variable has type texture_storage_2d_array<T, A>.
    "3d"
      variable has type texture_storage_3d<T, A>.

    If entry.storageTexture.access is:
    "write-only"
      The access mode A is write.
    "read-only"
      The access mode A is read.
    "read-write"
      The access mode A is read_write or write.

    The texel format T equals entry.storageTexture.format.
- Let T be the store type of var.
- If T is a runtime-sized array, or contains a runtime-sized array, replace that array<E> with array<E, 1>.
  Note: This ensures there’s always enough memory for one element, which allows array indices to be clamped to the length of the array, resulting in an in-memory access.
- Return SizeOf(T).

Note: Enforcing this lower bound ensures reads and writes via the buffer variable only access memory locations within the bound region of the buffer.
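As a non-normative illustration of the steps above, the minimum binding size for a structure ending in a runtime-sized array reserves space for exactly one trailing element. The offsets and alignments here are assumed to be precomputed according to WGSL layout rules, which this sketch does not reproduce:

```javascript
// Hypothetical sketch: SizeOf for a struct whose last member is a
// runtime-sized array, after replacing array<E> with array<E, 1>.
// lastMemberOffset, elementSize, and structAlign come from WGSL
// layout rules and are assumed inputs here.
function roundUp(k, n) {
  // Smallest multiple of k that is >= n.
  return Math.ceil(n / k) * k;
}

function minBindingSize(lastMemberOffset, elementSize, structAlign) {
  // One element of the runtime-sized array, then round the struct
  // size up to its alignment.
  return roundUp(structAlign, lastMemberOffset + elementSize);
}
```

For example, struct { v: vec3<f32>, arr: array<f32> } has arr at offset 12, element size 4, and struct alignment 16, giving a minimum binding size of roundUp(16, 16) = 16 bytes.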
GPUProgrammableStage if it is present in the interface of the shader stage of the specified entryPoint, in the specified shader module.

10.2. GPUComputePipeline
A GPUComputePipeline is a kind of pipeline that controls the compute shader stage,
and can be used in GPUComputePassEncoder.
Compute inputs and outputs are all contained in the bindings,
according to the given GPUPipelineLayout.
The outputs correspond to buffer bindings with a type of "storage" and storageTexture bindings with an access of "write-only" or "read-write".
Stages of a compute pipeline:
- Compute shader

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUComputePipeline {
};
GPUComputePipeline includes GPUObjectBase;
GPUComputePipeline includes GPUPipelineBase;
10.2.1. Compute Pipeline Creation
A GPUComputePipelineDescriptor describes a compute pipeline. See § 23.2 Computing for additional details.
dictionary GPUComputePipelineDescriptor : GPUPipelineDescriptorBase {
    required GPUProgrammableStage compute;
};
GPUComputePipelineDescriptor has the following members:
compute, of type GPUProgrammableStage
  Describes the compute shader entry point of the pipeline.

createComputePipeline(descriptor)
  Creates a GPUComputePipeline using immediate pipeline creation.

  Called on: GPUDevice this.

  Arguments:
  Arguments for the GPUDevice.createComputePipeline(descriptor) method.
  Parameter  | Type                         | Nullable | Optional | Description
  descriptor | GPUComputePipelineDescriptor | ✘        | ✘        | Description of the GPUComputePipeline to create.
  Returns: GPUComputePipeline

  Content timeline steps:
  - Let pipeline be a new GPUComputePipeline object.
  - Issue the initialization steps on the Device timeline of this.
  - Return pipeline.

  Device timeline initialization steps:
  - Let layout be a new default pipeline layout for pipeline if descriptor.layout is "auto", and descriptor.layout otherwise.
  - If any of the following conditions are unsatisfied, generate a validation error, make pipeline invalid, and stop.
    - layout must be valid to use with this.
    - validating GPUProgrammableStage(COMPUTE, descriptor.compute, layout) must succeed.
    - Let workgroupStorageUsed be the sum of roundUp(16, SizeOf(T)) over each type T of all variables with address space "workgroup" statically used by descriptor.compute. workgroupStorageUsed must be ≤ device.limits.maxComputeWorkgroupStorageSize.
    - descriptor.compute must use ≤ device.limits.maxComputeInvocationsPerWorkgroup per workgroup.
    - Each component of descriptor.compute’s workgroup_size attribute must be ≤ the corresponding component in [device.limits.maxComputeWorkgroupSizeX, device.limits.maxComputeWorkgroupSizeY, device.limits.maxComputeWorkgroupSizeZ].
  - If any pipeline-creation uncategorized errors result from the implementation of pipeline creation, generate an internal error, make pipeline invalid, and stop.
    Note: Even if the implementation detected uncategorized errors in shader module creation, the error is surfaced here.
  - Set pipeline.[[layout]] to layout.
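The workgroupStorageUsed computation in the validation steps above can be sketched as plain JavaScript; the per-variable SizeOf(T) values are assumed to be known inputs:

```javascript
// Non-normative sketch of the workgroup storage accounting used in
// compute pipeline validation. `sizes` holds SizeOf(T), in bytes, for
// each address-space-"workgroup" variable statically used by the entry point.
function roundUp(k, n) {
  // Smallest multiple of k that is >= n.
  return Math.ceil(n / k) * k;
}

function workgroupStorageUsed(sizes) {
  return sizes.reduce((sum, s) => sum + roundUp(16, s), 0);
}
```

For example, variables of 4, 20, and 64 bytes consume 16 + 32 + 64 = 112 bytes, which must not exceed device.limits.maxComputeWorkgroupStorageSize.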
createComputePipelineAsync(descriptor)
  Creates a GPUComputePipeline using async pipeline creation. The returned Promise resolves when the created pipeline is ready to be used without additional delay.

  If pipeline creation fails, the returned Promise rejects with a GPUPipelineError.

  Note: Use of this method is preferred whenever possible, as it prevents blocking the queue timeline work on pipeline compilation.

  Called on: GPUDevice this.

  Arguments:
  Arguments for the GPUDevice.createComputePipelineAsync(descriptor) method.
  Parameter  | Type                         | Nullable | Optional | Description
  descriptor | GPUComputePipelineDescriptor | ✘        | ✘        | Description of the GPUComputePipeline to create.

  Returns: Promise<GPUComputePipeline>

  Content timeline steps:
  - Let contentTimeline be the current Content timeline.
  - Let promise be a new promise.
  - Issue the initialization steps on the Device timeline of this.
  - Return promise.

  Device timeline initialization steps:
  - Let pipeline be a new GPUComputePipeline created as if this.createComputePipeline() was called with descriptor.
  - When pipeline is ready to be used or has been made invalid, issue the subsequent steps on contentTimeline.

  Content timeline steps:
  - If pipeline is...
    valid
      Resolve promise with pipeline.
    invalid due to an internal error
      Reject promise with a GPUPipelineError with reason "internal".
    invalid due to a validation error
      Reject promise with a GPUPipelineError with reason "validation".
Creating a simple GPUComputePipeline:

  const computePipeline = gpuDevice.createComputePipeline({
    layout: pipelineLayout,
    compute: {
      module: computeShaderModule,
      entryPoint: 'computeMain',
    }
  });
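An asynchronous variant of the same pipeline might look like the following sketch. gpuDevice, pipelineLayout, and computeShaderModule are assumed to exist as in the example above; the error handling reflects the GPUPipelineError reasons described earlier:

```javascript
// Non-normative sketch: same pipeline as above, created asynchronously.
// A rejected promise carries a GPUPipelineError whose reason is
// "validation" or "internal".
async function createComputePipelineOrNull(gpuDevice, pipelineLayout, computeShaderModule) {
  try {
    return await gpuDevice.createComputePipelineAsync({
      layout: pipelineLayout,
      compute: {
        module: computeShaderModule,
        entryPoint: 'computeMain',
      },
    });
  } catch (e) {
    console.error(`Pipeline creation failed (${e.reason}): ${e.message}`);
    return null;
  }
}
```

Because compilation happens off the queue timeline, this form avoids stalling later submits on pipeline creation.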
10.3. GPURenderPipeline
A GPURenderPipeline is a kind of pipeline that controls the vertex
and fragment shader stages, and can be used in GPURenderPassEncoder as well as GPURenderBundleEncoder.
Render pipeline inputs are:
- bindings, according to the given GPUPipelineLayout
- vertex and index buffers, described by GPUVertexState
- the color attachments, described by GPUColorTargetState
- optionally, the depth-stencil attachment, described by GPUDepthStencilState

Render pipeline outputs are:
- storageTexture bindings with an access of "write-only" or "read-write"
- the color attachments, described by GPUColorTargetState
- optionally, the depth-stencil attachment, described by GPUDepthStencilState
A render pipeline is comprised of the following render stages:
- Vertex fetch, controlled by GPUVertexState.buffers
- Vertex shader, controlled by GPUVertexState
- Primitive assembly, controlled by GPUPrimitiveState
- Rasterization, controlled by GPUPrimitiveState, GPUDepthStencilState, and GPUMultisampleState
- Fragment shader, controlled by GPUFragmentState
- Stencil test and operation, controlled by GPUDepthStencilState
- Depth test and write, controlled by GPUDepthStencilState
- Output merging, controlled by GPUFragmentState.targets
[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPURenderPipeline {
};
GPURenderPipeline includes GPUObjectBase;
GPURenderPipeline includes GPUPipelineBase;

GPURenderPipeline has the following internal slots:

[[descriptor]], of type GPURenderPipelineDescriptor
  The GPURenderPipelineDescriptor describing this pipeline.
  All optional fields of GPURenderPipelineDescriptor are defined.

[[writesDepth]], of type boolean
  True if the pipeline writes to the depth component of the depth/stencil attachment.

[[writesStencil]], of type boolean
  True if the pipeline writes to the stencil component of the depth/stencil attachment.
10.3.1. Render Pipeline Creation
A GPURenderPipelineDescriptor describes a render pipeline by configuring each
of the render stages. See § 23.3 Rendering for additional details.
dictionary GPURenderPipelineDescriptor : GPUPipelineDescriptorBase {
    required GPUVertexState vertex;
    GPUPrimitiveState primitive = {};
    GPUDepthStencilState depthStencil;
    GPUMultisampleState multisample = {};
    GPUFragmentState fragment;
};
GPURenderPipelineDescriptor has the following members:
vertex, of type GPUVertexState
  Describes the vertex shader entry point of the pipeline and its input buffer layouts.

primitive, of type GPUPrimitiveState, defaulting to {}
  Describes the primitive-related properties of the pipeline.

depthStencil, of type GPUDepthStencilState
  Describes the optional depth-stencil properties, including the testing, operations, and bias.

multisample, of type GPUMultisampleState, defaulting to {}
  Describes the multi-sampling properties of the pipeline.

fragment, of type GPUFragmentState
  Describes the fragment shader entry point of the pipeline and its output colors. If not provided, the § 23.3.8 No Color Output mode is enabled.
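Combining these members, a minimal render pipeline descriptor might look like the following sketch. The entry point names and the 'bgra8unorm' color format are illustrative assumptions, not requirements; primitive and multisample are left at their {} defaults:

```javascript
// Non-normative sketch of a GPURenderPipelineDescriptor. The shader
// module and the target format are assumptions for illustration.
function makeRenderPipelineDescriptor(shaderModule) {
  return {
    layout: 'auto',
    vertex: {
      module: shaderModule,        // assumed GPUShaderModule
      entryPoint: 'vertexMain',    // assumed entry point name
    },
    fragment: {
      module: shaderModule,
      entryPoint: 'fragmentMain',  // assumed entry point name
      targets: [{ format: 'bgra8unorm' }],  // assumed attachment format
    },
  };
}
// Later: const pipeline = gpuDevice.createRenderPipeline(makeRenderPipelineDescriptor(module));
```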
createRenderPipeline(descriptor)
  Creates a GPURenderPipeline using immediate pipeline creation.

  Called on: GPUDevice this.

  Arguments:
  Arguments for the GPUDevice.createRenderPipeline(descriptor) method.
  Parameter  | Type                        | Nullable | Optional | Description
  descriptor | GPURenderPipelineDescriptor | ✘        | ✘        | Description of the GPURenderPipeline to create.

  Returns: GPURenderPipeline

  Content timeline steps:
  - If descriptor.fragment is provided:
    - For each non-null colorState of descriptor.fragment.targets:
      - ? Validate texture format required features of colorState.format with this.[[device]].
  - If descriptor.depthStencil is provided:
    - ? Validate texture format required features of descriptor.depthStencil.format with this.[[device]].
  - Let pipeline be a new GPURenderPipeline object.
  - Issue the initialization steps on the Device timeline of this.
  - Return pipeline.
  Device timeline initialization steps:
  - Let layout be a new default pipeline layout for pipeline if descriptor.layout is "auto", and descriptor.layout otherwise.
  - If any of the following conditions are unsatisfied, generate a validation error, make pipeline invalid, and stop.
    - layout is valid to use with this.
    - validating GPURenderPipelineDescriptor(descriptor, layout, this) succeeds.
    - layout.[[bindGroupLayouts]].length + vertexBufferCount is ≤ this.[[device]].[[limits]].maxBindGroupsPlusVertexBuffers, where vertexBufferCount is the maximum index in descriptor.vertex.buffers that is not undefined.
  - If any pipeline-creation uncategorized errors result from the implementation of pipeline creation, generate an internal error, make pipeline invalid, and stop.
    Note: Even if the implementation detected uncategorized errors in shader module creation, the error is surfaced here.
  - Set pipeline.[[descriptor]] to descriptor.
  - Set pipeline.[[writesDepth]] to false.
  - Set pipeline.[[writesStencil]] to false.
  - Let depthStencil be descriptor.depthStencil.
  - If depthStencil is not null:
    - Set pipeline.[[writesDepth]] to depthStencil.depthWriteEnabled.
    - If depthStencil.stencilWriteMask is not 0:
      - Let stencilFront be depthStencil.stencilFront.
      - Let stencilBack be depthStencil.stencilBack.
      - Let cullMode be descriptor.primitive.cullMode.
      - If cullMode is not "front", and any of stencilFront.passOp, stencilFront.depthFailOp, or stencilFront.failOp is not "keep":
        - Set pipeline.[[writesStencil]] to true.
      - If cullMode is not "back", and any of stencilBack.passOp, stencilBack.depthFailOp, or stencilBack.failOp is not "keep":
        - Set pipeline.[[writesStencil]] to true.
  - Set pipeline.[[layout]] to layout.
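The derivation of [[writesDepth]] and [[writesStencil]] above can be sketched in plain JavaScript over simplified records. This is a non-normative illustration; the stencil face states spell out every field explicitly rather than relying on dictionary defaults:

```javascript
// Non-normative sketch of how a render pipeline derives its
// [[writesDepth]] and [[writesStencil]] slots from depthStencil state.
function writesDepthStencil(depthStencil, cullMode) {
  let writesDepth = false;
  let writesStencil = false;
  if (depthStencil) {
    writesDepth = depthStencil.depthWriteEnabled;
    if (depthStencil.stencilWriteMask !== 0) {
      // A face writes stencil if any of its operations is not "keep".
      const touches = (face) =>
        [face.passOp, face.depthFailOp, face.failOp].some((op) => op !== 'keep');
      // Culled faces cannot write stencil, so skip the culled side.
      if (cullMode !== 'front' && touches(depthStencil.stencilFront)) writesStencil = true;
      if (cullMode !== 'back' && touches(depthStencil.stencilBack)) writesStencil = true;
    }
  }
  return { writesDepth, writesStencil };
}
```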
createRenderPipelineAsync(descriptor)
  Creates a GPURenderPipeline using async pipeline creation. The returned Promise resolves when the created pipeline is ready to be used without additional delay.

  If pipeline creation fails, the returned Promise rejects with a GPUPipelineError.

  Note: Use of this method is preferred whenever possible, as it prevents blocking the queue timeline work on pipeline compilation.

  Called on: GPUDevice this.

  Arguments:
  Arguments for the GPUDevice.createRenderPipelineAsync(descriptor) method.
  Parameter  | Type                        | Nullable | Optional | Description
  descriptor | GPURenderPipelineDescriptor | ✘        | ✘        | Description of the GPURenderPipeline to create.

  Returns: Promise<GPURenderPipeline>

  Content timeline steps:
  - Let contentTimeline be the current Content timeline.
  - Let promise be a new promise.
  - Issue the initialization steps on the Device timeline of this.
  - Return promise.

  Device timeline initialization steps:
  - Let pipeline be a new GPURenderPipeline created as if this.createRenderPipeline() was called with descriptor.
  - When pipeline is ready to be used or has been made invalid, issue the subsequent steps on contentTimeline.

  Content timeline steps:
  - If pipeline is...
    valid
      Resolve promise with pipeline.
    invalid due to an internal error
      Reject promise with a GPUPipelineError with reason "internal".
    invalid due to a validation error
      Reject promise with a GPUPipelineError with reason "validation".
Arguments:
- GPURenderPipelineDescriptor descriptor
- GPUPipelineLayout layout
- GPUDevice device

Return true if all of the following conditions are satisfied:
- validating GPUProgrammableStage(VERTEX, descriptor.vertex, layout) succeeds.
- validating GPUVertexState(device, descriptor.vertex, descriptor.vertex) succeeds.
- If descriptor.fragment is provided:
  - validating GPUProgrammableStage(FRAGMENT, descriptor.fragment, layout) succeeds.
  - validating GPUFragmentState(device, descriptor.fragment) succeeds.
  - If the sample_mask builtin is a shader stage output of descriptor.fragment:
    - descriptor.multisample.alphaToCoverageEnabled is false.
  - If the frag_depth builtin is a shader stage output of descriptor.fragment:
descriptor.
<a data-link-type="idl" href="#dom-gpurenderpip
-
-