
onProcessGpu: ({ framework, frameStartResult })


onProcessGpu() is called to start GPU processing.


framework: {dispatchEvent(eventName, detail)} - Emits a named event with the supplied detail.
frameStartResult: {cameraTexture, computeTexture, GLctx, computeCtx, textureWidth, textureHeight, orientation, videoTime, repeatFrame}

The frameStartResult parameter has the following properties:

cameraTexture - The drawing canvas's WebGLTexture containing camera feed data.
computeTexture - The compute canvas's WebGLTexture containing camera feed data.
GLctx - The drawing canvas's WebGLRenderingContext or WebGL2RenderingContext.
computeCtx - The compute canvas's WebGLRenderingContext or WebGL2RenderingContext.
textureWidth - The width (in pixels) of the camera feed texture.
textureHeight - The height (in pixels) of the camera feed texture.
orientation - The rotation of the UI from portrait, in degrees (-90, 0, 90, or 180).
videoTime - The timestamp of this video frame.
repeatFrame - True if the camera feed has not updated since the last call.
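To illustrate the framework parameter, the sketch below emits an event whenever the camera feed repeats a frame. The module name 'framemonitor' and the event name 'framerepeated' are illustrative, not part of the API.

```javascript
// Hypothetical module: uses framework.dispatchEvent to notify listeners
// whenever the camera feed has not updated since the previous frame.
const frameMonitorModule = {
  name: 'framemonitor',
  onProcessGpu: ({framework, frameStartResult}) => {
    const {repeatFrame, videoTime} = frameStartResult
    if (repeatFrame) {
      // Listeners subscribed to 'framerepeated' receive {videoTime} as the detail.
      framework.dispatchEvent('framerepeated', {videoTime})
    }
  },
}

// In an app, register it with: XR8.addCameraPipelineModule(frameMonitorModule)
```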


Any data that you wish to provide to onProcessCpu and onUpdate should be returned from onProcessGpu. It will be provided to those methods as processGpuResult.modulename, where modulename is the name of this pipeline module.


```javascript
XR8.addCameraPipelineModule({
  name: 'mycamerapipelinemodule',
  onProcessGpu: ({frameStartResult}) => {
    const {cameraTexture, GLctx, textureWidth, textureHeight} = frameStartResult

    // Skip processing if the camera texture isn't ready yet.
    if (!cameraTexture.name) {
      console.error("[index] Camera texture does not have a name")
      return
    }

    // Save the GL state so it can be restored after processing.
    const restoreParams = XR8.GlTextureRenderer.getGLctxParameters(GLctx, [GLctx.TEXTURE0])
    // Do relevant GPU processing here
    XR8.GlTextureRenderer.setGLctxParameters(GLctx, restoreParams)

    // These fields will be provided to onProcessCpu and onUpdate
    return {gpuDataA, gpuDataB}
  },
})
```
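To show how the returned fields are consumed, here is a minimal sketch of a companion module. The module name 'myconsumermodule' and the pass-through of gpuDataA and gpuDataB are illustrative assumptions, not prescribed by the API.

```javascript
// Hypothetical companion module: onProcessCpu receives the object returned by
// the onProcessGpu above under processGpuResult.mycamerapipelinemodule.
const consumerModule = {
  name: 'myconsumermodule',
  onProcessCpu: ({processGpuResult}) => {
    const {gpuDataA, gpuDataB} = processGpuResult.mycamerapipelinemodule || {}
    // Use the GPU results for CPU-side work here; anything returned is
    // provided to onUpdate as processCpuResult.myconsumermodule.
    return {gpuDataA, gpuDataB}
  },
}

// In an app, register it with: XR8.addCameraPipelineModule(consumerModule)
```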