GpuAccelerationConfig.Builder

public static class GpuAccelerationConfig.Builder extends Object

Builder class for GpuAccelerationConfig.

Public Constructor Summary

Builder()
Creates the GPU acceleration config builder.

Public Method Summary

GpuAccelerationConfig
build()
Builds the GPU acceleration config.
GpuAccelerationConfig.Builder
setCacheDirectory(String value)
Sets the directory to use for serialization.
GpuAccelerationConfig.Builder
setEnableQuantizedInference(boolean value)
Enables inference on quantized models with the delegate.
GpuAccelerationConfig.Builder
setForceBackend(GpuAccelerationConfig.GpuBackend value)
Sets GPU backend to select.
GpuAccelerationConfig.Builder
setInferencePreference(GpuAccelerationConfig.GpuInferenceUsage value)
Sets GPU inference preference for initialization time vs. inference time.
GpuAccelerationConfig.Builder
setInferencePriority1(GpuAccelerationConfig.GpuInferencePriority value)
Sets inference priority(1).
GpuAccelerationConfig.Builder
setInferencePriority2(GpuAccelerationConfig.GpuInferencePriority value)
Sets inference priority(2).
GpuAccelerationConfig.Builder
setInferencePriority3(GpuAccelerationConfig.GpuInferencePriority value)
Sets inference priority(3).
GpuAccelerationConfig.Builder
setModelToken(String value)
Sets the unique token string that acts as a 'namespace' for all serialization entries.

Inherited Method Summary

Public Constructors

public Builder()

Creates the GPU acceleration config builder.

Public Methods

public GpuAccelerationConfig build()

Builds the GPU acceleration config.
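A minimal usage sketch of the builder (the import path and surrounding setup are assumptions, not confirmed by this page):

```java
// Sketch: build a GPU acceleration config with default settings.
// Assumes GpuAccelerationConfig and its nested Builder are on the classpath.
GpuAccelerationConfig config = new GpuAccelerationConfig.Builder().build();
```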

public GpuAccelerationConfig.Builder setCacheDirectory(String value)

Sets the directory to use for serialization. Whether serialization actually happens depends on the backend used and the validity of this directory.

NOTE: Users should ensure that this directory is private to the app to avoid data access issues.

public GpuAccelerationConfig.Builder setEnableQuantizedInference(boolean value)

Enables inference on quantized models with the delegate. Defaults to true.

public GpuAccelerationConfig.Builder setForceBackend(GpuAccelerationConfig.GpuBackend value)

Sets the GPU backend to select. The default behaviour on Android is to try OpenCL and fall back to OpenGL if OpenCL is not available.

public GpuAccelerationConfig.Builder setInferencePreference(GpuAccelerationConfig.GpuInferenceUsage value)

Sets GPU inference preference for initialization time vs. inference time.

public GpuAccelerationConfig.Builder setInferencePriority1(GpuAccelerationConfig.GpuInferencePriority value)

Sets inference priority(1). Ordered priorities provide better control over desired semantics, where priority(n) is more important than priority(n+1). See GpuAccelerationConfig.GpuInferencePriority for more details.

public GpuAccelerationConfig.Builder setInferencePriority2(GpuAccelerationConfig.GpuInferencePriority value)

Sets inference priority(2). Ordered priorities provide better control over desired semantics, where priority(n) is more important than priority(n+1). See GpuAccelerationConfig.GpuInferencePriority for more details.

public GpuAccelerationConfig.Builder setInferencePriority3(GpuAccelerationConfig.GpuInferencePriority value)

Sets inference priority(3). Ordered priorities provide better control over desired semantics, where priority(n) is more important than priority(n+1). See GpuAccelerationConfig.GpuInferencePriority for more details.
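The ordering of the three priority setters can be sketched as a helper; the actual GpuInferencePriority constants are not listed on this page, so they are taken as parameters rather than guessed:

```java
// Sketch: priority(1) outranks priority(2), which outranks priority(3).
// Enum values are supplied by the caller; consult
// GpuAccelerationConfig.GpuInferencePriority for the available constants.
static GpuAccelerationConfig withPriorities(
    GpuAccelerationConfig.GpuInferencePriority first,
    GpuAccelerationConfig.GpuInferencePriority second,
    GpuAccelerationConfig.GpuInferencePriority third) {
  return new GpuAccelerationConfig.Builder()
      .setInferencePriority1(first)   // most important
      .setInferencePriority2(second)
      .setInferencePriority3(third)   // least important
      .build();
}
```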

public GpuAccelerationConfig.Builder setModelToken(String value)

Sets the unique token string that acts as a 'namespace' for all serialization entries. Should be unique to a particular model (graph & constants). For an example of how to generate this from a TFLite model, see StrFingerprint().
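Putting the serialization-related setters together, a hedged sketch (the `context` and `modelToken` variables, and the choice of `getCodeCacheDir()` as an app-private directory, are illustrative assumptions):

```java
// Sketch: enable on-disk serialization of compiled GPU programs.
// The cache directory should be private to the app (see the note on
// setCacheDirectory); the token should uniquely identify the model
// (graph & constants), e.g. computed via StrFingerprint().
GpuAccelerationConfig config = new GpuAccelerationConfig.Builder()
    .setCacheDirectory(context.getCodeCacheDir().getAbsolutePath())
    .setModelToken(modelToken)
    .build();
```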