GPU types are opaque handles used in compute shader declarations:
| Type | Description |
|---|---|
| `Buffer<T>` | Storage buffer (read/write GPU memory) |
| `Uniform<T>` | Uniform buffer (read-only, shared across invocations) |
| `Image<T>` | Texture/image for read/write operations |
All three are represented as i64 in host code.
Parser note: Buffer[T] with square brackets is also accepted and converted to Buffer<T>.
Mark a function as a GPU compute shader:

```seen
@compute(workgroup_size = 64)
fun vector_add(a: Buffer<Float>, b: Buffer<Float>, out: Buffer<Float>) {
    let idx = global_invocation_id.x
    out[idx] = a[idx] + b[idx]
}
```
Inside @compute functions:
| Variable | Type | Description |
|---|---|---|
| `global_invocation_id.x` | Int | Global X index |
| `global_invocation_id.y` | Int | Global Y index |
| `global_invocation_id.z` | Int | Global Z index |
| `local_invocation_id.x` | Int | Local X index within the workgroup |
| `workgroup_id.x` | Int | Workgroup X index |
The compilation pipeline for a Seen @compute function:

```
Seen @compute function
  → GLSL #version 450 (generated by glsl_gen.seen)
  → glslc (Vulkan SDK shader compiler)
  → SPIR-V binary (.spv)
  → Vulkan runtime dispatch
```
The generated GLSL can be inspected with:

```sh
seen build app.seen --emit-glsl
```
Each @compute function generates a dispatch wrapper:
```seen
// Generated: fnName_gpu_dispatch(gx, gy, gz, bufs, nbufs)
vector_add_gpu_dispatch(num_groups_x, 1, 1, buffer_handles, 3)
```
The Vulkan runtime (seen_gpu.c/seen_gpu.h) provides:
| Function | Description |
|---|---|
| `seen_gpu_init()` | Initialize Vulkan instance and device |
| `seen_gpu_create_buffer(device, data, size)` | Create a GPU buffer |
| `seen_gpu_write_buffer(buffer, data, size)` | Write to a GPU buffer |
| `seen_gpu_read_buffer(buffer, size)` | Read from a GPU buffer |
| `seen_gpu_create_pipeline(device, spirv, spirv_size)` | Create a compute pipeline |
| `seen_gpu_dispatch(pipeline, gx, gy, gz)` | Dispatch compute workgroups |
| `seen_gpu_barrier(device)` | Insert a memory barrier |
| `seen_gpu_destroy_buffer(buffer)` | Destroy a buffer |
| `seen_gpu_destroy_pipeline(pipeline)` | Destroy a pipeline |
| `seen_gpu_destroy(device)` | Clean up Vulkan resources |
| `seen_gpu_device_name(device)` | Get the GPU device name |
| `seen_gpu_device_memory(device)` | Get the GPU memory size |
| `seen_gpu_fence_create(device)` | Create a fence |
| `seen_gpu_fence_wait(fence)` | Wait for a GPU fence |
| `seen_gpu_fence_destroy(fence)` | Destroy a fence |
| `seen_gpu_timestamp(device)` | Get a GPU timestamp |
The compiler tracks GPU shaders via global variables:
| Variable | Purpose |
|---|---|
| `g_gpuShaderNames` | Pipe-delimited shader names |
| `g_gpuShaderStages` | Shader stages (compute/vertex/fragment) |
| `g_gpuShaderCount` | Number of shaders |
| `g_gpuWorkgroups` | Workgroup dimensions |
| `g_gpuBindings` | Buffer binding information |
A complete example: a 256×256 matrix multiplication shader and the host code that dispatches it.

```seen
@compute(workgroup_size = 16)
fun matmul(a: Buffer<Float>, b: Buffer<Float>, c: Buffer<Float>) {
    let row = global_invocation_id.y
    let col = global_invocation_id.x
    var sum = 0.0
    var k = 0
    while k < 256 {
        sum = sum + a[row * 256 + k] * b[k * 256 + col]
        k = k + 1
    }
    c[row * 256 + col] = sum
}

fun main() {
    let device = seen_gpu_init()
    // Create and fill buffers...
    let a_buf = seen_gpu_create_buffer(device, a_data, size)
    let b_buf = seen_gpu_create_buffer(device, b_data, size)
    let c_buf = seen_gpu_create_buffer(device, null, size)
    // Dispatch 16x16 workgroups
    matmul_gpu_dispatch(16, 16, 1, buffers, 3)
    // Read result
    let result = seen_gpu_read_buffer(c_buf, size)
    seen_gpu_destroy(device)
}
```
Building GPU code requires the glslc compiler from the Vulkan SDK and links with `-lvulkan` (added automatically when GPU features are detected).

In addition to @compute, the following decorators are recognized:
| Decorator | Stage |
|---|---|
| `@compute` | Compute shader |
| `@vertex` | Vertex shader |
| `@fragment` | Fragment shader |
For web targets, WebGPU bindings are available:
```seen
import platform.web.webgpu
```
See seen_std/src/platform/web/webgpu.seen for the full API (~30 functions).
