# Schema Configuration Guide

Load aperture configuration from YAML or JSON files for runtime flexibility.
## Why Schema-Based Config
- Separation: Configuration in files, types in code
- Hot-reload: Update behavior without redeployment
- Validation: Catch errors at load time
- Tooling: Generate documentation from schemas
## Loading Schemas

```go
// From file
configBytes, err := os.ReadFile("observability.yaml")
if err != nil {
	log.Fatal(err)
}
schema, err := aperture.LoadSchemaFromYAML(configBytes)
if err != nil {
	log.Fatal(err)
}

// Or from JSON bytes
schema, err = aperture.LoadSchemaFromJSON(jsonData)
```
## Validation

```go
if err := schema.Validate(); err != nil {
	log.Fatalf("invalid schema: %v", err)
}
```
## Schema Format

### YAML Example

```yaml
# observability.yaml
metrics:
  - signal: order.created
    name: orders_total
    type: counter
    description: Total orders placed
  - signal: request.done
    name: request_duration_ms
    type: histogram
    value_key: duration
  - signal: queue.changed
    name: queue_depth
    type: updowncounter
    value_key: delta

traces:
  - start: request.started
    end: request.done
    correlation_key: request_id
    span_name: http-request
    span_timeout: 5m

logs:
  whitelist:
    - order.created
    - order.failed
    - request.done

context:
  logs:
    - user_id
    - request_id
  metrics:
    - region
  traces:
    - user_id
    - region

stdout: false
```
### JSON Example

```json
{
  "metrics": [
    {
      "signal": "order.created",
      "name": "orders_total",
      "type": "counter"
    }
  ],
  "traces": [
    {
      "start": "request.started",
      "end": "request.done",
      "correlation_key": "request_id",
      "span_name": "http-request"
    }
  ],
  "logs": {
    "whitelist": ["order.created", "order.failed"]
  }
}
```
## Applying Schemas

Apply a validated schema directly to an aperture instance:

```go
ap, _ := aperture.New(cap, logProvider, meterProvider, traceProvider)
defer ap.Close()

configBytes, _ := os.ReadFile("config.yaml")
schema, _ := aperture.LoadSchemaFromYAML(configBytes)

if err := schema.Validate(); err != nil {
	log.Fatal(err)
}
if err := ap.Apply(schema); err != nil {
	log.Fatal(err)
}
```
## Context Key Registration

If using context extraction, register keys before applying:

```go
type ctxKey string

const (
	userIDKey  ctxKey = "user_id"
	regionKey  ctxKey = "region"
	requestKey ctxKey = "request_id"
)

ap.RegisterContextKey("user_id", userIDKey)
ap.RegisterContextKey("region", regionKey)
ap.RegisterContextKey("request_id", requestKey)

// Now apply a schema that references these keys
ap.Apply(schema)
```
## Hot-Reload with Flux

Integrate with flux for live configuration updates:

```go
// Create aperture once
ap, _ := aperture.New(cap, logProvider, meterProvider, traceProvider)
defer ap.Close()

// Register context keys if needed
ap.RegisterContextKey("user_id", userIDKey)
ap.RegisterContextKey("region", regionKey)

// Watch the config file and apply changes
capacitor := flux.New[aperture.Schema](
	file.New("observability.yaml"),
	func(_, schema aperture.Schema) error {
		if err := schema.Validate(); err != nil {
			return err
		}
		return ap.Apply(schema)
	},
)
capacitor.Start(ctx)
```

Changes to `observability.yaml` are applied live, without a restart. The `Apply()` method atomically swaps the configuration.
## Name-Based Matching

Schema configuration uses string names that are matched at runtime:

```go
// Define signals and keys in code
orderCreated := capitan.NewSignal("order.created", "Order created")
orderID := capitan.NewStringKey("order_id")

// The schema references them by name
schema := aperture.Schema{
	Metrics: []aperture.MetricSchema{
		{Signal: "order.created", Name: "orders_total", Type: "counter"},
	},
}

// At runtime, event.Signal().Name() matches "order.created"
```

This decouples configuration from Go types, enabling hot-reload without recompilation.
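The matching itself is plain string comparison. A minimal, library-free sketch of the idea (the `metricRule` type, `rules` map, and `lookup` helper here are illustrative, not aperture internals):

```go
package main

import "fmt"

// metricRule mirrors the shape of a schema's metric entry.
type metricRule struct {
	Name string
	Type string
}

// rules are keyed by signal name, exactly as a schema configures them.
var rules = map[string]metricRule{
	"order.created": {Name: "orders_total", Type: "counter"},
}

// lookup returns the rule for a signal name; a miss is not an error.
func lookup(signal string) (metricRule, bool) {
	r, ok := rules[signal]
	return r, ok
}

func main() {
	if r, ok := lookup("order.created"); ok {
		fmt.Printf("record %s (%s)\n", r.Name, r.Type)
	}
	if _, ok := lookup("order.deleted"); !ok {
		fmt.Println("no rule: event ignored") // unknown names silently skip
	}
}
```

Because the lookup key is a string rather than a Go identifier, a reloaded config file can add or remove rules without touching compiled code.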
## Schema Reference

### Metrics

| Field | Required | Description |
|---|---|---|
| signal | Yes | Signal name to match |
| name | Yes | OTEL metric name |
| type | No | counter (default), gauge, histogram, updowncounter |
| value_key | For non-counters | Field key name for the numeric value |
| description | No | Metric description |

### Traces

| Field | Required | Description |
|---|---|---|
| start | Yes | Signal name that begins the span |
| end | Yes | Signal name that completes the span |
| correlation_key | Yes | Field key name used to match start/end events |
| span_name | No | Span name (defaults to the start signal name) |
| span_timeout | No | Max wait for the end event (default: 5m) |

### Logs

| Field | Description |
|---|---|
| whitelist | Signal names to log (empty = log all) |

### Context

| Field | Description |
|---|---|
| logs | Context key names for log attributes |
| metrics | Context key names for metric dimensions |
| traces | Context key names for span attributes |

### Root Options

| Field | Description |
|---|---|
| stdout | Enable stdout logging (boolean) |
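Putting the reference together, a minimal schema using only required fields might look like this (signal names reused from the examples above):

```yaml
metrics:
  - signal: order.created
    name: orders_total        # type defaults to counter

traces:
  - start: request.started
    end: request.done
    correlation_key: request_id   # span_name defaults to request.started
```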
## Error Handling

Validation catches structural issues:

```go
err := schema.Validate()
// Possible errors:
// - "metric config missing signal"
// - "metric config missing name"
// - "trace config missing correlation_key"
```

Runtime matching is silent:

- Unknown signal names: events don't match, so no metrics or traces are created
- Unknown key names: values are not extracted

This silent behavior enables gradual rollout of new signals.
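A sketch of the kind of structural checks such validation performs (the `MetricSchema` struct and `validateMetric` helper here are illustrative, not the library's implementation):

```go
package main

import (
	"errors"
	"fmt"
)

// MetricSchema mirrors the required metric fields from the reference above.
type MetricSchema struct {
	Signal, Name, Type, ValueKey string
}

// validateMetric checks the required fields, returning the same kind of
// messages listed in the error examples.
func validateMetric(m MetricSchema) error {
	if m.Signal == "" {
		return errors.New("metric config missing signal")
	}
	if m.Name == "" {
		return errors.New("metric config missing name")
	}
	return nil
}

func main() {
	// Missing name: rejected at load time, not at runtime.
	err := validateMetric(MetricSchema{Signal: "order.created"})
	fmt.Println(err) // metric config missing name
}
```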
## Custom Type Handling

Custom field types are automatically serialized to JSON string attributes; no registration is required:

```go
type OrderInfo struct {
	ID     string  `json:"id"`
	Total  float64 `json:"total"`
	Secret string  `json:"-"` // Excluded from serialization
}

orderKey := capitan.NewKey[OrderInfo]("order", "Order details")

// Automatically serialized as JSON
cap.Emit(ctx, sig, orderKey.Field(OrderInfo{ID: "ORD-123", Total: 99.99}))
// Attribute: order="{\"id\":\"ORD-123\",\"total\":99.99}"
```

Use JSON struct tags to control serialization.