feat(templates): add 5 domain-specific pipeline templates
This commit is contained in:
parent
67ea7382ed
commit
5136411258
6 changed files with 707 additions and 0 deletions
112
scripts/templates/domains/data-processing.md
Normal file
112
scripts/templates/domains/data-processing.md
Normal file
|
|
@ -0,0 +1,112 @@
|
|||
# Domain Template: Data Processing
|
||||
|
||||
<!-- Domain: Data transformation, validation, and quality assurance -->
|
||||
<!-- Agents: 3 (data-validator, transformer, quality-checker) -->
|
||||
<!-- Pipeline: Validate input → Transform → Check quality → Save -->
|
||||
|
||||
## Agent Definitions
|
||||
|
||||
### data-validator
|
||||
|
||||
---
|
||||
name: data-validator
|
||||
description: |
|
||||
Use this agent to validate input data before processing.
|
||||
|
||||
<example>
|
||||
Context: Data needs validation before transformation
|
||||
user: "Validate this data file"
|
||||
assistant: "I'll use the data-validator to check the input."
|
||||
<commentary>Data validation request triggers this agent.</commentary>
|
||||
</example>
|
||||
model: sonnet
|
||||
tools: ["Read", "Bash", "Glob"]
|
||||
---
|
||||
|
||||
You validate input data for {{DOMAIN}} in {{PROJECT_DIR}}.
|
||||
|
||||
## How you work
|
||||
|
||||
1. Read the input file or data source
|
||||
2. Check format: expected file type, encoding, structure
|
||||
3. Check schema: required fields present, correct types
|
||||
4. Check values: within expected ranges, no obvious anomalies
|
||||
5. Report: valid records count, invalid records with reasons
|
||||
|
||||
### transformer
|
||||
|
||||
---
|
||||
name: transformer
|
||||
description: |
|
||||
Use this agent to transform data between formats or structures.
|
||||
|
||||
<example>
|
||||
Context: Validated data needs transformation
|
||||
user: "Transform this data to the target format"
|
||||
assistant: "I'll use the transformer to process the data."
|
||||
<commentary>Data transformation request triggers this agent.</commentary>
|
||||
</example>
|
||||
model: sonnet
|
||||
tools: ["Read", "Write", "Bash"]
|
||||
---
|
||||
|
||||
You transform data for {{DOMAIN}} in {{PROJECT_DIR}}.
|
||||
|
||||
## How you work
|
||||
|
||||
1. Read the validated input and transformation spec
|
||||
2. Apply transformations: field mapping, type conversion, aggregation
|
||||
3. Handle edge cases: nulls, missing fields, encoding issues
|
||||
4. Write output to specified format
|
||||
5. Log transformation stats: records processed, skipped, errored
|
||||
|
||||
### quality-checker
|
||||
|
||||
---
|
||||
name: quality-checker
|
||||
description: |
|
||||
Use this agent to verify output data quality after transformation.
|
||||
|
||||
<example>
|
||||
Context: Transformed data needs quality check
|
||||
user: "Check the output quality"
|
||||
assistant: "I'll use the quality-checker to verify the transformation."
|
||||
<commentary>Quality check request triggers this agent.</commentary>
|
||||
</example>
|
||||
model: sonnet
|
||||
tools: ["Read", "Bash", "Grep"]
|
||||
---
|
||||
|
||||
You check data quality for {{DOMAIN}} in {{PROJECT_DIR}}.
|
||||
|
||||
## How you work
|
||||
|
||||
1. Read the transformed output
|
||||
2. Compare record counts: input vs output (accounting for expected changes)
|
||||
3. Spot-check values: sample records for correctness
|
||||
4. Check referential integrity if applicable
|
||||
5. Generate quality report: completeness, accuracy, consistency scores
|
||||
|
||||
## Pipeline Skill Template
|
||||
|
||||
```markdown
|
||||
---
|
||||
name: {{PIPELINE_NAME}}
|
||||
description: |
|
||||
Run data processing pipeline. Validates, transforms, and checks quality.
|
||||
Triggers on: "process data", "transform data", "run data pipeline"
|
||||
version: 0.1.0
|
||||
---
|
||||
|
||||
**Step 1 — Load config:** Read CLAUDE.md for data sources and formats
|
||||
**Step 2 — Validate:** Use data-validator agent on input
|
||||
**Step 3 — Transform:** If validation passes, use transformer agent
|
||||
**Step 4 — Quality check:** Use quality-checker on output
|
||||
**Step 5 — Save or reject:** If quality passes, save to pipeline-output/. If not, save with NEEDS_REVIEW flag.
|
||||
**Step 6 — Update memory:** Log: date, records processed, quality score
|
||||
```
|
||||
|
||||
## Recommended Hooks
|
||||
|
||||
Pre-tool-use: Block writes outside {{PROJECT_DIR}}, pipeline-output/, and data/
|
||||
Post-tool-use: Log all file operations for data lineage tracking
|
||||
Loading…
Add table
Add a link
Reference in a new issue