Automate workflows with reusable recipe definitions
Recipes are YAML files that define automated workflows for goose. They allow you to parameterize tasks, compose complex workflows, and share common patterns.
Recipes package instructions, parameters, and extension requirements into a reusable format:
```yaml
name: "api-builder"
version: 1.0.0
description: "Build a REST API with authentication and tests"
parameters:
  - name: "language"
    type: "string"
    description: "Programming language to use"
    required: true
  - name: "database"
    type: "string"
    description: "Database type (postgres, mysql, sqlite)"
    default: "postgres"
extensions:
  - type: builtin
    name: developer
instructions: |
  Build a REST API in {{language}} with:
  1. User authentication using JWT tokens
  2. CRUD endpoints for users and resources
  3. {{database}} database integration
  4. Input validation and error handling
  5. Unit and integration tests
  6. API documentation
```
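To see how placeholders like `{{language}}` and `{{database}}` resolve when a recipe runs, here is a minimal Python sketch of the substitution. The `render` helper is hypothetical and only illustrates the idea; goose itself uses a full template engine with richer syntax.

```python
# Minimal sketch of {{parameter}} placeholder substitution.
# `render` is an illustrative helper, not part of goose.
def render(template: str, params: dict) -> str:
    out = template
    for key, value in params.items():
        out = out.replace("{{" + key + "}}", str(value))
    return out

instructions = "Build a REST API in {{language}} with {{database}} database integration"
print(render(instructions, {"language": "python", "database": "postgres"}))
# -> Build a REST API in python with postgres database integration
```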
Parameters
Define inputs with types, defaults, and validation
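For instance, a single parameter entry might look like the following sketch. The `port` parameter, its description, and its default value are invented for illustration:

```yaml
parameters:
  - name: "port"
    type: "string"
    description: "Port the API listens on"
    required: false
    default: "8080"
```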
Instructions can branch on parameter values using conditional blocks:

```yaml
instructions: |
  {% if include_tests %}
  Write comprehensive unit tests for all functions.
  {% endif %}

  {% if database == "postgres" %}
  Use PostgreSQL with SQLAlchemy ORM.
  {% elif database == "mysql" %}
  Use MySQL with PyMySQL driver.
  {% else %}
  Use SQLite with the built-in sqlite3 library.
  {% endif %}
```
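The branch the template selects for each `database` value can be sketched in plain Python. This is an illustrative stand-in for the conditional logic above, not goose internals:

```python
# Mirrors the {% if %}/{% elif %}/{% else %} branches of the template above.
def database_instruction(database: str) -> str:
    if database == "postgres":
        return "Use PostgreSQL with SQLAlchemy ORM."
    elif database == "mysql":
        return "Use MySQL with PyMySQL driver."
    else:
        return "Use SQLite with the built-in sqlite3 library."

print(database_instruction("mysql"))  # -> Use MySQL with PyMySQL driver.
```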
Array parameters can be iterated with a loop:

```yaml
parameters:
  - name: "features"
    type: "array"
    description: "List of features to implement"
instructions: |
  Implement the following features:
  {% for feature in features %}
  - {{feature}}
  {% endfor %}
```
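Rendered, the loop expands to one bullet per element of `features`. The sketch below shows that expansion with made-up feature names:

```python
# Expands the {% for feature in features %} loop from the template above.
features = ["user login", "full-text search", "CSV export"]
lines = ["Implement the following features:"]
for feature in features:
    lines.append(f"- {feature}")
rendered = "\n".join(lines)
print(rendered)
```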
A complete example: a web-scraper recipe that combines required string and object parameters with a defaulted output format:

```yaml
name: "web-scraper"
version: 1.0.0
description: "Scrape data from websites and save to structured format"
parameters:
  - name: "url"
    type: "string"
    description: "URL to scrape"
    required: true
  - name: "selectors"
    type: "object"
    description: "CSS selectors for data extraction"
    required: true
  - name: "output_format"
    type: "string"
    description: "Output format (json, csv, excel)"
    default: "json"
extensions:
  - type: builtin
    name: computercontroller
  - type: builtin
    name: developer
instructions: |
  Scrape data from {{url}} using these selectors:
  {% for key, selector in selectors.items() %}
  - {{key}}: {{selector}}
  {% endfor %}

  Save the results to a {{output_format}} file named "scraped_data.{{output_format}}".
  Handle pagination if present, and implement rate limiting to be respectful.
```