# The Five-Step Pipeline
Every piece of content generated by Notra flows through these five stages:

## 1. Ingest Activity
The pipeline begins when Notra receives activity from your connected systems. How ingestion works:

- GitHub webhooks deliver events like push, pull request merge, and release
- Notra’s API endpoints receive and validate webhook payloads
- Activity data is normalized into a common internal format
- Metadata like repository owner, branch, and timestamps are extracted

Events that trigger content generation:

- Merged pull requests on the main/default branch
- New releases and version tags
- Commit batches from push events
- Issue closures (Linear integration)
- Notable Slack conversations (Slack integration)
Notra only processes activity from repositories and channels you’ve explicitly connected. The platform uses integration IDs to enforce strict access control.
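The normalization step can be sketched in TypeScript. The `Activity` shape, its field names, and the `normalizePush` helper below are illustrative assumptions, not Notra's actual schema:

```typescript
// Hypothetical common internal format; field names are illustrative.
interface Activity {
  source: "github";
  kind: "push" | "pr_merged" | "release";
  repoOwner: string;
  repoName: string;
  ref: string;
  receivedAt: string; // ISO timestamp
}

// Normalize a (simplified) GitHub push webhook payload into that shape.
function normalizePush(payload: {
  ref: string;
  repository: { name: string; owner: { login: string } };
}): Activity {
  return {
    source: "github",
    kind: "push",
    repoOwner: payload.repository.owner.login,
    repoName: payload.repository.name,
    ref: payload.ref,
    receivedAt: new Date().toISOString(),
  };
}
```

Whatever the real schema looks like, the point is that every integration converges on one shape before analysis begins.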
## 2. Analyze Context
Once activity is ingested, AI analysis determines what matters and why. Analysis objectives:

- Identify technical changes (new features, bug fixes, refactors)
- Evaluate user impact and relevance to target audience
- Prioritize updates by significance (security > breaking > features > performance)
- Filter out low-signal maintenance work

Example analysis flow for a weekly changelog:

- Call `getCommitsByTimeframe` with `days: 7` to fetch all commits
- Identify notable commit messages mentioning features or fixes
- Call `getPullRequests` for each significant PR number found
- Build a prioritized list of changes worth highlighting
Internal changes like dependency bumps, formatting tweaks, and test updates are typically filtered out unless they have clear external impact.
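The prioritization and filtering rules above can be sketched as follows; the `classify` keyword heuristic and category names are hypothetical stand-ins for the AI analysis, shown only to make the ordering concrete:

```typescript
type Category = "security" | "breaking" | "feature" | "performance" | "chore";

// Priority order from the pipeline: security > breaking > features > performance.
const PRIORITY: Record<Category, number> = {
  security: 0, breaking: 1, feature: 2, performance: 3, chore: 4,
};

// Hypothetical classifier based on commit message keywords.
function classify(message: string): Category {
  const m = message.toLowerCase();
  if (/security|cve/.test(m)) return "security";
  if (/breaking/.test(m)) return "breaking";
  if (/feat|add/.test(m)) return "feature";
  if (/perf/.test(m)) return "performance";
  return "chore";
}

// Filter low-signal maintenance work, then sort by significance.
function prioritize(messages: string[]): string[] {
  return messages
    .filter((m) => classify(m) !== "chore")
    .sort((a, b) => PRIORITY[classify(a)] - PRIORITY[classify(b)]);
}
```

In the real pipeline this judgment comes from the model rather than regexes, but the resulting ordering is the same.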
## 3. Generate Drafts
With analyzed context in hand, Notra produces structured content using specialized agents. Content types:

- Changelogs: Technical release notes organized by category
- LinkedIn posts: Social-friendly updates highlighting key improvements
- Blog posts (roadmap): Long-form feature announcements and stories

Generation steps:

- Load the appropriate tone-specific prompt (e.g., conversational)
- Receive user prompt with context (source repos, timeframe, company info)
- Call GitHub tools to gather all relevant data
- Build Highlights section with top 1-5 most important changes
- Categorize remaining updates into More Updates sections
- Apply humanization skill if available to polish the text
- Call `createPost` to save the final draft
Notra uses Claude 4.5 Haiku via AI Gateway for content generation. The model is specifically chosen for its speed, cost-efficiency, and strong reasoning capabilities.
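The Highlights / More Updates split from the steps above might look like this sketch; `assembleDraft`, the `Change` shape, and the section wording are illustrative, not Notra's actual output format:

```typescript
// Hypothetical analyzed change; category might be "Features", "Fixes", etc.
interface Change { title: string; category: string; }

// Top 1-5 changes become Highlights; the rest are grouped by category.
function assembleDraft(changes: Change[]): string {
  const highlights = changes.slice(0, 5);
  const rest = changes.slice(5);
  const lines = ["## Highlights", ...highlights.map((c) => `- ${c.title}`)];
  if (rest.length > 0) {
    lines.push("## More Updates");
    const byCategory = new Map<string, Change[]>();
    for (const c of rest) {
      const bucket = byCategory.get(c.category) ?? [];
      bucket.push(c);
      byCategory.set(c.category, bucket);
    }
    for (const [category, items] of byCategory) {
      lines.push(`### ${category}`, ...items.map((c) => `- ${c.title}`));
    }
  }
  return lines.join("\n");
}
```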
## 4. Apply Brand Voice
Generated drafts are adapted to match your workspace’s brand identity. What brand voice controls:

- Company name and description: Context about who you are and what you build
- Tone profile: Conversational, Professional, Casual, or Formal
- Target audience: Who you’re writing for (developers, customers, stakeholders)
- Custom instructions: Additional guidelines like “avoid jargon” or “include metrics”

With brand voice applied, every draft:

- Uses terminology appropriate for the audience
- Matches the selected tone profile
- Follows custom instructions
- Maintains technical accuracy

Tone profiles:

| Profile | Style | Example |
|---|---|---|
| Conversational | Warm, authentic, founder-to-community | “We’ve shipped some great improvements this week…” |
| Professional | Clear, confident, corporate | “This release includes several significant enhancements…” |
| Casual | Friendly, relaxed, informal | “Hey folks! Check out what we built this week 🎉” |
| Formal | Precise, traditional, structured | “The engineering team has completed the following deliverables…” |
You can update your brand settings at any time. Changes apply to all future content generation—existing drafts are not modified.
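A rough sketch of how these settings could feed into generation; the `BrandVoice` shape and `buildVoicePrompt` wording are assumptions for illustration, not Notra's actual prompts:

```typescript
// Hypothetical brand settings mirroring the controls listed above.
interface BrandVoice {
  companyName: string;
  description: string;
  tone: "Conversational" | "Professional" | "Casual" | "Formal";
  audience: string;
  customInstructions: string[];
}

// Assemble the settings into system-prompt guidance for the model.
function buildVoicePrompt(v: BrandVoice): string {
  return [
    `You write for ${v.companyName}: ${v.description}`,
    `Tone: ${v.tone}. Audience: ${v.audience}.`,
    ...v.customInstructions.map((i) => `Instruction: ${i}`),
  ].join("\n");
}
```

Because the settings live outside the drafts themselves, changing them only affects prompts built for future runs, which is why existing drafts are untouched.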
## 5. Publish-Ready Output
The final step stores polished content in your dashboard. What happens at this stage:

- Draft is saved to the database with metadata
- Content type (changelog, linkedin_post) is recorded
- Source repositories and timeframe are attached
- Draft appears immediately in your dashboard
- You receive notification that content is ready for review

From the dashboard, you can:

- View all generated drafts in one place
- Edit content directly with inline markdown editor
- Copy markdown for publishing elsewhere
- Track which events triggered each draft
- Filter by content type and date range
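The metadata attached at this stage might look like the sketch below; the `DraftRecord` shape and every field name are illustrative, not Notra's actual database schema:

```typescript
// Hypothetical shape of a stored draft.
interface DraftRecord {
  id: string;
  contentType: "changelog" | "linkedin_post";
  markdown: string;
  sourceRepos: string[];                    // repositories the draft covers
  timeframe: { from: string; to: string };  // ISO dates
  triggeredBy: string;                      // event that started the run
  createdAt: string;
}

// Example record as it might appear after a weekly changelog run.
const example: DraftRecord = {
  id: "draft_001",
  contentType: "changelog",
  markdown: "## Highlights\n- Faster builds",
  sourceRepos: ["acme/api"],
  timeframe: { from: "2025-01-01", to: "2025-01-07" },
  triggeredBy: "push_event_abc",
  createdAt: "2025-01-07T12:00:00Z",
};
```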
## Pipeline Configuration
You can customize pipeline behavior through settings.

Event-based triggers:

- Enable/disable automatic generation for specific event types
- Configure which content types to generate (changelog only, LinkedIn only, both)
- Set minimum commit threshold before generating content

Schedule-based triggers:

- Choose frequency (daily, weekly, monthly)
- Set time windows (e.g., “last 7 days”)
- Select which repositories to include
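Taken together, the settings above might map onto a configuration object like this sketch; all field names, values, and repository names are illustrative assumptions:

```typescript
// Hypothetical pipeline configuration mirroring the settings above.
const pipelineConfig = {
  triggers: {
    onPush: true,       // commit batches from push events
    onRelease: true,    // new releases and version tags
    onPrMerge: false,   // merged pull requests
  },
  contentTypes: ["changelog", "linkedin_post"] as const,
  minCommits: 3,        // skip generation below this threshold
  schedule: {
    frequency: "weekly" as const,  // "daily" | "weekly" | "monthly"
    windowDays: 7,                 // "last 7 days"
    repositories: ["acme/api", "acme/web"], // hypothetical repo names
  },
};
```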
## Error Handling
The pipeline includes robust error handling.

GitHub API rate limits:

- Automatic retry with exponential backoff
- Cache frequently accessed data to minimize requests
- Clear error messages when limits are hit
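Retry with exponential backoff can be sketched generically as below; the attempt count and delays are illustrative, not Notra's actual values:

```typescript
// Retry a failing async call with exponentially growing delays.
async function withBackoff<T>(
  fn: () => Promise<T>,
  maxAttempts = 4,
  baseDelayMs = 500,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt + 1 >= maxAttempts) throw err; // out of retries
      const delay = baseDelayMs * 2 ** attempt;  // 500, 1000, 2000…
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Doubling the delay between attempts gives a rate-limited API time to recover instead of hammering it on a fixed interval.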

Generation failures:

- Errors are logged with full context
- You’re notified of failures in the dashboard
- Partial results are preserved when possible

Missing or incomplete data:

- AI omits details it can’t verify rather than guessing
- Tools are called to fill gaps when needed
- Generic descriptions used when specifics unavailable
## Performance and Scaling

The pipeline is designed for efficiency:

- Parallel processing: Independent tool calls run concurrently
- Smart caching: GitHub data cached for 2-30 minutes based on volatility
- Pagination handling: Large commit lists processed in batches automatically
- Background execution: Generation runs async without blocking webhooks
Typical changelog generation takes 15-45 seconds depending on activity volume. Most of this time is spent calling GitHub APIs to fetch PR and commit details.
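The parallel-processing point above can be illustrated with `Promise.all`; the fetcher signatures are hypothetical stand-ins for the real GitHub tool calls:

```typescript
// Run independent GitHub fetches concurrently rather than sequentially.
async function gatherContext(
  fetchCommits: () => Promise<string[]>,
  fetchPulls: () => Promise<string[]>,
) {
  // Both requests proceed in parallel, so total latency is roughly
  // the slower of the two rather than the sum of both.
  const [commits, pulls] = await Promise.all([fetchCommits(), fetchPulls()]);
  return { commits, pulls };
}
```

This is why API latency, not model inference, dominates the 15-45 second generation window.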
## Monitoring Pipeline Health
You can monitor pipeline activity through:

- Dashboard activity feed showing recent generations
- Error notifications for failed workflows
- Integration status indicators
- GitHub webhook delivery logs (in GitHub settings)