AI Behavior
System Prompt
Custom system prompt for the AI assistant. This prompt sets the assistant's behavior, personality, and response style, and it is sent with every AI request.

Default prompt:

Example custom prompts:
Keep prompts concise and clear. The assistant is optimized for voice responses, so instruct it to be brief.
- Code Assistant
- Writing Coach
- Personal Assistant
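As an illustration, a code-assistant preset might look like the sketch below. The settings key and prompt text here are hypothetical examples, not SlasshyWispr's actual defaults:

```python
# Hypothetical settings fragment: a custom system prompt for a
# code-focused assistant. The key name "systemPrompt" is illustrative.
settings = {
    "systemPrompt": (
        "You are a concise coding assistant. "
        "Answer in short, voice-friendly sentences. "
        "Prefer code snippets over long explanations."
    ),
}

# The prompt is kept short on purpose: responses are read aloud via TTS.
print(len(settings["systemPrompt"].split()), "words")
```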
Model Parameters
AI model temperature (creativity vs consistency)
- Range: 0.0 to 2.0
  - 0.0 = deterministic, consistent, focused
  - 0.35 = balanced (default)
  - 1.0 = creative, varied responses
  - 2.0 = highly creative, unpredictable
Lower temperatures (0.2-0.5) are recommended for dictation and factual tasks.
Higher temperatures (0.7-1.2) work better for creative writing.
Maximum tokens in AI response
- Default: 320 (~240 words)
- Lower values = shorter, faster responses
- Higher values = longer, more detailed responses
Voice responses should be concise. 320 tokens is optimized for TTS playback length.
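A sketch of how these two parameters might appear in a request payload. The field names follow common LLM API conventions and are assumptions, not SlasshyWispr's actual wire format:

```python
# Hypothetical request payload using the documented defaults.
request = {
    "temperature": 0.35,  # balanced default; use 0.2-0.5 for dictation
    "max_tokens": 320,    # ~240 words at roughly 0.75 words per token
}

# Rough words-from-tokens estimate. The ~0.75 ratio is a common rule
# of thumb for English text, not an exact conversion.
approx_words = int(request["max_tokens"] * 0.75)
print(approx_words)  # 240
```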
Style Profiles
Response style and tone preset
- adaptive - Adjusts style based on context (default)
- professional - Formal, business-appropriate tone
- casual - Friendly, conversational tone
- concise - Ultra-brief, minimal responses
- developer - Technical, code-focused responses
The adaptive profile dynamically adjusts tone based on:
- Time of day
- Query complexity
- Previous conversation context
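The adaptive behavior above could be sketched as a simple rule-based selector. This is entirely hypothetical; the app's real heuristics are not documented here:

```python
def pick_style(hour: int, query_words: int, in_conversation: bool) -> str:
    """Toy rule-based stand-in for the adaptive profile.

    hour: local hour of day (0-23); query_words: length of the query;
    in_conversation: whether earlier turns exist in this session.
    """
    if query_words <= 4 and not in_conversation:
        return "concise"        # quick lookups get minimal answers
    if 9 <= hour < 17:
        return "professional"   # business hours -> formal tone
    return "casual"             # evenings -> friendly tone

print(pick_style(hour=10, query_words=12, in_conversation=False))  # professional
print(pick_style(hour=21, query_words=12, in_conversation=True))   # casual
```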
Language Settings
Primary dictation language. Supported languages:
- en - English
- es - Spanish
- fr - French
- de - German
- it - Italian
- pt - Portuguese
- hi - Hindi
- bn - Bengali
- ja - Japanese
- ko - Korean
- zh - Chinese
- ar - Arabic
- ru - Russian
Language detection mode
- single - Use one language for all dictation (default)
- multiple - Auto-detect from allowed languages
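Putting the detection mode together with an allow list might look like the sketch below. The key names dictationLanguageMode and dictationLanguageAllowList come from this page; the surrounding structure is an assumption:

```python
# Sketch of a multi-language dictation configuration using the
# key names documented on this page.
language_settings = {
    "dictationLanguageMode": "multiple",               # or "single" (default)
    "dictationLanguageAllowList": ["en", "es", "fr"],  # codes from the list above
}

# In "multiple" mode the allow list must be configured and non-empty.
if language_settings["dictationLanguageMode"] == "multiple":
    assert language_settings["dictationLanguageAllowList"], "allow list required"
```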
Multiple-language mode requires dictationLanguageAllowList to be configured.

Allowed languages for auto-detection
When dictationLanguageMode is set to multiple, the STT model will detect and transcribe speech in any of these languages.

Example:

Audio Processing
Remove filler words from transcription. Automatically removes:
- “um”, “uh”, “er”
- “like”, “you know”
- Repeated words
- Hesitations
This improves transcript readability but may occasionally remove intentional words.
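A minimal sketch of this kind of cleanup. The app's actual filter runs inside its STT pipeline and is certainly more sophisticated than this:

```python
FILLERS = {"um", "uh", "er", "like"}

def remove_fillers(text: str) -> str:
    """Drop common filler words and immediate word repetitions.

    Note: dropping "like" unconditionally illustrates why this feature
    may occasionally remove intentional words.
    """
    # Strip the multi-word filler first, then filter single words.
    words = text.lower().replace("you know", "").split()
    kept = []
    for word in words:
        if word in FILLERS:
            continue            # skip filler words
        if kept and kept[-1] == word:
            continue            # collapse immediate repetitions
        kept.append(word)
    return " ".join(kept)

print(remove_fillers("um I think like the the answer is uh forty two"))
# -> "i think the answer is forty two"
```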
Automatically add punctuation to transcription. The STT model infers punctuation from speech patterns:
- Periods at natural pauses
- Question marks for rising intonation
- Commas for breath breaks
Works best with online models. Local models may have limited punctuation support.
Convert spoken lists to numbered format. When enabled:
- “First, second, third” → “1. 2. 3.”
- “Item one, item two” → “1. 2.”
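One way such a conversion could work, sketched as a word-to-ordinal lookup. This is hypothetical; the app's real rules are not specified here:

```python
# Map spoken ordinals to numbered-list markers (extend as needed).
ORDINALS = {"first": "1.", "second": "2.", "third": "3.",
            "fourth": "4.", "fifth": "5."}

def number_spoken_list(text: str) -> str:
    """Replace spoken ordinals with numbered-list markers."""
    words = []
    for word in text.lower().replace(",", "").split():
        words.append(ORDINALS.get(word, word))
    return " ".join(words)

print(number_spoken_list("First, buy milk, second, call mom"))
# -> "1. buy milk 2. call mom"
```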
Enable voice correction commands. Allows you to say:
- “Scratch that” - Remove last sentence
- “Undo” - Remove last word
- “Delete that” - Remove last phrase
Experimental feature. May occasionally trigger on similar-sounding phrases.
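A toy dispatcher for two of the commands above shows the general shape of this feature. It is a sketch, not the app's implementation:

```python
def apply_correction(transcript: str, command: str) -> str:
    """Apply a spoken correction command to the transcript buffer."""
    cmd = command.lower().strip()
    if cmd == "undo":
        # Remove the last word.
        return transcript.rsplit(" ", 1)[0] if " " in transcript else ""
    if cmd == "scratch that":
        # Remove the last sentence (everything after the final period).
        head, sep, _ = transcript.rstrip(". ").rpartition(". ")
        return head + "." if sep else ""
    return transcript  # unrecognized command: leave text unchanged

print(apply_correction("I went home", "undo"))                     # "I went"
print(apply_correction("It rained. We stayed in.", "scratch that"))  # "It rained."
```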
Play audio feedback during dictation
- Start recording: chime
- Stop recording: click
- Processing: subtle tone
- Error: alert sound
Automatically mute system audio during recording. Prevents background music/audio from being picked up by the microphone.
Requires system audio control permissions.
Privacy and Data
Disable history logging. When enabled:
- No conversation history saved
- No transcripts stored locally
- Home history remains empty
- Usage stats not tracked
This does NOT affect data sent to online model providers. Use local mode for complete privacy.
Enable multi-turn conversation context. When enabled, the assistant remembers previous turns in the session for context-aware responses.
Context is stored in memory only and cleared when the app restarts (unless incognito mode is disabled).
Workflow Settings
Automatically paste transcription into the active app. When enabled, transcribed text is immediately pasted at the cursor position.
When disabled, text is copied to clipboard only.
Copy assistant responses to clipboard. Useful for saving AI responses without manual selection.
Enable command detection in dictation. When enabled, spoken commands like “new line”, “period”, and “comma” are converted to formatting.
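A minimal sketch of how spoken formatting commands might be expanded (hypothetical; real command detection must also handle spacing, capitalization, and ambiguity):

```python
# Spoken formatting commands and the characters they produce.
COMMANDS = {"new line": "\n", "period": ".", "comma": ","}

def expand_commands(tokens: list[str]) -> str:
    """Replace recognized command tokens with their characters."""
    text = ""
    for token in tokens:
        if token in COMMANDS:
            # Attach punctuation directly to the preceding word.
            text = text.rstrip() + COMMANDS[token] + " "
        else:
            text += token + " "
    return text.strip()

print(expand_commands(["hello", "comma", "world", "period"]))
# -> "hello, world."
```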
Enable wake word detection for a hands-free assistant. When enabled, say the wake phrase (e.g., “Hey Lily”) to activate the assistant without pressing a hotkey.
Wake word detection runs continuously and may increase battery usage.
Microphone
Selected microphone device ID. Set in Settings > General > Microphone; SlasshyWispr lists all available input devices.
Leave empty to use the system default microphone.
UI and Appearance
Application theme
- system - Match OS theme (default)
- light - Always light mode
- dark - Always dark mode
Show live pipeline status bar during processing. Displays real-time STT/AI/TTS latency and status.
Show the SlasshyWispr icon in the dock/taskbar. When disabled, the app runs in the system tray only.
Start SlasshyWispr automatically on system boot
May require OS permissions on first enable.
Best Practices
For Privacy
- Enable incognito mode to prevent local history
- Use local runtime mode to keep data on-device
- Disable context awareness to avoid multi-turn tracking
- Disable remember API key to avoid persisting credentials
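The four recommendations above could be captured as a single preset. The key names below are guesses inferred from the feature names on this page and may not match the real config schema:

```python
# Hypothetical privacy-first preset; key names are inferred from the
# feature names in this document, not taken from the actual schema.
privacy_preset = {
    "incognitoMode": True,      # no local history or transcripts
    "runtimeMode": "local",     # keep STT/AI processing on-device
    "contextAwareness": False,  # no multi-turn conversation tracking
    "rememberApiKey": False,    # never persist credentials
}
print(privacy_preset["runtimeMode"])  # local
```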
For Accuracy
- Enable auto punctuation for better readability
- Enable remove fillers for cleaner transcripts
- Use single language mode unless multilingual dictation is needed
- Set temperature to 0.2-0.4 for factual/dictation tasks
For Performance
- Use local STT + online AI for balanced speed
- Lower max tokens to 200-250 for faster responses
- Enable Piper TTS instead of Coqui for fastest playback
- Disable wake word when not needed to save battery
For Workflow
- Enable auto paste dictation for seamless text entry
- Enable copy to clipboard to save assistant responses
- Use concise style profile for quick lookups
- Customize system prompt to match your use case