Overview
SpendWisely George uses Google’s Gemini AI (via the Generative Language API) for:
- Natural language expense entry - “Spent 250 on coffee” → structured transaction
- Context-aware categorization - AI suggests categories based on description
- Smart insights - Compares new expenses against historical spending patterns
- Financial Q&A - Ask questions about your spending history
The integration uses the `gemini-flash-latest` model, which is currently free to use within generous rate limits.
API Configuration
Get Gemini API Key
Create API Key
Visit Google AI Studio (aistudio.google.com), then click “Get API key” → “Create API key in new project”
Save to Script Properties
In the SpendWisely UI:
- Open Settings (click “G” button)
- Scroll to “Gemini API Key” section
- Paste your key
- Click “Save Key to Cloud”
The key is stored securely in Google Apps Script Properties:
SCRIPT_PROP.setProperty("GEMINI_KEY", "AIza...");
Implementation: google_script.js:85
API Endpoint
Gemini Flash Endpoint
https://generativelanguage.googleapis.com/v1beta/models/gemini-flash-latest:generateContent?key={API_KEY}
Request Format:
{
"contents": [
{
"parts": [
{
"text": "Your prompt here"
}
]
}
]
}
Response Format:
{
"candidates": [
{
"content": {
"parts": [
{
"text": "{\"description\": \"Coffee\", \"amount\": 250}"
}
]
}
}
]
}
Implementation: google_script.js:173-181
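The request and response shapes above can be wrapped in two small helpers. This is a minimal sketch in plain JavaScript; `buildGeminiPayload` and `extractGeminiText` are hypothetical names for illustration, not functions in the actual script:

```javascript
// Build the request body expected by generateContent.
function buildGeminiPayload(prompt) {
  return { contents: [{ parts: [{ text: prompt }] }] };
}

// Pull the generated text out of a generateContent response.
// Throws if the response has no candidates (e.g. a safety block).
function extractGeminiText(response) {
  const candidates = response.candidates;
  if (!candidates || candidates.length === 0) {
    throw new Error("No candidates in Gemini response");
  }
  return candidates[0].content.parts[0].text;
}
```

Isolating these shapes makes the fetch call itself a one-liner and keeps the parsing logic testable without hitting the API.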
Processing Modes
Mode: Add Transaction
Convert natural language into structured expense data.
Request:
{
"action": "process_text",
"text": "Coffee with team 350 rupees",
"mode": "add"
}
AI Prompt Template:
You are a financial assistant.
CONTEXT: The user wants to ADD a transaction.
USER INPUT: "Coffee with team 350 rupees"
HISTORY:
Starbucks (250) [Food]
Uber (120) [Transport]
...
TASK:
1. Extract: Description, Amount, Category (Food, Transport, Tech, Invest, Income, Misc).
2. Analyze: Compare this new expense to the HISTORY. Is it higher than usual? Is it a repeat?
3. Insight: Write a very short, witty, or helpful insight (max 15 words) about this specific spend.
OUTPUT JSON STRICTLY:
{
"description": "String",
"amount": Number,
"category": "String",
"type": "DEBIT" or "CREDIT",
"insight": "String"
}
AI Response:
{
"description": "Coffee with team",
"amount": 350,
"category": "Food",
"type": "DEBIT",
"insight": "Above your usual coffee spend by ₹100. Team bonding is worth it!"
}
Implementation: google_script.js:131-152
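Because the prompt demands strict JSON, the parsed add-mode response can be validated before it touches the sheet. A sketch of such a check; `validateAddResponse` is a hypothetical helper, not part of the actual script:

```javascript
// Check that the parsed AI output matches the schema the prompt demands.
// Returns a list of bad field names; an empty list means the object is usable.
function validateAddResponse(obj) {
  const errors = [];
  if (typeof obj.description !== "string" || !obj.description) errors.push("description");
  if (typeof obj.amount !== "number" || obj.amount <= 0) errors.push("amount");
  const categories = ["Food", "Transport", "Tech", "Invest", "Income", "Misc"];
  if (!categories.includes(obj.category)) errors.push("category");
  if (obj.type !== "DEBIT" && obj.type !== "CREDIT") errors.push("type");
  return errors;
}
```

If validation fails, the safest reaction is to treat it like an AI error and route the input through the regex fallback.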
Mode: Ask Question
Answer financial questions based on transaction history.
Request:
{
"action": "process_text",
"text": "How much did I spend on food this week?",
"mode": "ask"
}
AI Prompt Template:
You are a financial analyst.
CONTEXT: The user is ASKING a question about their finances.
USER INPUT: "How much did I spend on food this week?"
HISTORY:
Coffee (250) [Food]
Lunch (450) [Food]
Uber (120) [Transport]
...
TASK:
Answer the user's question based strictly on the HISTORY data provided.
Be concise (max 30 words).
OUTPUT JSON STRICTLY:
{
"answer": "String"
}
AI Response:
{
"answer": "You spent ₹1,850 on food this week across 8 transactions. That's ₹150 below your weekly average."
}
Implementation: google_script.js:154-169
Context Injection
The AI receives recent transaction history for better suggestions:
// Fetch last 40 transactions as context
const historyData = sheet.getRange(
Math.max(1, sheet.getLastRow() - 40),
1, 41, 4
).getValues();
const historyContext = historyData
.map(r => `${r[1]} (${r[2]}) [${r[3]}]`)
.join("\n");
// Include in prompt
const prompt = `
HISTORY:
${historyContext}
USER INPUT: "${userText}"
...
`;
Implementation: google_script.js:127-129
Context is limited to the last 40 transactions to stay within token limits and reduce API costs.
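The mapping from sheet rows to context lines can be expressed as a pure function, which makes it easy to unit-test away from the Sheets API. A sketch; `buildHistoryContext` is a hypothetical name:

```javascript
// rows: arrays of [date, description, amount, category], the shape
// returned by getValues() on a 4-column range.
function buildHistoryContext(rows, limit = 40) {
  return rows
    .slice(-limit)                        // keep only the most recent entries
    .map(r => `${r[1]} (${r[2]}) [${r[3]}]`)
    .join("\n");
}
```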
Response Processing
The script handles various AI response formats:
const aiRes = UrlFetchApp.fetch(aiUrl, {
method: 'post',
contentType: 'application/json',
payload: JSON.stringify(aiPayload)
});
const aiJson = JSON.parse(aiRes.getContentText());
const rawText = aiJson.candidates[0].content.parts[0].text;
// Clean markdown code blocks
const cleanJson = rawText
.replace(/```json/g, '')
.replace(/```/g, '')
.trim();
const parsed = JSON.parse(cleanJson);
Implementation: google_script.js:176-181
Gemini sometimes wraps JSON in markdown code blocks. The cleanup regex handles this automatically.
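The fence-stripping regex covers the common case, but the model can also surround the JSON with prose. A slightly more defensive variant, shown here as a sketch (not the script's actual code), locates the first `{` and last `}` instead:

```javascript
// Extract the first JSON object embedded in an AI reply, tolerating
// markdown fences and stray prose around the JSON.
function extractJson(rawText) {
  const start = rawText.indexOf("{");
  const end = rawText.lastIndexOf("}");
  if (start === -1 || end === -1 || end < start) {
    throw new Error("No JSON object found in AI response");
  }
  return JSON.parse(rawText.slice(start, end + 1));
}
```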
Frontend Integration
Add Mode
async function sendToAI(mode) {
const input = document.getElementById('omniInput');
const txt = input.value; // "Coffee 250"
const res = await fetch(SCRIPT_URL, {
method: 'POST',
body: JSON.stringify({
action: "process_text",
text: txt,
mode: mode // "add" or "ask"
})
});
const data = await res.json();
if (data.status === "Success") {
// Update balance and show AI insight
appData.balance = parseFloat(data.balance);
document.getElementById('aiTipBox').innerText = data.ai_response;
}
}
Implementation: index.html:354-408
Fallback Mode
If AI fails (rate limit, API error), the app falls back to regex parsing:
async function handleManualFallback(text) {
const amountMatch = text.match(/\d+(\.\d+)?/);
const amount = amountMatch ? parseFloat(amountMatch[0]) : 0;
const desc = text.replace(amountMatch ? amountMatch[0] : '', '').trim() || "Manual Expense";
const type = (desc.toLowerCase().includes('salary') || desc.toLowerCase().includes('income')) ? 'CREDIT' : 'DEBIT';
if (amount > 0) {
const manualTx = {
date: new Date().toISOString(),
description: desc,
amount: amount,
category: type === 'CREDIT' ? 'Income' : 'General',
type: type
};
// Save locally and sync to sheet
appData.balance = (type === 'CREDIT') ? (appData.balance + amount) : (appData.balance - amount);
appData.expenses.push(manualTx);
}
}
Implementation: index.html:411-437
Fallback ensures the app remains functional even without AI, providing a better user experience.
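The fallback above mixes parsing with state updates (`appData`). For testing, the parsing step can be factored into a pure function with the same logic; `parseManualEntry` is a hypothetical name for this sketch:

```javascript
// Pure version of the regex fallback: text in, transaction fields out.
function parseManualEntry(text) {
  const amountMatch = text.match(/\d+(\.\d+)?/);
  const amount = amountMatch ? parseFloat(amountMatch[0]) : 0;
  const desc = text.replace(amountMatch ? amountMatch[0] : "", "").trim() || "Manual Expense";
  const lower = desc.toLowerCase();
  const type = (lower.includes("salary") || lower.includes("income")) ? "CREDIT" : "DEBIT";
  return {
    description: desc,
    amount: amount,
    category: type === "CREDIT" ? "Income" : "General",
    type: type
  };
}
```

The caller then only has to apply the returned transaction to `appData` and the sheet.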
Rate Limits
Gemini Flash (free tier) limits:
| Resource | Limit |
|---|---|
| Requests per minute | 15 |
| Requests per day | 1,500 |
| Tokens per minute | 1 million |
| Tokens per request | 32,768 input |
If you exceed rate limits, the API returns a 429 error. The app automatically falls back to manual parsing.
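Before falling back, a short exponential backoff on 429s can often recover within the same minute. The delay computation is a pure function; `backoffDelay` is a hypothetical helper, not part of the actual script:

```javascript
// Exponential backoff delay (ms) for retry attempt n (0-based),
// capped so a single wait never exceeds maxMs.
function backoffDelay(attempt, baseMs = 1000, maxMs = 30000) {
  return Math.min(maxMs, baseMs * Math.pow(2, attempt));
}
```

On the Apps Script side, the wait itself would be `Utilities.sleep(backoffDelay(attempt))` inside the retry loop; in the browser, a `setTimeout`-based retry before invoking the manual fallback.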
Error Handling
try {
const aiRes = UrlFetchApp.fetch(aiUrl, {...});
const aiJson = JSON.parse(aiRes.getContentText());
const rawText = aiJson.candidates[0].content.parts[0].text;
const cleanJson = rawText.replace(/```json/g, '').replace(/```/g, '').trim();
const parsed = JSON.parse(cleanJson);
// Process based on mode...
} catch (e) {
return ContentService.createTextOutput(
JSON.stringify({
status: "Error",
message: "AI Failed: " + e.message
})
).setMimeType(ContentService.MimeType.JSON);
}
Implementation: google_script.js:205-207
Voice Input
The app supports voice-to-text using the Web Speech API, which feeds into Gemini:
const SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;
let recognition = new SpeechRecognition();
recognition.continuous = false;
recognition.lang = 'en-IN';
recognition.interimResults = false;
recognition.onresult = (event) => {
const transcript = event.results[0][0].transcript;
// "Coffee two hundred fifty"
document.getElementById('omniInput').value = transcript;
// User clicks "ADD" → sends to Gemini for processing
};
Implementation: index.html:310-324
Best Practices
Optimize Context Window
// Only send last 40 transactions (not entire sheet)
const historyData = sheet.getRange(
Math.max(1, sheet.getLastRow() - 40),
1, 41, 4
).getValues();
This reduces:
- Token usage (lower costs)
- Response latency
- Risk of hitting context limits
Structured Prompts
Use explicit JSON output instructions:
OUTPUT JSON STRICTLY:
{
"description": "String",
"amount": Number,
"category": "String"
}
This ensures consistent, parseable responses.
Graceful Degradation
Always implement fallback:
try {
// AI processing
} catch (e) {
if (mode === 'add') {
handleManualFallback(txt); // Regex parsing
} else {
showError("AI Offline. Can't analyze right now.");
}
}
Troubleshooting
“AI Failed” Error
- Check API Key: Verify key is saved in Script Properties
- Rate Limit: Wait 1 minute and retry
- Invalid Response: Check Apps Script execution logs
Incorrect Categorization
- Improve Context: Ensure history has diverse, well-labeled transactions
- Adjust Prompt: Modify categories list in prompt template (line 140)
- Manual Override: Use fallback mode for precision
Insights Not Showing
Check:
- AI response includes the `insight` field
- Frontend correctly extracts `data.ai_response`
- No JSON parsing errors in browser console