
Overview

The Blog Writing Agent analyzes your existing articles to learn your unique writing style, then generates new blog posts that sound exactly like you. Built with Digital Ocean AI and GibsonAI Memori, this agent provides persistent style memory across sessions.

Key Features

  • Writing Style Analysis: Analyzes tone, voice, structure, and vocabulary from your articles
  • Multi-Format Support: Handles PDF, DOCX, and TXT documents
  • Memory Integration: Stores your writing style profile using Memori
  • AI-Powered Generation: Creates content matching your writing personality
  • Content Management: Download, copy, and save generated posts

Architecture

Memori Integration

from memori import Memori, create_memory_tool

# Initialize Memori for writing style persistence
memory_system = Memori(
    database_connect="sqlite:///tmp/newsletter_style_memory.db",
    auto_ingest=True,          # Automatic content indexing
    conscious_ingest=True,     # Active memory capture
    verbose=False,
    namespace="newsletter_writing_style",  # Isolated namespace
)
memory_system.enable()
memory_tool = create_memory_tool(memory_system)

The namespace parameter creates an isolated memory space for writing styles, preventing interference with other data.

Implementation

Document Processing

import pypdf
from docx import Document

def extract_text_from_pdf(pdf_file) -> str:
    """Extract text from PDF file."""
    pdf_reader = pypdf.PdfReader(pdf_file)
    text = ""
    for page in pdf_reader.pages:
        text += page.extract_text() + "\n"
    return text

def extract_text_from_docx(docx_file) -> str:
    """Extract text from DOCX file."""
    doc = Document(docx_file)
    text = ""
    for paragraph in doc.paragraphs:
        text += paragraph.text + "\n"
    return text

def extract_text_from_txt(txt_file) -> str:
    """Extract text from TXT file."""
    return txt_file.read().decode("utf-8")
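The Streamlit app shown later dispatches on the uploaded file's MIME type to choose one of these extractors. That routing can be kept in a small pure function; a minimal sketch (the dispatch-table shape and function name are illustrative choices, not from the original code):

```python
# Map the MIME types reported by Streamlit's file uploader to a format label.
# Unknown types fall back to plain text, matching the app's behavior.
MIME_TO_FORMAT = {
    "application/pdf": "pdf",
    "application/vnd.openxmlformats-officedocument.wordprocessingml.document": "docx",
    "text/plain": "txt",
}

def detect_format(mime_type: str) -> str:
    """Return 'pdf', 'docx', or 'txt' for a given MIME type."""
    return MIME_TO_FORMAT.get(mime_type, "txt")
```

Keeping the mapping in one place means adding a new format later only touches this table and its extractor.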

Writing Style Analysis

from openai import OpenAI
import json

# Initialize Digital Ocean AI client
client = OpenAI(
    base_url=f"{DIGITAL_OCEAN_ENDPOINT}/api/v1/",
    api_key=DIGITAL_OCEAN_AGENT_ACCESS_KEY,
)

def analyze_writing_style(text: str) -> dict:
    """Analyze writing style using Digital Ocean AI."""
    prompt = f"""
    Analyze the following text and extract the author's writing style characteristics. 
    Focus on:
    1. Tone (formal, casual, professional, friendly, etc.)
    2. Writing structure (how paragraphs are organized, transitions, etc.)
    3. Vocabulary level and complexity
    4. Sentence structure patterns
    5. Use of examples, analogies, or storytelling
    6. Overall voice and personality
    
    Text to analyze (first 3000 characters):
    {text[:3000]}
    
    Provide your analysis in JSON format with these keys:
    - tone: string describing the tone
    - structure: string describing paragraph and content structure
    - vocabulary: string describing vocabulary level and style
    - sentence_patterns: string describing sentence structure
    - examples_style: string describing how examples/analogies are used
    - voice: string describing overall voice/personality
    - writing_habits: list of specific writing habits or patterns
    """
    
    response = client.chat.completions.create(
        model="n/a",  # Digital Ocean agent endpoint
        messages=[
            {
                "role": "system",
                "content": "You are an expert writing analyst."
            },
            {"role": "user", "content": prompt}
        ],
        temperature=0.3,
    )
    
    # Extract JSON from response
    analysis_text = response.choices[0].message.content
    if "```json" in analysis_text:
        json_start = analysis_text.find("```json") + 7
        json_end = analysis_text.find("```", json_start)
        json_content = analysis_text[json_start:json_end].strip()
    else:
        json_start = analysis_text.find("{")
        json_end = analysis_text.rfind("}") + 1
        json_content = analysis_text[json_start:json_end]
    
    return json.loads(json_content)
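The fence-stripping logic above can be factored into a reusable helper that also guards against malformed model output instead of raising; a minimal sketch (the helper name and the None fallback are illustrative, not part of the original code):

```python
import json
from typing import Optional

def extract_json_block(text: str) -> Optional[dict]:
    """Pull the first JSON object out of a model reply, fenced or bare."""
    if "```json" in text:
        start = text.find("```json") + len("```json")
        end = text.find("```", start)
        candidate = text[start:end].strip()
    else:
        start = text.find("{")
        end = text.rfind("}") + 1
        candidate = text[start:end]
    try:
        return json.loads(candidate)
    except (json.JSONDecodeError, ValueError):
        return None  # Caller can retry the request or fall back to a default profile
```

Returning None rather than raising lets the UI show a friendly "please retry" message when the model deviates from the requested format.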

Storing Style in Memori

from datetime import datetime, timezone

def store_writing_style_in_memori(
    memory_system,
    style_analysis: dict,
    original_text: str
):
    """Store writing style analysis in Memori as a conversation."""
    # Create a conversation about the writing style
    user_input = f"""
    Hi AI, here is my writing style:
    {style_analysis.get('tone', 'N/A')} tone,
    {style_analysis.get('voice', 'N/A')} voice,
    {style_analysis.get('structure', 'N/A')} structure,
    {style_analysis.get('vocabulary', 'N/A')} vocabulary,
    and {len(style_analysis.get('writing_habits', []))} writing habits.
    """

    ai_response = f"""
    I understand your writing style! You write with a
    {style_analysis.get('tone', 'N/A')} tone and
    {style_analysis.get('voice', 'N/A')} voice.
    Your structure is {style_analysis.get('structure', 'N/A')} and you use
    {style_analysis.get('vocabulary', 'N/A')} vocabulary.
    I'll use this to write content that sounds exactly like you.
    """

    # Record in Memori with a real timestamp rather than a placeholder
    memory_system.record_conversation(
        user_input=user_input,
        ai_output=ai_response,
        model="n/a",
        metadata={
            "type": "writing_style_profile",
            "style_data": style_analysis,
            "text_length": len(original_text),
            "analysis_timestamp": datetime.now(timezone.utc).isoformat(),
        },
    )

    return ai_response

Generating Blog Content

def generate_blog_with_style(memory_tool, topic: str) -> str:
    """Generate blog content using stored writing style."""
    # Retrieve writing style from memory
    writing_style_context = ""
    try:
        context_result = memory_tool.execute(query="writing style")
        if context_result and "No relevant memories found" not in str(context_result):
            writing_style_context = str(context_result)[:300]
    except Exception:
        pass  # If the memory lookup fails, fall back to the generic prompt below
    
    # Create prompt with style context
    if writing_style_context:
        prompt = f"Write a blog post about {topic}. Use this writing style: {writing_style_context}"
    else:
        prompt = f"Write a professional and engaging blog post about {topic}."
    
    response = client.chat.completions.create(
        model="n/a",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
        max_tokens=2000,
    )
    
    return response.choices[0].message.content
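The prompt-assembly step inside generate_blog_with_style can be isolated as a pure function, which makes the fallback behavior easy to test without calling the API; a minimal sketch (the function name is illustrative):

```python
def build_blog_prompt(topic: str, style_context: str = "") -> str:
    """Build the generation prompt, falling back to a generic one when no style is stored."""
    if style_context:
        return f"Write a blog post about {topic}. Use this writing style: {style_context}"
    return f"Write a professional and engaging blog post about {topic}."
```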

Streamlit Application

import streamlit as st

st.title("AI Blog Writing Agent")

# Sidebar: Knowledge Agent for style analysis
with st.sidebar:
    st.header("📚 Knowledge Agent")
    uploaded_file = st.file_uploader(
        "Upload your article (PDF, DOCX, TXT)",
        type=["pdf", "docx", "txt"]
    )
    
    if uploaded_file and st.button("🔍 Analyze Writing Style"):
        # Extract text based on file type
        if uploaded_file.type == "application/pdf":
            text = extract_text_from_pdf(uploaded_file)
        elif uploaded_file.type == "application/vnd.openxmlformats-officedocument.wordprocessingml.document":
            text = extract_text_from_docx(uploaded_file)
        else:
            text = extract_text_from_txt(uploaded_file)
        
        # Analyze style
        with st.spinner("Analyzing your writing style..."):
            style_analysis = analyze_writing_style(text)
            
            # Store in Memori (initialize_memori() wraps the setup shown under Architecture)
            memory_system = initialize_memori()
            store_writing_style_in_memori(memory_system, style_analysis, text)
            
            st.success("✅ Writing style analyzed and stored!")
            st.json(style_analysis)

# Main area: Writing Agent
st.header("✍️ Writing Agent")
topic = st.chat_input("What would you like to write about?")

if topic:
    with st.spinner("Generating blog post..."):
        memory_system = initialize_memori()
        memory_tool = create_memory_tool(memory_system)
        
        # Generate content
        blog_content = generate_blog_with_style(memory_tool, topic)
        st.markdown(blog_content)
        
        # Download button
        st.download_button(
            "Download Blog Post",
            blog_content,
            file_name="blog_post.md",
            mime="text/markdown"
        )

Writing Style Dimensions

The agent analyzes your writing across multiple dimensions:

Tone

Formal, casual, professional, friendly, authoritative, conversational

Voice

Personal, objective, passionate, analytical, empathetic

Structure

Paragraph organization, transitions, flow, logical progression

Vocabulary

Complexity level, technical terms, jargon, word choice patterns

Sentence Patterns

Length, structure, rhythm, variety, complexity

Examples Style

Use of analogies, metaphors, case studies, storytelling
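These dimensions map one-to-one onto the JSON keys requested in the analysis prompt. A dataclass sketch of that profile (field names come from the prompt; the class itself and its from_analysis helper are illustrative, not part of the original code):

```python
from dataclasses import dataclass, field

@dataclass
class StyleProfile:
    """Writing style profile mirroring the keys requested from the model."""
    tone: str = "N/A"
    structure: str = "N/A"
    vocabulary: str = "N/A"
    sentence_patterns: str = "N/A"
    examples_style: str = "N/A"
    voice: str = "N/A"
    writing_habits: list = field(default_factory=list)

    @classmethod
    def from_analysis(cls, analysis: dict) -> "StyleProfile":
        # Ignore unexpected keys so minor model deviations don't crash the app
        known = set(cls.__dataclass_fields__)
        return cls(**{k: v for k, v in analysis.items() if k in known})
```

Typing the profile gives downstream code defaults for missing dimensions instead of repeated .get(..., 'N/A') calls.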

Installation

git clone https://github.com/Arindam200/awesome-ai-apps.git
cd awesome-ai-apps/memory_agents/blog_writing_agent
uv sync

Environment Setup

Create a .env file:

DIGITAL_OCEAN_ENDPOINT=your_digital_ocean_agent_endpoint
DIGITAL_OCEAN_AGENT_ACCESS_KEY=your_digital_ocean_api_key
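The client initialization earlier reads these two values at startup. A minimal sketch of loading them from the environment with clear failure behavior (variable names come from the .env above; the helper itself is illustrative, and the repo may rely on python-dotenv instead):

```python
import os

def load_do_config() -> tuple:
    """Read the Digital Ocean endpoint and key, failing fast if either is missing."""
    endpoint = os.environ.get("DIGITAL_OCEAN_ENDPOINT")
    access_key = os.environ.get("DIGITAL_OCEAN_AGENT_ACCESS_KEY")
    if not endpoint or not access_key:
        raise RuntimeError(
            "Set DIGITAL_OCEAN_ENDPOINT and DIGITAL_OCEAN_AGENT_ACCESS_KEY "
            "in your environment or .env file"
        )
    return endpoint, access_key
```

Failing fast here surfaces a misconfigured deployment immediately instead of as an opaque authentication error on the first API call.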

Running the Application

uv run streamlit run app.py

Workflow

1. Upload Article: Upload a blog post that represents your writing style (PDF, DOCX, or TXT)
2. Analyze Style: Click “🔍 Analyze Writing Style” to extract and store your style profile
3. Generate Content: Use the chat interface to request new blog posts on any topic
4. Review and Download: Review the generated content and download it as a markdown file

Use Cases

Content Creators

Maintain consistent voice across multiple blog posts

Marketing Teams

Generate brand-consistent content at scale

Technical Writers

Create documentation matching existing style guides

Ghostwriters

Match client writing styles for authentic content

Best Practices

1. Use Representative Samples: Upload your best articles that truly represent your writing style
2. Analyze Multiple Pieces: For more accurate style profiles, analyze several articles
3. Refine Over Time: Update your style profile with new content as your writing evolves
4. Review Generated Content: Always review and edit AI-generated content before publishing

Memori Documentation

Official Memori memory system documentation

Digital Ocean AI

Digital Ocean AI platform documentation
