
List All Documents

Path Parameters

proposal_id (string, required): Unique identifier for the proposal

Response Fields

documents (array): Array of document objects
curl -X GET "https://api.igadinnovationhub.org/api/proposals/{proposal_id}/documents/" \
  -H "Authorization: Bearer YOUR_TOKEN"
{
  "documents": [
    {
      "filename": "rfp_2024_health.pdf",
      "size": 2458624,
      "type": "rfp",
      "uploaded_at": "2024-03-15T10:30:00Z",
      "key": "PROP-2024-001/documents/rfp/rfp_2024_health.pdf"
    },
    {
      "filename": "concept.docx",
      "size": 45120,
      "type": "initial_concept",
      "uploaded_at": "2024-03-15T11:00:00Z",
      "key": "PROP-2024-001/documents/initial_concept/concept.docx"
    },
    {
      "filename": "concept_text.txt",
      "size": 1024,
      "type": "initial_concept",
      "uploaded_at": "2024-03-15T11:05:00Z",
      "key": "PROP-2024-001/documents/initial_concept/concept_text.txt"
    }
  ]
}

Document Type Detection

The type field is determined by the S3 key path:
S3 Path                          Document Type
*/documents/rfp/*                "rfp"
*/documents/initial_concept/*    "initial_concept"
Other paths                      "unknown"
This endpoint only lists RFP and concept documents. For a complete list of all document types, use the categorized endpoint below.
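The path-based detection above can be sketched as a small helper. This is an illustrative sketch, not the backend's actual implementation:

```python
def detect_document_type(key: str) -> str:
    """Infer a document's type from its S3 key path."""
    if "/documents/rfp/" in key:
        return "rfp"
    if "/documents/initial_concept/" in key:
        return "initial_concept"
    return "unknown"


# Example: keys from the response above
print(detect_document_type("PROP-2024-001/documents/rfp/rfp_2024_health.pdf"))  # → rfp
```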

Get Documents by Category

Path Parameters

proposal_id (string, required): Unique identifier for the proposal

Response Fields

rfp_documents (array): Array of RFP document filenames
concept_documents (array): Array of concept document filenames
reference_documents (array): Array of reference proposal filenames
supporting_documents (array): Array of supporting document filenames
curl -X GET "https://api.igadinnovationhub.org/api/proposals/{proposal_id}/documents" \
  -H "Authorization: Bearer YOUR_TOKEN"
{
  "rfp_documents": [
    "rfp_2024_health.pdf"
  ],
  "concept_documents": [
    "concept.docx",
    "concept_text.txt"
  ],
  "reference_documents": [
    "usaid_health_2023.pdf",
    "world_bank_education.pdf",
    "unicef_wash_proposal.docx"
  ],
  "supporting_documents": [
    "igad_previous_work.pdf",
    "existing_work_text.txt",
    "capacity_building_report.docx"
  ]
}

S3 Folder Structure

The endpoint lists files from these S3 prefixes:
{proposal_code}/documents/
├── rfp/              → rfp_documents
├── initial_concept/  → concept_documents
├── references/       → reference_documents
└── supporting/       → supporting_documents
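The prefix-to-field mapping above can be expressed as a dictionary. A sketch, assuming the folder layout shown in the diagram (field names match the categorized response):

```python
def category_prefixes(proposal_code: str) -> dict[str, str]:
    """Build the S3 prefix that backs each response field."""
    base = f"{proposal_code}/documents"
    return {
        "rfp_documents": f"{base}/rfp/",
        "concept_documents": f"{base}/initial_concept/",
        "reference_documents": f"{base}/references/",
        "supporting_documents": f"{base}/supporting/",
    }


# Example: prefixes for one proposal
for field, prefix in category_prefixes("PROP-2024-001").items():
    print(f"{field}: {prefix}")
```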

Error Handling

If listing fails for a specific category, the endpoint continues with other categories:
Backend Logic
# folders maps each category to its S3 prefix
# (e.g. {"rfp_documents": "<proposal_code>/documents/rfp/", ...})
for doc_type, prefix in folders.items():
    try:
        response = s3_client.list_objects_v2(Bucket=bucket, Prefix=prefix)
        # ... process the returned object keys
    except Exception as e:
        print(f"Error listing {doc_type}: {e}")
        # Continue with the remaining folders

Filter Documents by Type

Use the categorized response to filter by document type:
import requests

# Get only reference proposals
response = requests.get(
    f"{API_URL}/api/proposals/{proposal_id}/documents",
    headers=headers
)
references = response.json()["reference_documents"]

print(f"Found {len(references)} reference proposals")
for ref in references:
    print(f"  - {ref}")

Document Counts

Get quick counts of documents by category:
import requests

response = requests.get(
    f"{API_URL}/api/proposals/{proposal_id}/documents",
    headers=headers
)
data = response.json()

counts = {
    "RFP": len(data["rfp_documents"]),
    "Concept": len(data["concept_documents"]),
    "References": len(data["reference_documents"]),
    "Supporting": len(data["supporting_documents"]),
}

total = sum(counts.values())
print(f"Total documents: {total}")
for doc_type, count in counts.items():
    print(f"  {doc_type}: {count}")

Check Vectorization Status

Combine document listing with vectorization status:
import requests

# Get document list
docs_response = requests.get(
    f"{API_URL}/api/proposals/{proposal_id}/documents",
    headers=headers
)
docs = docs_response.json()

# Get vectorization status
status_response = requests.get(
    f"{API_URL}/api/proposals/{proposal_id}/documents/vectorization-status",
    headers=headers
)
status_data = status_response.json()

# Combine data
for ref in docs["reference_documents"]:
    file_status = status_data["vectorization_status"].get(ref, {})
    status = file_status.get("status", "unknown")
    print(f"{ref}: {status}")

Error Handling

Common Error Codes

Code  Error                     Solution
404   Proposal not found        Verify the proposal ID and user ownership
500   S3 bucket not configured  Contact an administrator
500   Failed to list documents  Check S3 permissions

Error Response Example

404 Not Found
{
  "detail": "Proposal not found"
}
500 S3 Error
{
  "detail": "Failed to list documents: S3 bucket not configured"
}
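Client code can map these status codes to actionable messages. A minimal sketch (the message wording is illustrative, the status codes and `detail` field come from the examples above):

```python
def describe_error(status_code: int, body: dict) -> str:
    """Translate this endpoint's error responses into actionable messages."""
    if status_code == 404:
        return "Proposal not found: verify the proposal ID and that you own it"
    if status_code == 500:
        detail = body.get("detail", "unknown")
        return f"Server error: {detail} (check S3 configuration or permissions)"
    return "ok"


# Example usage
print(describe_error(404, {"detail": "Proposal not found"}))
```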

Best Practices

1. Caching Document Lists

Cache document lists to reduce S3 API calls:
React Query Caching
import { useQuery } from '@tanstack/react-query'

const useDocuments = (proposalId: string) => {
  return useQuery({
    queryKey: ['documents', proposalId],
    queryFn: async () => {
      const response = await fetch(
        `${API_URL}/api/proposals/${proposalId}/documents`,
        { headers: { 'Authorization': `Bearer ${token}` } }
      )
      return response.json()
    },
    staleTime: 60000, // Treat data as fresh for 1 minute
  })
}

2. Invalidate Cache After Upload/Delete

Cache Invalidation
import { useMutation, useQueryClient } from '@tanstack/react-query'

const queryClient = useQueryClient()

const deleteDocument = useMutation({
  mutationFn: async (filename: string) => {
    const response = await fetch(
      `${API_URL}/api/proposals/${proposalId}/documents/reference/${filename}`,
      { method: 'DELETE', headers: { 'Authorization': `Bearer ${token}` } }
    )
    if (!response.ok) throw new Error('Delete failed')
    return response.json()
  },
  onSuccess: () => {
    // Invalidate document list cache
    queryClient.invalidateQueries({ queryKey: ['documents', proposalId] })
  },
})

3. Display File Sizes

Format bytes for human-readable display:
File Size Formatting
const formatFileSize = (bytes: number): string => {
  if (bytes === 0) return '0 Bytes'
  
  const k = 1024
  const sizes = ['Bytes', 'KB', 'MB', 'GB']
  const i = Math.floor(Math.log(bytes) / Math.log(k))
  
  return `${parseFloat((bytes / Math.pow(k, i)).toFixed(2))} ${sizes[i]}`
}

// Usage
documents.forEach(doc => {
  console.log(`${doc.filename} - ${formatFileSize(doc.size)}`)
})

4. Group by Upload Date

Date Grouping
const groupByDate = (documents: Document[]) => {
  const groups: Record<string, Document[]> = {}
  
  documents.forEach(doc => {
    const date = new Date(doc.uploaded_at).toLocaleDateString()
    if (!groups[date]) groups[date] = []
    groups[date].push(doc)
  })
  
  return groups
}

const grouped = groupByDate(documents)
Object.entries(grouped).forEach(([date, docs]) => {
  console.log(`\n${date}:`)
  docs.forEach(doc => console.log(`  - ${doc.filename}`))
})

5. Check for Missing Documents

Validation
import requests

response = requests.get(
    f"{API_URL}/api/proposals/{proposal_id}/documents",
    headers=headers
)
data = response.json()

# Check required documents
if not data["rfp_documents"]:
    print("⚠️  Warning: No RFP document uploaded")

if not data["reference_documents"] and not data["supporting_documents"]:
    print("⚠️  Warning: No reference materials uploaded")

total_refs = len(data["reference_documents"])
if total_refs < 3:
    print(f"⚠️  Warning: Only {total_refs} reference proposals (recommended: 3+)")

Response Size Considerations

  • Typical response size: < 10KB for most proposals
  • Large proposals: 100+ documents may produce 50KB+ responses
  • Folders are skipped: S3 keys ending with / are filtered out
  • No pagination: All documents returned in a single response
If you need to handle proposals with thousands of documents, consider implementing pagination or server-side filtering.
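Since the endpoint returns everything at once, large lists can still be chunked client-side for display. A minimal sketch (`page_size` is an arbitrary choice, not an API parameter):

```python
def paginate(filenames: list[str], page_size: int = 50):
    """Yield fixed-size pages from a flat filename list."""
    for start in range(0, len(filenames), page_size):
        yield filenames[start:start + page_size]


# Example: page through a large reference list
docs = [f"doc_{i}.pdf" for i in range(120)]
for page_number, page in enumerate(paginate(docs), start=1):
    print(f"Page {page_number}: {len(page)} documents")
```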
