Overview
The AI-BIM App integrates OpenAI’s GPT models to enable natural language queries over IFC (Industry Foundation Classes) building data. This guide walks you through obtaining an API key, understanding costs, and optimizing your usage.
Getting an OpenAI API Key
Step 1: Create an OpenAI Account
- Visit OpenAI Platform
- Sign up for an account or log in
- Navigate to the API Keys page
Step 2: Generate an API Key
- Click “Create new secret key”
- Name your key (e.g., “AI-BIM App”)
- Copy the key immediately; you won’t be able to see it again
- Store it securely
Step 3: Configure Your Application
Add your API key to the .env file:
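For example (the variable name shown here is an assumption — check the project’s own .env.example for the exact name the app expects):

```
# .env — keep this file out of version control
OPENAI_API_KEY=sk-your-secret-key-here
```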
Setting Up Billing
Prepaid Credits
OpenAI operates on a pay-as-you-go model:
- Go to Billing Settings
- Click “Add payment method”
- Add a credit card or set up auto-recharge
- Set a usage limit to control costs
New accounts may receive free credits for testing. Check your dashboard for current promotions.
Setting Usage Limits
Protect against unexpected charges:
- Navigate to Usage Limits
- Set a monthly budget cap
- Configure email alerts at specific thresholds (e.g., 50%, 75%, 100%)
Understanding Token Usage and Costs
What Are Tokens?
Tokens are pieces of text used for API billing:
- 1 token ≈ 4 characters in English
- 1 token ≈ ¾ of a word
- Both input (prompt) and output (response) count toward usage
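The rough ratios above can be turned into a quick estimator. This is a sketch using the ≈4-characters-per-token heuristic, not the exact tokenizer (for exact counts, use a tokenizer library such as tiktoken):

```typescript
// Rough token estimate using the ~4 characters/token heuristic.
// This is an approximation; a real tokenizer gives exact counts.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Estimate total billable input tokens for a single query:
// system prompt + IFC data + user question all count toward input.
function estimateQueryTokens(systemPrompt: string, ifcData: string, question: string): number {
  return estimateTokens(systemPrompt) + estimateTokens(ifcData) + estimateTokens(question);
}
```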
Current Model Pricing
The AI-BIM App uses GPT-3.5 Turbo by default (src/bim-components/ChatGpt/index.ts:74):
| Model | Input Cost | Output Cost |
|---|---|---|
| gpt-3.5-turbo | $0.50 / 1M tokens | $1.50 / 1M tokens |
| gpt-4-turbo | $10.00 / 1M tokens | $30.00 / 1M tokens |
| gpt-4 | $30.00 / 1M tokens | $60.00 / 1M tokens |
Pricing is subject to change. Check OpenAI’s pricing page for current rates.
Cost Estimation for BIM Queries
Typical IFC file queries in the AI-BIM App (file reference: src/bim-components/ChatGpt/index.ts:61-92).
The application sends:
- System prompt (~50 tokens)
- IFC file content (varies, potentially thousands of tokens)
- User question (~10-50 tokens)
Example cost for a query with ~10,000 tokens of IFC data at gpt-3.5-turbo rates:
- Input: ~10,100 tokens ≈ $0.005
- Output: ~100 tokens ≈ $0.00015
- Total per query: ~$0.005
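The arithmetic above can be packaged as a small helper. The rates come from the pricing table earlier in this guide and are subject to change:

```typescript
// Per-million-token prices in USD (from the pricing table above; subject to change).
interface ModelPricing {
  inputPerM: number;
  outputPerM: number;
}

const PRICING: Record<string, ModelPricing> = {
  "gpt-3.5-turbo": { inputPerM: 0.5, outputPerM: 1.5 },
  "gpt-4-turbo": { inputPerM: 10, outputPerM: 30 },
  "gpt-4": { inputPerM: 30, outputPerM: 60 },
};

// Cost of one query in USD, given input and output token counts.
function queryCost(model: string, inputTokens: number, outputTokens: number): number {
  const p = PRICING[model];
  return (inputTokens * p.inputPerM + outputTokens * p.outputPerM) / 1_000_000;
}
```

For the example query above, `queryCost("gpt-3.5-turbo", 10100, 100)` works out to about $0.0052.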
Model Selection
GPT-3.5 Turbo (Default)
Current implementation: src/bim-components/ChatGpt/index.ts:74
- Fast response times
- Cost-effective for high-volume queries
- Good for straightforward BIM data extraction
- Context window: 16,385 tokens
Best for:
- Material queries
- Element counting
- Simple property lookups
- Budget-conscious deployments
GPT-4 and GPT-4 Turbo
For more complex queries, you can modify the model in src/bim-components/ChatGpt/index.ts:74:
- More accurate understanding
- Better reasoning for complex relationships
- Higher cost (20-60x more expensive)
- Context window: 128,000 tokens (gpt-4-turbo)
Best for:
- Complex spatial relationships
- Multi-step reasoning
- Large building models
- Quality-critical applications
API Implementation Details
Current API Call Structure
File reference: src/bim-components/ChatGpt/index.ts:67-86
The application makes direct REST API calls to OpenAI:
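A minimal sketch of such a call is shown below. The endpoint and payload shape follow OpenAI’s Chat Completions API; the helper names (`buildChatRequest`, `askOpenAi`) are illustrative, not the app’s actual code:

```typescript
// Build a Chat Completions request body. Helper names are illustrative;
// see src/bim-components/ChatGpt/index.ts:67-86 for the app's real implementation.
function buildChatRequest(systemPrompt: string, ifcData: string, question: string) {
  return {
    model: "gpt-3.5-turbo",
    messages: [
      { role: "system", content: systemPrompt },
      { role: "user", content: `IFC data:\n${ifcData}\n\nQuestion: ${question}` },
    ],
  };
}

// Send the request directly to OpenAI's REST endpoint.
async function askOpenAi(apiKey: string, body: unknown): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`OpenAI API error: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```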
Data Optimization
The app includes file filtering logic (src/bim-components/ChatGpt/index.ts:37-59) to reduce token usage:
The filtering function is defined but not currently used in queries. Implementing it could significantly reduce costs.
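The sketch below shows one way such pre-filtering could work. It is a stand-alone illustration, not the app’s actual filter logic: it simply keeps lines that mention the requested entity types, which can cut token usage dramatically for targeted queries.

```typescript
// Sketch of pre-filtering IFC file content before sending it to the API.
// This is NOT the app's modifyDataDile() implementation — just an illustration:
// keep only the lines mentioning the entity types relevant to the query.
function filterIfcByEntities(ifcText: string, entityTypes: string[]): string {
  const wanted = entityTypes.map((t) => t.toUpperCase());
  return ifcText
    .split("\n")
    .filter((line) => wanted.some((t) => line.toUpperCase().includes(t)))
    .join("\n");
}
```

For a wall-counting query, for example, only IFCWALL lines would be sent instead of the whole file.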
Best Practices for Prompt Engineering with BIM Data
System Prompt Optimization
The current system prompt instructs the model to:
- Only use provided data
- Answer concisely
- Not fabricate information
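A prompt following those constraints might look like the sketch below. This is illustrative wording, not the app’s actual system prompt (which lives in src/bim-components/ChatGpt/index.ts):

```typescript
// Illustrative system prompt covering the three constraints above.
// The app's real prompt may differ — see src/bim-components/ChatGpt/index.ts.
const systemPrompt =
  "You are a BIM assistant. Answer using only the IFC data provided. " +
  "Be concise. If the answer is not in the data, say so; do not invent values.";
```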
Query Structure Tips
Good queries:
- “How many IFCWALL elements are in the building?”
- “List all materials associated with slabs”
- “What building storeys are defined?”
- “Count beams on level 2”
Queries to avoid:
- Open-ended questions
- Requests for design recommendations
- Queries about data not in IFC files
- Very complex multi-part questions
Token Optimization Strategies
- Filter file content before sending (use the modifyDataDile() method)
- Send only relevant entities for the query type
- Implement response caching for repeated queries
- Use shorter system prompts
- Encourage concise user questions
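Response caching, for instance, can be as simple as an in-memory map keyed on the question and data. This is a minimal sketch (a real implementation might hash the IFC content rather than using its length):

```typescript
// Simple in-memory cache: an identical question against the same data
// returns the stored answer instead of re-calling the API.
const responseCache = new Map<string, string>();

function cacheKey(question: string, ifcData: string): string {
  // Illustrative key; hashing ifcData would be more robust than using its length.
  return `${question}::${ifcData.length}`;
}

function getCached(question: string, ifcData: string): string | undefined {
  return responseCache.get(cacheKey(question, ifcData));
}

function putCached(question: string, ifcData: string, answer: string): void {
  responseCache.set(cacheKey(question, ifcData), answer);
}
```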
Context Window Management
GPT-3.5 Turbo supports up to 16,385 tokens:
- Reserve ~50 tokens for system prompt
- Reserve ~50 tokens for user question
- Reserve ~500 tokens for response
- Available for IFC data: ~15,785 tokens (~63,000 characters)
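The budgeting above can be expressed as a small helper that returns the character budget available for IFC data (using the ~4 characters/token heuristic):

```typescript
// Subtract the fixed reserves from the context window and convert the
// remaining token budget to characters (~4 chars/token heuristic).
function ifcCharBudget(
  contextWindow = 16385, // gpt-3.5-turbo
  systemReserve = 50,
  questionReserve = 50,
  responseReserve = 500,
): number {
  const dataTokens = contextWindow - systemReserve - questionReserve - responseReserve;
  return dataTokens * 4;
}
```

With the defaults this yields 15,785 tokens, or about 63,000 characters of IFC data.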
Monitoring and Debugging
Usage Dashboard
Track your API usage:
- Visit the OpenAI Usage Dashboard
- View token consumption by date
- Analyze cost trends
- Download usage reports
Response Logging
The app logs API responses to the console (src/bim-components/ChatGpt/index.ts:90), which is useful for inspecting token usage and debugging malformed replies.
Error Handling
Common errors:
| Error | Cause | Solution |
|---|---|---|
| 401 Unauthorized | Invalid API key | Check .env configuration |
| 429 Rate limit | Too many requests | Implement rate limiting |
| 400 Bad request | Invalid parameters | Check model name and formatting |
| 500 Server error | OpenAI service issue | Retry with exponential backoff |
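For the 429 and 500 cases, exponential backoff can be implemented with a small wrapper like the sketch below (the injectable sleep function is an illustrative design choice that keeps the retry logic testable):

```typescript
// Retry an async operation with exponential backoff (e.g. for 429/500 errors).
// The sleep function is injectable so the behaviour can be tested without real delays.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 4,
  baseDelayMs = 500,
  sleep: (ms: number) => Promise<void> = (ms) => new Promise((r) => setTimeout(r, ms)),
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      await sleep(baseDelayMs * 2 ** attempt); // 500ms, 1s, 2s, 4s, ...
    }
  }
  throw lastError;
}
```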
Advanced Configuration
Adding Parameters
You can enhance the API call with additional parameters.
Recommended Settings for BIM Queries
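The settings below are a suggested starting point for factual BIM data extraction, not a tuned configuration; `temperature` and `max_tokens` are standard Chat Completions parameters:

```typescript
// Suggested Chat Completions parameters for factual BIM data extraction.
// These values are a starting point, not a tuned configuration.
const bimQuerySettings = {
  model: "gpt-3.5-turbo",
  temperature: 0,   // deterministic, factual answers rather than creative ones
  max_tokens: 500,  // cap the response length (and the output cost)
  top_p: 1,
};
```

A temperature of 0 is a reasonable default here because BIM queries ask for facts present in the data, not open-ended generation.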
Security Considerations
Production Architecture Recommendation
In production, route OpenAI calls through a backend proxy rather than calling the API directly from the browser. This protects against:
- API key exposure
- Unauthorized usage
- Billing abuse
- CORS issues