This guide will walk you through setting up VisionaryAI and generating your first AI-powered image using DALL-E 3.

Prerequisites

Before you begin, ensure you have:
  • Node.js 18 or higher installed
  • An OpenAI API account with access to DALL-E 3
  • An Azure account (free tier works for testing)
  • Git installed on your machine
DALL-E 3 API access requires an OpenAI paid account. Free tier accounts do not have access to image generation endpoints.

Step 1: Get your API keys

VisionaryAI requires an OpenAI API key and Azure Storage credentials.
1. Get OpenAI credentials

Navigate to platform.openai.com and sign in to your account.
  • Go to API Keys in your account settings
  • Click Create new secret key and save it securely
  • Note your organization ID from the Settings page
Keep your API key secure and never commit it to version control. You'll add it to your environment variables in Step 3.
2. Set up Azure Storage

Log in to the Azure Portal and create a storage account:
  • Navigate to Storage Accounts and click Create
  • Choose a resource group (create one if needed)
  • Enter a unique storage account name
  • Select your region and performance tier
  • After creation, go to Access Keys and copy your account name and key
Azure’s free tier includes 5GB of blob storage, which is sufficient for testing VisionaryAI with dozens of images.

Step 2: Clone and configure the project

Clone the VisionaryAI repository and set up your environment:
git clone https://github.com/Srijan-D/DALLE3.git
cd DALLE3
Install dependencies for both the Next.js app and Azure Functions:
# Install Next.js dependencies
npm install

# Install Azure Functions dependencies
cd azure
npm install
cd ..
You can use npm, yarn, or pnpm as your package manager. The project supports all three.

Step 3: Configure environment variables

Create environment files for both the Next.js app and Azure Functions. In the project root, create a .env.local file:
# OpenAI Configuration
OPEN_AI_KEY=your_openai_api_key_here
OPEN_AI_ORGANIZATION=your_org_id_here

# Azure Functions Endpoint (use localhost during development)
NEXT_PUBLIC_AZURE_FUNCTIONS_URL=http://localhost:7071/api
Never commit environment files to version control. Both .env.local and local.settings.json should be in your .gitignore file.
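A missing variable usually surfaces later as a confusing API error, so it can help to fail fast at startup. The helper below is a hypothetical sketch (it is not part of the repository); the variable names match the .env.local example above.

```javascript
// Hypothetical helper: throws if any required environment variable is unset.
// Variable names match the .env.local example above.
const REQUIRED_VARS = [
  "OPEN_AI_KEY",
  "OPEN_AI_ORGANIZATION",
  "NEXT_PUBLIC_AZURE_FUNCTIONS_URL",
];

function assertEnv(names) {
  const missing = names.filter((name) => !process.env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(", ")}`);
  }
}
```

Calling assertEnv(REQUIRED_VARS) once at startup turns a silent misconfiguration into an immediate, descriptive error.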

Step 4: Create Azure Blob container

VisionaryAI stores generated images in Azure Blob Storage:
1. Create a container

In the Azure Portal, navigate to your storage account and select Containers from the left menu.
2. Add the images container

Click + Container and name it images. Set the public access level to Private.
3. Verify configuration

The container name must be images as specified in the Azure Functions code:
const containerName = "images";

Step 5: Start the development servers

Run both the Azure Functions and Next.js development servers simultaneously.
1. Start Azure Functions

Open a terminal and navigate to the azure directory:
cd azure
npm run start
The Azure Functions will start on http://localhost:7071. You should see output confirming that all four functions are loaded:
  • generateImage
  • getImages
  • getChatGPTSuggestion
  • generateSASToken
2. Start Next.js

Open a second terminal in the project root:
npm run dev
The Next.js app will start on http://localhost:3000.
If you’re using VS Code, you can use the Azure Functions extension to start functions directly from the editor with built-in debugging support.

Step 6: Generate your first image

Now you’re ready to create AI-generated artwork!
1. Open the application

Navigate to http://localhost:3000 in your browser. You’ll see the VisionaryAI interface with a prompt input area.
2. Get a suggestion

Click New suggestion to have ChatGPT generate a creative prompt for you:
const {
  data: suggestion,
  mutate,
} = useSWR("/api/suggestion", fetchSuggestionFromChatGPT, {
  revalidateOnFocus: false,
});
ChatGPT will provide a detailed prompt including artistic style, genre, and details.
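The fetcher passed to useSWR is not shown above. A plausible implementation looks like the following sketch, assuming the /api/suggestion route returns the suggestion as plain text; the repository's actual code may differ.

```javascript
// Hypothetical fetcher for useSWR; assumes /api/suggestion returns plain text.
const fetchSuggestionFromChatGPT = async (url) => {
  const res = await fetch(url);
  if (!res.ok) {
    throw new Error(`Suggestion request failed: ${res.status}`);
  }
  return res.text();
};
```

With revalidateOnFocus disabled, the suggestion only refreshes when you call mutate(), i.e. when you click New suggestion.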
3. Use the suggestion or create your own

You can either click Use suggestion to generate an image from ChatGPT's prompt, or type your own custom prompt in the text area.
Example prompts:
  • “A serene watercolor painting of a Japanese garden at sunset”
  • “Photo-realistic portrait of a cyberpunk warrior in neon city, 4K”
  • “Abstract oil painting with bold geometric shapes in vibrant colors”
4. Generate the image

Click Generate to start the image creation process:
const submitPrompt = async (useSuggestion?: boolean) => {
  const p = useSuggestion ? suggestion : inputPrompt;

  const res = await fetch("/api/generateImage", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt: p }),
  });

  if (!res.ok) {
    throw new Error(`Image generation failed: ${res.status}`);
  }

  // Refresh the gallery so the new image appears
  updateImages();
};
A toast notification will appear showing the generation progress. DALL-E 3 typically takes 10-30 seconds to generate an image.
5. View your creation

Once generated, the image will automatically appear in the gallery grid. The newest images appear first, sorted by timestamp.
Hover over any image to see the original prompt that was used to generate it.

Understanding the image flow

Here’s what happens when you generate an image:
1. Frontend submission

The PromptInput component sends your prompt to the Next.js API route at /api/generateImage.
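Conceptually, the route just forwards the JSON body to the Azure Function. The helper below is an illustrative sketch of that forwarding step, not the repository's actual code; the injectable fetchImpl parameter exists only so the logic can be exercised without a network.

```javascript
// Illustrative forwarder: POSTs the prompt to the Azure Function endpoint.
// `fetchImpl` is injectable so the logic can be tested without a network.
async function forwardPrompt(baseUrl, prompt, fetchImpl = fetch) {
  const res = await fetchImpl(`${baseUrl}/generateImage`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  if (!res.ok) {
    throw new Error(`generateImage failed: ${res.status}`);
  }
  return res.text();
}
```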
2. Azure Functions processing

The API route forwards the request to the Azure Function generateImage, which:
  • Calls the OpenAI API with DALL-E 3
  • Downloads the generated image from OpenAI’s temporary URL
  • Uploads it to Azure Blob Storage with a timestamped filename
const timestamp = new Date().getTime();
const file_name = `${prompt}_${timestamp}.png`;
await blockBlobClient.uploadData(arrayBuffer);
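Because the timestamp is embedded in the blob filename, the gallery can sort newest-first without any extra metadata. The helper below is an illustrative sketch of that sorting logic, not code from the repository.

```javascript
// Illustrative: extract the trailing `_<timestamp>.png` and sort newest first.
function sortNewestFirst(blobNames) {
  const timestampOf = (name) => {
    const match = name.match(/_(\d+)\.png$/);
    return match ? Number(match[1]) : 0;
  };
  return [...blobNames].sort((a, b) => timestampOf(b) - timestampOf(a));
}
```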
3. Gallery update

SWR automatically refreshes the image gallery, fetching the updated list from Azure Blob Storage.

Next steps

Now that you’ve generated your first image, explore more features:

Installation guide

Deploy VisionaryAI to production with detailed setup instructions

Customize prompts

Modify the ChatGPT suggestion prompt in azure/src/functions/getChatGPTSuggestion.js to generate different styles

Adjust image settings

Change image size or count by modifying the DALL-E 3 parameters in azure/src/functions/generateImage.js

Add features

Extend VisionaryAI with image variations, inpainting, or custom image metadata

Troubleshooting

DALL-E 3 enforces rate limits based on your OpenAI account tier. If you hit a limit, the application handles it gracefully with a toast error notification; wait briefly before retrying.
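If you want the client to retry automatically instead, a simple exponential-backoff wrapper works well. This helper is a hypothetical sketch, not part of the repository:

```javascript
// Hypothetical: retry an async call with exponential backoff (e.g. on HTTP 429).
async function retryWithBackoff(fn, maxAttempts = 3, baseDelayMs = 1000) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt === maxAttempts) throw err;
      // Wait 1x, 2x, 4x... the base delay between attempts.
      const delay = baseDelayMs * 2 ** (attempt - 1);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Wrapping the generateImage call in retryWithBackoff smooths over transient 429 responses without hammering the API.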
If the Next.js app can’t reach Azure Functions:
  • Verify Azure Functions are running on http://localhost:7071
  • Check that local.settings.json has correct environment variables
  • Ensure your firewall isn’t blocking port 7071
  • Look for error messages in the Azure Functions terminal output
If the suggestion feature doesn’t work:
  • Verify your OpenAI API key has access to gpt-3.5-turbo-instruct
  • Check the Azure Functions logs for API errors
  • Try clicking New suggestion again after a few seconds
  • Ensure you’re not hitting OpenAI rate limits
