GemAI supports multiple Google Gemini AI models, each optimized for different use cases. You can select the model that best fits your needs based on speed, capability, and task complexity.
Model ID: gemini-2.0-flash-exp
Number: 0

This is the experimental version of Gemini’s latest 2.0 Flash model, offering cutting-edge capabilities and performance improvements.

Best for:
Testing new AI features
Accessing latest model improvements
Experimental applications
As an experimental model, behavior may change as Google refines the model. Use this for testing new capabilities before they become stable.
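Model identifiers like the one above are typically represented in the app as an enum mapping each entry to its Gemini model ID. A minimal sketch, assuming an AIModel enum with a modelName property (the entry names match those used in the code below; the exact enum body is not shown in this documentation):

// Hypothetical sketch of the AIModel enum referenced throughout this page.
// The modelName strings follow the model IDs listed above.
enum class AIModel(val modelName: String) {
    GEMINI_1_5_FLASH("gemini-1.5-flash"),
    GEMINI_2_0_FLASH_EXP("gemini-2.0-flash-exp")
}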
Models are configured through the UserConfig data class:
UserConfig structure
data class UserConfig(
    val model: AIModel,
    val apiKey: String?,
    val hasApiKey: Boolean
) {
    companion object {
        val DEFAULT = UserConfig(
            model = AIModel.GEMINI_1_5_FLASH,
            apiKey = null,
            hasApiKey = false
        )
    }
}
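Because UserConfig is a data class, a new configuration can be derived from DEFAULT with copy. A short usage sketch (the API key value is a placeholder):

// Start from the default config and switch to the experimental model.
val config = UserConfig.DEFAULT.copy(
    model = AIModel.GEMINI_2_0_FLASH_EXP,
    apiKey = "YOUR_API_KEY", // placeholder, not a real key
    hasApiKey = true
)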
The application loads the selected model through the BaseAIModel class:
BaseAIModel.kt
abstract class BaseAIModel(private val datastoreRepository: DatastoreRepository) {

    protected var modelName: String = AIModel.GEMINI_1_5_FLASH.modelName
        private set

    init {
        coroutineScope.launch {
            val deferredModelName = async {
                datastoreRepository.getModel().modelName
            }
            modelName = deferredModelName.await()
        }
    }

    protected val geminiAIModel: GenerativeModel by lazy { getGenerativeModel() }
}
Model configuration is loaded asynchronously from the datastore repository during initialization. The default model (GEMINI_1_5_FLASH) is used as a fallback until the stored value is loaded; after that, the stored model name is used for subsequent requests.
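A concrete model class would extend BaseAIModel and supply the lazily built GenerativeModel. A hedged sketch, assuming getGenerativeModel() is an abstract factory on the base class (not shown in the snippet above) and that an API key is available; the class and parameter names here are illustrative:

// Hypothetical subclass of BaseAIModel; names are illustrative only.
class ChatModel(
    datastoreRepository: DatastoreRepository,
    private val apiKey: String
) : BaseAIModel(datastoreRepository) {

    // modelName holds the default until the datastore value is loaded;
    // geminiAIModel is then built lazily on first use.
    override fun getGenerativeModel(): GenerativeModel =
        GenerativeModel(modelName = modelName, apiKey = apiKey)
}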
Your selected model is stored in Android DataStore and automatically loaded on app startup:
// Retrieve stored model
val storedModel = datastoreRepository.getModel()

// Save new model selection
datastoreRepository.saveModel(AIModel.GEMINI_2_0_FLASH_EXP)
This ensures your model preference persists across app restarts and maintains consistency throughout your chat sessions.
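The getModel/saveModel calls above could be backed by AndroidX Preferences DataStore. A minimal sketch, assuming a string preference key and a fallback to the default model (the key name and implementation details are assumptions, not the app's actual source):

// Hypothetical DataStore-backed implementation of the repository calls above.
class DatastoreRepositoryImpl(private val dataStore: DataStore<Preferences>) {

    private val modelKey = stringPreferencesKey("selected_model")

    suspend fun getModel(): AIModel {
        val stored = dataStore.data.first()[modelKey]
        // Fall back to the default model if nothing has been saved yet.
        return AIModel.entries.firstOrNull { it.modelName == stored }
            ?: AIModel.GEMINI_1_5_FLASH
    }

    suspend fun saveModel(model: AIModel) {
        dataStore.edit { prefs -> prefs[modelKey] = model.modelName }
    }
}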