Apr 29, 2024

Guide on How to Use Google Gemini Pro – Can I Run It Locally?

A simple and useful guide on how to use Google Gemini AI models, including the most advanced Gemini 1.5 Pro.

Have you tried
ChatLabs?

40 best AI models

in one place!



What is Gemini AI?

Gemini is a family of AI models developed by Google. It was created by various teams across the company, including Google Research. It's designed to be multimodal, meaning it can work with and blend different kinds of information, like text, code, audio, images, and video. This allows it to understand and handle various types of data smoothly.

It's available in three versions (subject to change):

  • Gemini 1.0 Ultra: This initial version set new standards in various benchmarks, including text processing and coding.

  • Gemini Nano: Tailored for mobile devices, it comes in two versions: Nano-1 (1.8 billion parameters) and Nano-2 (3.25 billion parameters). It is available to Android developers through Android AICore.

  • Gemini 1.5 Pro: The latest mid-sized model from Google that handles tasks involving text, images, videos, audio, and code. It features a long-context window that can process up to 1 million tokens, a significant leap from the 32,000 tokens of Gemini 1.0. It became available to Google Cloud customers on AI Studio and Vertex AI in February 2024.

Gemini 1.5 Pro is particularly notable for its enhanced ability to handle large amounts of data from various sources. It can perform tasks such as extracting book titles from images and videos, summarizing extensive texts, and generating JSON data from code.
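
As a small sketch of that last capability: when you ask Gemini for JSON (using the API calls covered below), the reply often arrives wrapped in a Markdown code fence, so a little post-processing helps before parsing. The reply string here is hard-coded for illustration; with an API key you would read `response.text` instead.

```python
import json

# A typical reply when the model is asked to produce JSON: the payload
# is often wrapped in a Markdown code fence. (Hard-coded sample; in
# practice this would come from response.text.)
reply = '```json\n{"function": "to_markdown", "params": ["text"]}\n```'

def extract_json(text):
    """Strip an optional Markdown code fence, then parse the JSON payload."""
    cleaned = text.strip()
    if cleaned.startswith('```'):
        # Drop the opening fence line and the closing fence.
        cleaned = cleaned.split('\n', 1)[1].rsplit('```', 1)[0]
    return json.loads(cleaned)

data = extract_json(reply)
print(data["function"])  # → to_markdown
```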

Access to Gemini AI

It's not possible to run Gemini models fully offline, because Google has not released their weights for download. What you can do is call them through the Gemini API from code running on your own computer, and setting that up is straightforward. The sections below show how.

Getting the Gemini API

First, you'll need an API key from Google. Go to Google's AI page, open Google AI Studio, and click "Get API key." Choose to create your API key in a new project, and you're ready to go.

Then, open your code editor (VScode, for example, might be a good choice) and set up a Python virtual environment in a new folder:

  1. Create the virtual environment:

python -m venv venv
source venv/bin/activate   # On Windows: venv\Scripts\activate
  2. Now, create a file named requirements.txt and list these packages:

google-generativeai
streamlit
python-dotenv

The google-generativeai package lets you use Google's Gemini models, streamlit helps you build simple web apps, and python-dotenv (imported in code as dotenv) loads your API key from a .env file.

  3. Install the packages:

pip install -r requirements.txt
  4. Paste your API key in a file named .env like this:

GOOGLE_API_KEY="your-API-key"

Now that everything is set up, you can start the fun part – coding!

There are two ways to implement this; let's look at both.

Method 1: Running Gemini with Python Notebook

This approach calls the Gemini API from a Python notebook on your computer.

Run the following steps in a Python notebook:

  1. Import necessary packages:

import textwrap
import google.generativeai as genai
import os
from dotenv import load_dotenv
from IPython.display import display, Markdown


  2. Write a function to display text nicely:

def to_markdown(text):
    text = text.replace('•', '  *')
    return Markdown(textwrap.indent(text, '> ', predicate=lambda _: True))


  3. Load your API key:

load_dotenv()
GOOGLE_API_KEY = os.getenv('GOOGLE_API_KEY')
genai.configure(api_key=GOOGLE_API_KEY)


  4. Check the available Gemini models:

for model in genai.list_models():
    if 'generateContent' in model.supported_generation_methods:
        print(model.name)


  5. Select a model you want to use:

# Selecting a model 
model = genai.GenerativeModel('gemini-pro')


  6. Provide a prompt to the model to get the response:

# Write any prompt of your choice
response = model.generate_content("YOUR PROMPT")
to_markdown(response.text)


  7. To see the response in chunks rather than all at once (for true streaming, pass stream=True to generate_content):

for chunk in response:
    print(chunk.text)
    print("_" * 80)


Method 2: Run Gemini Using Streamlit

First, create a Python file named gemini.py:

from dotenv import load_dotenv
import os

import google.generativeai as genai
import streamlit as st

# Load the API key from .env and configure the SDK
load_dotenv()
GOOGLE_API_KEY = os.getenv('GOOGLE_API_KEY')
genai.configure(api_key=GOOGLE_API_KEY)

model = genai.GenerativeModel('gemini-pro')

def generate_content(prompt):
    response = model.generate_content(prompt)
    return response.text

# Minimal Streamlit UI: a text box and a Generate button
st.title('Gemini AI Text Generator')
prompt = st.text_input('Enter a prompt:')
if st.button('Generate'):
    response = generate_content(prompt)
    st.write(response)

In this Streamlit application, you first retrieve the API key from a .env file and load it into the GOOGLE_API_KEY variable.

Then, you create a simple function called generate_content that takes a user's input (prompt), passes it to the model, and then returns the text response.

In the Streamlit interface, there's a text input box provided by st.text_input and a 'Generate' button. When you press the button, it calls the generate_content function and displays the response text below the input field.

Start the app with streamlit run gemini.py; your browser will open a page with the text box and Generate button.


Use Gemini 1.5 Pro with ChatLabs

We are happy to let you know that you can easily start using Google Gemini Models, including the latest Gemini 1.5 Pro, with our ChatLabs AI tool.

ChatLabs is an advanced, easy-to-use platform that supports more than 30 of the most popular LLMs, including GPT-4, Gemini 1.5, Llama 3, Mistral 8x22B, Claude Opus, and many others.

To use all the best AI models in one place, just go to https://labs.writingmate.ai and sign up.


Useful Links and Related Articles

Google Developers Blog Post about Gemini 1.5
Google AI SDK for Android
How to Run Mistral 8x22B Locally on Your Computer
How to Run Llama 3 Locally on Your PC
Guide to Running Llama 2 Locally
How to Run OpenELM by Apple
