Quickstart: Get started with Gemini using the REST API with React Native

Greetings, developers! Today, we embark on an exciting journey to integrate the Gemini REST API into your React Native applications. Inspired by Google’s REST API Quickstart, this step-by-step guide aims to give you enhanced testing capabilities and a streamlined development process.

If you want to quickly try out the Gemini API, you can use curl commands to call the methods in the REST API. The examples in this tutorial show calls for each API method.

Google’s original quickstart Colab uses Python code to set an environment variable and to display an image, but you don’t need Colab to work with the REST API. You should be able to run all of the curl examples outside of Colab, without modification, as long as you have API_KEY set as described in the next section.

For each curl command, you must specify the applicable model name and your API key.

Ensure you have the following prerequisites ready before diving into the integration:

  • A React Native project set up
  • Access to Gemini’s REST API (credentials and endpoint)

To use the Gemini API, you’ll need an API key. If you don’t already have one, create a key in Google AI Studio.

In Colab, add the key to the secrets manager under the “🔑” in the left panel. Give it the name API_KEY. You can then add it as an environment variable to pass the key in your curl call.

In a terminal, you can just run export API_KEY="Your API Key" before issuing the curl commands.

 
import os
from google.colab import userdata

# Read the key from Colab's secrets manager and expose it as API_KEY.
os.environ['API_KEY'] = userdata.get('API_KEY')

Use the generateContent method to generate a response from the model given an input message. If the input contains only text, use the gemini-pro model.

 
curl https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent?key=$API_KEY \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "contents": [{
        "parts":[{
          "text": "Write a story about a magic backpack."}]}]}' 2> /dev/null
 
{
  "candidates": [
    {
      "content": {
        "parts": [
          {
            "text": "Once upon a time, in a small town nestled at the foot of towering mountains, there lived a young girl named Lily. Lily was an adventurous and imaginative child, always dreaming of exploring the world beyond her home. One day, while wandering through the attic of her grandmother's house, she stumbled upon a dusty old backpack tucked away in a forgotten corner. Intrigued, Lily opened the backpack and discovered that it was an enchanted one. Little did she know that this magical backpack would change her life forever.\n\nAs Lily touched the backpack, it shimmered with an otherworldly light. She reached inside and pulled out a map that seemed to shift and change before her eyes, revealing hidden paths and distant lands. Curiosity tugged at her heart, and without hesitation, Lily shouldered the backpack and embarked on her first adventure.\n\nWith each step she took, the backpack adjusted to her needs. When the path grew treacherous, the backpack transformed into sturdy hiking boots, providing her with the confidence to navigate rocky terrains. When a sudden rainstorm poured down, the backpack transformed into a cozy shelter, shielding her from the elements.\n\nAs days turned into weeks, Lily's journey took her through lush forests, across treacherous rivers, and to the summits of towering mountains. The backpack became her loyal companion, guiding her along the way, offering comfort, protection, and inspiration.\n\nAmong her many adventures, Lily encountered a lost fawn that she gently carried in the backpack's transformed cradle. She helped a friendly giant navigate a dense fog by using the backpack's built-in compass. And when faced with a raging river, the backpack magically transformed into a sturdy raft, transporting her safely to the other side.\n\nThrough her travels, Lily discovered the true power of the magic backpack. It wasn't just a magical object but a reflection of her own boundless imagination and tenacity. She realized that the world was hers to explore, and the backpack was a tool to help her reach her full potential.\n\nAs Lily returned home, enriched by her adventures and brimming with stories, she decided to share the magic of the backpack with others. She organized a special adventure club, where children could embark on their own extraordinary journeys using the backpack's transformative powers. Together, they explored hidden worlds, learned valuable lessons, and formed lifelong friendships.\n\nAnd so, the legend of the magic backpack lived on, passed down from generation to generation. It became a reminder that even the simplest objects can hold extraordinary power when combined with imagination, courage, and a sprinkle of magic."
          }
        ],
        "role": "model"
      },
      "finishReason": "STOP",
      "index": 0,
      "safetyRatings": [
        {
          "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
          "probability": "NEGLIGIBLE"
        },
        {
          "category": "HARM_CATEGORY_HATE_SPEECH",
          "probability": "NEGLIGIBLE"
        },
        {
          "category": "HARM_CATEGORY_HARASSMENT",
          "probability": "NEGLIGIBLE"
        },
        {
          "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
          "probability": "NEGLIGIBLE"
        }
      ]
    }
  ],
  "promptFeedback": {
    "safetyRatings": [
      {
        "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
        "probability": "NEGLIGIBLE"
      },
      {
        "category": "HARM_CATEGORY_HATE_SPEECH",
        "probability": "NEGLIGIBLE"
      },
      {
        "category": "HARM_CATEGORY_HARASSMENT",
        "probability": "NEGLIGIBLE"
      },
      {
        "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
        "probability": "NEGLIGIBLE"
      }
    ]
  }
}
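
If you’re calling the API from your React Native code rather than a terminal, the same request maps directly onto fetch. The following is a minimal TypeScript sketch, not an official SDK call: GEMINI_API_KEY is a placeholder for however you supply the key in your app (avoid hard-coding a real key in a production client), and the response parsing mirrors the JSON shown above.

const GEMINI_API_KEY = 'YOUR_API_KEY'; // placeholder -- do not ship real keys in a client build
const BASE_URL = 'https://generativelanguage.googleapis.com/v1beta';

// Send a text-only prompt to gemini-pro and return the generated text.
async function generateText(prompt: string): Promise<string> {
  const response = await fetch(
    `${BASE_URL}/models/gemini-pro:generateContent?key=${GEMINI_API_KEY}`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ contents: [{ parts: [{ text: prompt }] }] }),
    },
  );
  const data = await response.json();
  // The generated text lives in the first candidate's first part,
  // exactly as in the JSON response shown above.
  return data.candidates?.[0]?.content?.parts?.[0]?.text ?? '';
}

// Usage:
// generateText('Write a story about a magic backpack.').then(console.log);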

If the input contains both text and image, use the gemini-pro-vision model. The following snippets help you build a request and send it to the REST API.

 
curl -o image.jpg https://storage.googleapis.com/generativeai-downloads/images/scones.jpg
 
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  385k  100  385k    0     0  2053k      0 --:--:-- --:--:-- --:--:-- 2050k
 
import PIL.Image

# Open the downloaded image and resize it to 512 px wide; in Colab the
# resized image is displayed as the cell output.
img = PIL.Image.open("image.jpg")
img.resize((512, int(img.height * 512 / img.width)))

 
echo '{
  "contents":[
    {
      "parts":[
        {"text": "What is this picture?"},
        {
          "inline_data": {
            "mime_type":"image/jpeg",
            "data": "'$(base64 -w0 image.jpg)'"
          }
        }
      ]
    }
  ]
}' > request.json
 
curl https://generativelanguage.googleapis.com/v1beta/models/gemini-pro-vision:generateContent?key=${API_KEY} \
        -H 'Content-Type: application/json' \
        -d @request.json 2> /dev/null | grep "text"
 
"text": " The picture shows a table with a white tablecloth. On the table are two cups of coffee, a bowl of blueberries, and a plate of scones. There are also some flowers on the table."

Using Gemini, you can build freeform conversations across multiple turns.

 
curl https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent?key=$API_KEY \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "contents": [
        {"role":"user",
         "parts":[{
           "text": "Write the first line of a story about a magic backpack."}]},
        {"role": "model",
         "parts":[{
           "text": "In the bustling city of Meadow brook, lived a young girl named Sophie. She was a bright and curious soul with an imaginative mind."}]},
        {"role": "user",
         "parts":[{
           "text": "Can you set it in a quiet village in 1600s France?"}]},
      ]
    }' 2> /dev/null | grep "text"
 
"text": "In the quaint village of Fleur-de-Lys, nestled amidst the rolling hills of 17th century France, lived a young maiden named Antoinette. She possessed a heart brimming with curiosity and a spirit as vibrant as the wildflowers that bloomed in the meadows.\n\nOne sunny morn, as Antoinette strolled through the cobblestone streets, her gaze fell upon a peculiar sight—a weathered leather backpack resting atop a mossy stone bench. Intrigued, she cautiously approached the bag, her fingers tracing the intricate carvings etched into its surface. As her fingertips grazed the worn leather, a surge of warmth coursed through her body, and the backpack began to emit a soft, ethereal glow."

Every prompt you send to the model includes parameter values that control how the model generates a response. The model can generate different results for different parameter values. Learn more about model parameters.

Also, you can use safety settings to adjust the likelihood of getting responses that may be considered harmful. By default, safety settings block content with medium and/or high probability of being unsafe content across all dimensions. Learn more about safety settings.

The following example specifies values for several of the parameters of the generateContent method.

 
curl https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent?key=$API_KEY \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
        "contents": [{
            "parts":[
                {"text": "Write a story about a magic backpack."}
            ]
        }],
        "safetySettings": [
            {
                "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
                "threshold": "BLOCK_ONLY_HIGH"
            }
        ],
        "generationConfig": {
            "stopSequences": [
                "Title"
            ],
            "temperature": 1.0,
            "maxOutputTokens": 800,
            "topP": 0.8,
            "topK": 10
        }
    }'  2> /dev/null | grep "text"
 
"text": "Once upon a time, in a small town nestled at the foot of a majestic mountain range, lived a young girl named Lily. Lily was a bright and curious child who loved to explore the world around her. One day, while playing in the forest near her home, she stumbled upon a hidden cave. Intrigued, she stepped inside, and to her amazement, she discovered a dusty old backpack lying in a corner.\n\nCuriosity piqued, Lily reached out and picked up the backpack. As soon as her fingers brushed against the worn leather, she felt a strange tingling sensation coursing through her body. Suddenly, the backpack began to glow, emitting a soft, ethereal light that filled the cave.\n\nWith wide-eyed wonder, Lily opened the backpack to find it filled with an assortment of magical objects. There was a compass that always pointed to the nearest adventure, a magnifying glass that could reveal hidden secrets, a telescope that allowed her to see distant lands, and a book that contained the knowledge of the universe.\n\nOverjoyed with her discovery, Lily took the magic backpack home and began using its contents to explore the world in ways she had never imagined. She followed the compass to discover hidden treasures, used the magnifying glass to uncover the secrets of nature, gazed through the telescope to witness the wonders of the cosmos, and delved into the book to learn about the mysteries of the universe.\n\nAs Lily's adventures continued, she realized that the magic backpack was more than just a collection of enchanted items. It was a symbol of her own limitless potential and the power of her imagination. It taught her that with curiosity, courage, and a touch of magic, anything was possible.\n\nNews of Lily's magical backpack spread throughout the town, and soon, children from all around came to her, eager to learn about its wonders. Lily welcomed them with open arms, sharing her stories and inspiring them to embark on their own adventures.\n\nAnd so, the magic backpack became a beacon of hope and wonder, reminding everyone that the world is full of hidden treasures waiting to be discovered, if only one has the courage to step into the unknown."

By default, the model returns a response only after completing the entire generation process. You can achieve faster interactions by not waiting for the entire result and instead using streaming to handle partial results.

 
curl https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:streamGenerateContent?key=${API_KEY} \
    -H 'Content-Type: application/json' \
    --no-buffer \
    -d '{ "contents":[{"parts":[{"text": "Write a long story about a magic backpack."}]}]}' \
    2> /dev/null | grep "text"
 
"text": "In the bustling city of Meadow brook, lived a young girl named Sophia with an"
            "text": " insatiable curiosity that knew no bounds. One sunny morning, as she walked to school, she came across a charming little shop tucked away on a side street. Int"
            "text": "rigued, she stepped inside and was immediately captivated by the mystical aura that enveloped the room. Amidst the shelves lined with ancient books and curious trinkets, she stumbled upon a peculiar backpack. Crafted from deep emerald leather and adorned with shimmering crystals, it seemed to pulse with a hidden energy. Unable to resist its allure"
            "text": ", Sophia reached out and touched the backpack, and in that moment, a powerful connection was forged. As if awakening from a long slumber, the backpack emitted a soft glow, revealing its extraordinary properties.\n\nSophia discovered that this was no ordinary backpack. It possessed the magical ability to grant her wishes, but only if they were made with a pure heart and genuine intentions. Overwhelmed with excitement and responsibility, Sophia pondered deeply about how she would use this extraordinary gift. She decided that her first wish would be to make her grandmother, who lived far away, feel close to her. With a heartfelt desire, she whispered her wish to the backpack"

When using long prompts, it might be useful to count tokens before sending any content to the model.

 
curl https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:countTokens?key=$API_KEY \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "contents": [{
        "parts":[{
          "text": "Write a story about a magic backpack."}]}]}' > response.json
 
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   127    0    23  100   104    105    477 --:--:-- --:--:-- --:--:--   585
 
cat response.json
 
{
  "totalTokens": 8
}
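
The countTokens call is easy to wrap in a small helper for your app. A sketch (GEMINI_API_KEY is a placeholder):

const GEMINI_API_KEY = 'YOUR_API_KEY'; // placeholder

// Ask the API how many tokens a prompt will consume before sending it.
async function countTokens(prompt: string): Promise<number> {
  const response = await fetch(
    'https://generativelanguage.googleapis.com/v1beta/models/' +
      `gemini-pro:countTokens?key=${GEMINI_API_KEY}`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ contents: [{ parts: [{ text: prompt }] }] }),
    },
  );
  const data = await response.json();
  return data.totalTokens;
}

// countTokens('Write a story about a magic backpack.').then(console.log); // 8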

Embedding is a technique used to represent information as a list of floating point numbers in an array. With Gemini, you can represent text (words, sentences, and blocks of text) in a vectorized form, making it easier to compare and contrast embeddings. For example, two texts that share a similar subject matter or sentiment should have similar embeddings, which can be identified through mathematical comparison techniques such as cosine similarity.

Use the embedding-001 model with either embedContent or batchEmbedContents:

 
curl https://generativelanguage.googleapis.com/v1beta/models/embedding-001:embedContent?key=$API_KEY \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
        "model": "models/embedding-001",
        "content": {
        "parts":[{
          "text": "Write a story about a magic backpack."}]} }' 2> /dev/null | head
 
{
  "embedding": {
    "values": [
      0.008624583,
      -0.030451821,
      -0.042496547,
      -0.029230341,
      0.05486475,
      0.006694871,
      0.004025645,
 
curl https://generativelanguage.googleapis.com/v1beta/models/embedding-001:batchEmbedContents?key=$API_KEY \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{
      "requests": [{
        "model": "models/embedding-001",
        "content": {
        "parts":[{
          "text": "Write a story about a magic backpack."}]} }]}' 2> /dev/null | head
 
{
  "embeddings": [
    {
      "values": [
        0.008624583,
        -0.030451821,
        -0.042496547,
        -0.029230341,
        0.05486475,
        0.006694871,
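
To make the comparison idea concrete, here is a hedged TypeScript sketch that embeds two texts with embedContent and scores them with cosine similarity. The helper functions are illustrative, not part of the API, and GEMINI_API_KEY is a placeholder.

const GEMINI_API_KEY = 'YOUR_API_KEY'; // placeholder

// Embed a single piece of text with embedding-001 and return the vector.
async function embed(text: string): Promise<number[]> {
  const response = await fetch(
    'https://generativelanguage.googleapis.com/v1beta/models/' +
      `embedding-001:embedContent?key=${GEMINI_API_KEY}`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        model: 'models/embedding-001',
        content: { parts: [{ text }] },
      }),
    },
  );
  const data = await response.json();
  return data.embedding.values;
}

// Cosine similarity: dot(a, b) / (|a| * |b|); values near 1 mean the
// two texts are semantically similar.
function cosineSimilarity(a: number[], b: number[]): number {
  const dot = a.reduce((sum, v, i) => sum + v * b[i], 0);
  const normA = Math.sqrt(a.reduce((sum, v) => sum + v * v, 0));
  const normB = Math.sqrt(b.reduce((sum, v) => sum + v * v, 0));
  return dot / (normA * normB);
}

// Usage:
// const [a, b] = await Promise.all([embed('magic backpack'), embed('enchanted rucksack')]);
// console.log(cosineSimilarity(a, b));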

If you GET a model’s URL, the API uses the get method to return information about that model, such as its version, display name, and input token limit.

 
curl https://generativelanguage.googleapis.com/v1beta/models/gemini-pro?key=$API_KEY
 
{
  "name": "models/gemini-pro",
  "version": "001",
  "displayName": "Gemini Pro",
  "description": "The best model for scaling across a wide range of tasks",
  "inputTokenLimit": 30720,
  "outputTokenLimit": 2048,
  "supportedGenerationMethods": [
    "generateContent",
    "countTokens"
  ],
  "temperature": 0.9,
  "topP": 1,
  "topK": 1
}
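
The same lookup from TypeScript is a plain GET. The ModelInfo interface below is a hypothetical subset of the response fields shown above, not an official type, and GEMINI_API_KEY is a placeholder.

const GEMINI_API_KEY = 'YOUR_API_KEY'; // placeholder

// Hypothetical subset of the fields returned for a model; not an official type.
interface ModelInfo {
  name: string;
  displayName: string;
  inputTokenLimit: number;
  outputTokenLimit: number;
  supportedGenerationMethods: string[];
}

// Plain GET for a single model's metadata.
async function getModel(model: string): Promise<ModelInfo> {
  const response = await fetch(
    `https://generativelanguage.googleapis.com/v1beta/models/${model}?key=${GEMINI_API_KEY}`,
  );
  return response.json();
}

// getModel('gemini-pro').then((m) => console.log(m.inputTokenLimit)); // 30720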

If you GET the models directory, the API uses the list method to return all of the models available through the API, including both the Gemini and PaLM family models.

 
curl https://generativelanguage.googleapis.com/v1beta/models?key=$API_KEY
 
{
  "models": [
    {
      "name": "models/chat-bison-001",
      "version": "001",
      "displayName": "Chat Bison",
      "description": "Chat-optimized generative language model.",
      "inputTokenLimit": 4096,
      "outputTokenLimit": 1024,
      "supportedGenerationMethods": [
        "generateMessage",
        "countMessageTokens"
      ],
      "temperature": 0.25,
      "topP": 0.95,
      "topK": 40
    },
    {
      "name": "models/text-bison-001",
      "version": "001",
      "displayName": "Text Bison",
      "description": "Model targeted for text generation.",
      "inputTokenLimit": 8196,
      "outputTokenLimit": 1024,
      "supportedGenerationMethods": [
        "generateText",
        "countTextTokens",
        "createTunedTextModel"
      ],
      "temperature": 0.7,
      "topP": 0.95,
      "topK": 40
    },
    {
      "name": "models/embedding-gecko-001",
      "version": "001",
      "displayName": "Embedding Gecko",
      "description": "Obtain a distributed representation of a text.",
      "inputTokenLimit": 1024,
      "outputTokenLimit": 1,
      "supportedGenerationMethods": [
        "embedText",
        "countTextTokens"
      ]
    },
    {
      "name": "models/embedding-gecko-002",
      "version": "002",
      "displayName": "Embedding Gecko 002",
      "description": "Obtain a distributed representation of a text.",
      "inputTokenLimit": 2048,
      "outputTokenLimit": 1,
      "supportedGenerationMethods": [
        "embedText",
        "countTextTokens"
      ]
    },
    {
      "name": "models/gemini-pro",
      "version": "001",
      "displayName": "Gemini Pro",
      "description": "The best model for scaling across a wide range of tasks",
      "inputTokenLimit": 30720,
      "outputTokenLimit": 2048,
      "supportedGenerationMethods": [
        "generateContent",
        "countTokens"
      ],
      "temperature": 0.9,
      "topP": 1,
      "topK": 1
    },
    {
      "name": "models/gemini-pro-vision",
      "version": "001",
      "displayName": "Gemini Pro Vision",
      "description": "The best image understanding model to handle a broad range of applications",
      "inputTokenLimit": 12288,
      "outputTokenLimit": 4096,
      "supportedGenerationMethods": [
        "generateContent",
        "countTokens"
      ],
      "temperature": 0.4,
      "topP": 1,
      "topK": 32
    },
    {
      "name": "models/gemini-ultra",
      "version": "001",
      "displayName": "Gemini Ultra",
      "description": "The most capable model for highly complex tasks",
      "inputTokenLimit": 30720,
      "outputTokenLimit": 2048,
      "supportedGenerationMethods": [
        "generateContent",
        "countTokens"
      ],
      "temperature": 0.9,
      "topP": 1,
      "topK": 32
    },
    {
      "name": "models/embedding-001",
      "version": "001",
      "displayName": "Embedding 001",
      "description": "Obtain a distributed representation of a text.",
      "inputTokenLimit": 2048,
      "outputTokenLimit": 1,
      "supportedGenerationMethods": [
        "embedContent",
        "countTextTokens"
      ]
    },
    {
      "name": "models/aqa",
      "version": "001",
      "displayName": "Model that performs Attributed Question Answering.",
      "description": "Model trained to return answers to questions that are grounded in provided sources, along with estimating answerable probability.",
      "inputTokenLimit": 7168,
      "outputTokenLimit": 1024,
      "supportedGenerationMethods": [
        "generateAnswer"
      ],
      "temperature": 0.2,
      "topP": 1,
      "topK": 40
    }
  ]
}
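
A small TypeScript sketch that lists the models and filters them by supported generation method can be handy when you want to pick a model at runtime. As before, this is illustrative only and GEMINI_API_KEY is a placeholder.

const GEMINI_API_KEY = 'YOUR_API_KEY'; // placeholder

// List every available model and keep only those that support generateContent.
async function listGenerativeModels(): Promise<string[]> {
  const response = await fetch(
    `https://generativelanguage.googleapis.com/v1beta/models?key=${GEMINI_API_KEY}`,
  );
  const data = await response.json();
  return data.models
    .filter((m: any) => (m.supportedGenerationMethods ?? []).includes('generateContent'))
    .map((m: any) => m.name as string);
}

// listGenerativeModels().then(console.log);
// e.g. ['models/gemini-pro', 'models/gemini-pro-vision', 'models/gemini-ultra']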