In standard LLM applications, the model relies solely on its pre-trained knowledge. An Agent differs because it has access to Tools (like a search engine, calculator, or API) and the reasoning capabilities to decide when to use them.

This guide demonstrates how to build an agent using Python, LangChain, and the Google Gemini API. We will use DuckDuckGo as our tool because it lets the agent search for real-time data (like weather) without requiring a second API key.

We will walk through the entire process: setting up the environment, managing dependencies via requirements.txt, and writing the orchestration code.


Prerequisites

  • Python 3.11+ installed.

  • Google API Key: You can get one for free from Google AI Studio.

Step 1: Set Up the Environment (Optional)

It is best practice to keep your project isolated so dependencies don't conflict with other projects.

  1. Open your terminal in VS Code.

  2. Create a virtual environment:

    Windows: 

    python -m venv venv

    Mac/Linux:

    python3 -m venv venv
  3. Activate the environment:

    Windows: 

    .\venv\Scripts\activate

    Mac/Linux:

    source venv/bin/activate

Step 2: Install Dependencies

We need a few specific libraries to make this work. You could install them one by one using pip install package_name, but it is much cleaner to use a requirements file.

  1. Create a new file named requirements.txt.

  2. Paste the following libraries into it:

    langchain==1.1.3
    langchain-google-genai==4.0.0
    duckduckgo-search
    langchain-community
    python-dotenv
    ddgs
  3. Install everything at once by running:

    pip install -r requirements.txt

What are we installing?

  • langchain: The framework to build the agent.

  • langchain-google-genai: The connector for Gemini models.

  • duckduckgo-search / ddgs: The search engine tool (requires no API key). ddgs is the renamed successor package to duckduckgo-search.

  • langchain-community: Contains the third-party tool integrations.

  • python-dotenv: Loads the Gemini API key from the .env file into environment variables.

Step 3: Secure Your API Key

It is bad practice to hardcode API keys in your Python scripts. We will use a .env file instead.

  1. Create a file named .env in your project folder.

  2. Add your Google API key inside it like this:

GOOGLE_API_KEY=your_actual_api_key_here
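To see why this works, here is a minimal stdlib-only sketch of what python-dotenv's load_dotenv does with that file (assuming simple KEY=value lines; the real library also handles quoting, comments, and export prefixes):

```python
import os

# Contents of a hypothetical .env file
env_text = "GOOGLE_API_KEY=your_actual_api_key_here"

for line in env_text.splitlines():
    key, sep, value = line.partition("=")
    if sep:  # skip malformed lines with no '='
        # load_dotenv's default: do not override variables already set
        os.environ.setdefault(key.strip(), value.strip())

print(os.environ["GOOGLE_API_KEY"])
```

Because the key lands in os.environ, the Gemini client can pick it up without the key ever appearing in your source code.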

Step 4: Write the Agent Code

Create a new file named agent.py. We will build this file in logical blocks to understand how the components interact.

  1. Imports and Environment Setup. First, we load the environment variables to secure the API key, then import the necessary libraries.

    from dotenv import load_dotenv
    load_dotenv()
    from langchain_google_genai import ChatGoogleGenerativeAI
    from langchain_community.tools import DuckDuckGoSearchRun
    from langchain.agents import create_agent
  2. Initialize the Model and Tools. We configure the Gemini model with temperature=0 so it produces factual, deterministic answers rather than creative ones.

    We also initialize the search tool. We wrap DuckDuckGoSearchRun in a standard Python function (web_search). This wrapper provides a clear signature that the Agent uses to understand how to call the tool.

    # Initialize the Model
    llm = ChatGoogleGenerativeAI(model="gemini-2.5-flash", temperature=0)
    # Initialize the Tool
    search_tool = DuckDuckGoSearchRun()
    def web_search(query: str) -> str:
        """Search the web using DuckDuckGo."""
        return search_tool.run(query)
    tools = [
        web_search
    ]
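The wrapper's name, docstring, and type-hinted signature are what the agent framework uses to describe the tool to the model. The sketch below illustrates this with plain introspection (the returned string is a stub; it is not LangChain's actual binding code):

```python
import inspect

def web_search(query: str) -> str:
    """Search the web using DuckDuckGo."""
    # Stubbed here; the real wrapper calls search_tool.run(query)
    return "stub result for: " + query

# The pieces an agent framework can read from the wrapper:
print(web_search.__name__)                             # tool name
print(web_search.__doc__)                              # tool description
print(list(inspect.signature(web_search).parameters))  # argument names
```

This is why the docstring matters: a vague description makes it harder for the model to decide when the tool applies.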
  3. Construct the Agent. We use the create_agent function to bind the LLM, the tools, and the system prompt into a single reasoning engine. The system prompt explicitly instructs the agent to favor the tool for current information.

    agent = create_agent(
        llm,
        tools=tools,
        system_prompt="You are a helpful assistant. Use the web_search tool for up-to-date information."
    )
  4. The Execution Loop. Finally, we implement a while loop to handle user input.

    This block inspects the shape of the final message. If its content is a list (which happens when the model answers after executing a tool step), we extract the text from the first dictionary and print a "Using DuckDuckGo Search" header. This provides visual confirmation that the agent is actively searching the web.

    print("--- Agent Online (type quit/exit to stop) ---")
    while True:
        user_input = input("\nYou: ")
        if user_input.lower() in ["quit", "exit"]:
            break
        else:
            # Invoke the agent with the user's message
            result = agent.invoke({
                "messages": [
                    {"role": "user", "content": user_input}
                ]
            })
            
            # Parse the last message from the agent
            ai_message = result["messages"][-1]
            
            # Check if the content is complex (List/Dict) or simple (String)
            if isinstance(ai_message.content, list):
                if ai_message.content and isinstance(ai_message.content[0], dict) and "text" in ai_message.content[0]:
                    print("-----------------Using DuckDuckGo Search-----------------")
                    print("Agent:", ai_message.content[0]["text"])
                else:
                    print("Agent:", ai_message.content)
            else:
                print("Agent:", ai_message.content)
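You can exercise that branching in isolation with a stand-in message object (FakeMessage is hypothetical, not a LangChain class) to confirm both content shapes are handled:

```python
# Gemini sometimes returns content as a plain string, sometimes as a list
# of {"text": ...} parts after a tool call. This mirrors the checks above.
class FakeMessage:
    def __init__(self, content):
        self.content = content

def render(ai_message):
    """Return the printable text for either content shape."""
    if isinstance(ai_message.content, list):
        first = ai_message.content[0] if ai_message.content else None
        if isinstance(first, dict) and "text" in first:
            return first["text"]        # tool-assisted answer
        return str(ai_message.content)  # unexpected shape: show it raw
    return ai_message.content           # plain string answer

print(render(FakeMessage("An array is a data structure...")))
print(render(FakeMessage([{"text": "Bitcoin is trading at..."}])))
```

Keeping this logic in a small function like render also makes the while loop easier to read if you extend the agent later.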

Step 5: Run the Agent

Open your terminal and execute the script:

python agent.py

Test Case 1: General Knowledge

  • Input: "What is an array in Python?"

  • Result: The agent recognizes this is static knowledge and answers directly without searching.

Test Case 2: Real-Time Information

  • Input: "What is the price of Bitcoin today?"

  • Result: The agent recognizes it lacks current data, triggers the web_search tool, and the console displays the search header before the final answer.

    -----------------Using DuckDuckGo Search-----------------
    Agent: As of today, Bitcoin is trading at approximately $90,000...

You have successfully built the agent. By binding the Gemini LLM to a functional tool, you created a system capable of discerning when to rely on internal training data and when to fetch external information.
