How to Build AI Agents Using "Tool Use"?


Introduction

Before talking about AI agents, it is important to understand the life cycle of an advanced language model like GPT. A large language model such as GPT begins its life cycle with pretraining, when it learns from an enormous corpus of textual data to establish a basic grasp of the language. The next step is supervised fine-tuning, when the model is improved for specific tasks by using curated datasets to refine it. By using positive reinforcement to optimize the model's behavior, reward modeling enhances performance in general and decision-making in particular. Finally, the model can learn and adjust dynamically through interactions thanks to reinforcement learning, honing its skills to perform various tasks more accurately and adaptively. In this article, we will also learn how to build AI agents using "Tool Use."

Overview

  • Language models like GPT are developed through pretraining, supervised fine-tuning, reward modeling, and reinforcement learning.
  • Each phase involves specific datasets, algorithms, model adjustments, and evaluations to enhance the model's capabilities.
  • Static models struggle to provide real-time information, requiring regular fine-tuning, which is resource-intensive and often impractical.
  • Build AI agents using "Tool Use" in an agentic workflow.
  • AI agents with access to external tools can gather real-time data, execute tasks, and maintain context, improving accuracy and responsiveness.

GPT Assistant Training Pipeline

Each phase of the model's development (pretraining, supervised fine-tuning, reward modeling, and reinforcement learning) progresses through four critical components: Dataset, Algorithm, Model, and Evaluation.

Pretraining Phase

In the initial pretraining phase, the model ingests vast quantities of raw internet data, totaling trillions of words. While the data's quality may vary, its sheer volume is substantial but still falls short of satisfying the model's hunger for more. This phase demands significant hardware resources, including GPUs, and months of intensive training. The process begins with initializing weights from scratch and updating them as learning progresses. Algorithms like language modeling predict the next token, forming the basis of the model's early stages.


Supervised Fine-Tuning Phase

Moving to supervised fine-tuning, the focus shifts to task-specific labeled datasets, where the model refines its parameters to predict accurate labels for each input. Here, the quality of the datasets is paramount, leading to a reduction in quantity. Algorithms tailor training for tasks such as token prediction, culminating in a Supervised Fine-Tuning (SFT) model. This phase requires fewer GPUs and less time than pretraining due to the improved dataset quality.

Reward Modeling Phase

Reward modeling follows, employing algorithms like binary classification to improve model performance based on positive reinforcement signals. The resulting Reward Modeling (RM) model undergoes further enhancement through human feedback or evaluation.

Reinforcement Learning Phase

Reinforcement learning optimizes the model's responses through iterative interactions with its environment, ensuring adaptability to new information and prompts. However, integrating real-world data to keep the model updated remains a challenge.

The Challenge of Real-Time Data

Addressing this challenge involves bridging the gap between trained data and real-world information. It requires strategies to continuously update and integrate new data into the model's knowledge base, ensuring it can respond accurately to the latest queries and prompts.

However, a critical question arises: while we have trained our LLM on the data provided, how do we equip it to access and respond to real-world information, especially to handle the latest queries and prompts?

For instance, when we tested ChatGPT 3.5 with specific questions, the model struggled to provide responses grounded in real-world data.


Fine-Tune the Model

One approach is to fine-tune the model, perhaps on a regular schedule. However, due to resource limitations, the viability of this technique is questionable. Regular fine-tuning comes with several difficulties:

  1. Insufficient Data: A lack of new data frequently makes it impossible to justify numerous fine-tuning sessions.
  2. High Computational Requirements: Fine-tuning usually requires significant processing power, which may not be feasible for routine tasks.
  3. Time Intensiveness: Retraining the model can take a long time, which is a major obstacle.

In light of these difficulties, it is clear that adding new data to the model requires overcoming several obstacles and is not a simple operation.

So Here Come AI Agents

Here, we present AI agents, essentially LLMs with built-in access to external tools. These agents can gather and process information, carry out tasks, and keep track of past interactions in their working memory. Although familiar LLM-based systems are capable of running programs and conducting web searches, AI agents go one step further:

  • External Tool Use: AI agents can interface with and utilize external tools.
  • Data Gathering and Manipulation: They can gather and process data to help them with their tasks.
  • Task Planning: They can plan and carry out tasks delegated to them.
  • Working Memory: They keep details from earlier exchanges, which improves dialogue flow and context.
  • Capability Enhancements: These enhancements expand the range of what LLMs can accomplish, going beyond basic question answering to actively manipulating and leveraging external resources.

Using AI Agents for Real-Time Information Retrieval

If prompted with "What's the current temperature and weather in Delhi, India?" an online LLM-based chat system might initiate a web search to gather relevant information. Early on, developers of LLMs recognized that relying solely on pretrained transformers to generate output is limiting. By integrating a web search tool, LLMs can perform more comprehensive tasks. In this scenario, the LLM could be fine-tuned or prompted (possibly with few-shot learning) to generate a specific command like {tool: web-search, query: "current temperature and weather in Delhi, India"} to initiate a search engine query.

A subsequent step identifies such commands, triggers the web search function with the appropriate parameters, retrieves the weather information, and integrates it back into the LLM's input context for further processing.
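The steps above can be sketched as follows; the web_search function, the TOOLS registry, and the command format here are illustrative assumptions rather than any specific API:

```python
def web_search(query: str) -> str:
    # Placeholder: a real implementation would call a search engine API here.
    return f"Top results for: {query}"

# Map tool names that the model may emit to the functions implementing them.
TOOLS = {"web-search": web_search}

def dispatch(command: dict) -> str:
    """Route a model-emitted command such as
    {"tool": "web-search", "query": "..."} to the matching tool."""
    tool = TOOLS.get(command.get("tool"))
    if tool is None:
        raise ValueError(f"Unknown tool: {command.get('tool')}")
    return tool(command["query"])

# The tool's output would then be appended to the LLM's input context.
result = dispatch({
    "tool": "web-search",
    "query": "current temperature and weather in Delhi, India",
})
```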

Handling Complex Queries with Computational Tools

If you pose a question such as, "If a product-based company sells an item at a 20% loss, what would be the final profit or loss?" an LLM equipped with a code execution tool could handle this by executing a Python command to compute the result accurately. For instance, it might generate a command like {tool: python-interpreter, code: "cost_price * (1 - 0.20)"}, where cost_price represents the initial cost of the item. This approach ensures that the LLM leverages computational tools effectively to produce the correct profit or loss calculation, rather than attempting to generate the answer directly through its language processing capabilities, which might not yield accurate results. Beyond that, with the help of external tools, users can even book a ticket, which involves planning an execution, i.e., task planning in an agentic workflow.
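A minimal sketch of how such a command might be executed on the application side, assuming a hypothetical python-interpreter handler (eval with an emptied builtins namespace is used purely for illustration; a real system would need proper sandboxing):

```python
def run_python_tool(code: str, variables: dict) -> float:
    # Evaluate the model-generated expression with only the supplied
    # variables in scope and no builtins exposed.
    return eval(code, {"__builtins__": {}}, dict(variables))

# A command of the shape described above, with an assumed cost price of 100.
command = {"tool": "python-interpreter", "code": "cost_price * (1 - 0.20)"}
selling_price = run_python_tool(command["code"], {"cost_price": 100.0})
print(selling_price)  # selling at a 20% loss on a cost price of 100
```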

So, AI agents can help ChatGPT with the problem of not having any information about the latest events in the real world. We can provide access to the internet, where it can run a Google search and retrieve the top matches. So here, in this case, the tool is internet search.

When the AI identifies the need for current weather information in responding to a user's query, it includes a list of available tools in its API request, indicating its access to such functions. Upon recognizing the need to use get_current_weather, it generates a specific function call with a designated location, such as "London," as the parameter. Subsequently, the system executes this function call, fetching the latest weather details for London. The retrieved weather data is then seamlessly integrated into the AI's response, improving the accuracy and relevance of the information provided to the user.

Now, let's implement Tool Use to understand the agentic workflow!

We are going to use an AI agent with a tool to get information on the current weather. As we saw in the example above, the model cannot answer real-world questions using the latest data on its own.

So, let's begin with the implementation.

Installing Dependencies and Libraries

Let's install the dependencies first:

langchain
langchain-community>=0.0.36
langchainhub>=0.1.15
llama_cpp_python  # please install the correct build for your hardware and OS
pandas
loguru
googlesearch-python
transformers
openai

Importing Libraries

Now, we'll import the libraries:

from openai import OpenAI
import json
from rich import print


import dotenv
dotenv.load_dotenv()

Keep your OpenAI API key in an env file, or you can put the key in a variable:

OPENAI_API_KEY = "your_open_api_key"

client = OpenAI(api_key=OPENAI_API_KEY)

Interact with the GPT model using code rather than the interface:

messages = [{"role": "user", "content": "What's the weather like in London?"}]
response = client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
)
print(response)

This code sets up a simple interaction with an AI model, asking about the weather in London. The API processes this request and returns a response, which you need to parse to get the actual answer.

It's worth noting that this code doesn't fetch real-time weather data. Instead, it asks an AI model to generate a response based on its training data, which may not reflect the current weather in London.


In this case, the AI acknowledged it couldn't provide real-time information and suggested checking a weather website or app for the current London weather.

This structure allows easy parsing and extraction of relevant information from the API response. The additional metadata (like token usage) can be useful for monitoring and optimizing API usage.
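For illustration, here is how those fields might be extracted; the object below is a stand-in that mirrors the attribute layout the OpenAI SDK returns (choices[0].message.content and a usage block), with made-up values:

```python
from types import SimpleNamespace

# Stand-in for a ChatCompletion response with the same attribute layout.
response = SimpleNamespace(
    choices=[SimpleNamespace(message=SimpleNamespace(
        content="I can't provide real-time weather data."
    ))],
    usage=SimpleNamespace(prompt_tokens=14, completion_tokens=9, total_tokens=23),
)

# The answer text lives on the first choice's message.
answer = response.choices[0].message.content
# Token counts are useful for monitoring and cost tracking.
tokens_used = response.usage.total_tokens
print(answer)
print(f"tokens used: {tokens_used}")
```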

Defining the Function

Now, let's define a function for getting weather information and set up the structure for using it as a tool in an AI conversation:

def get_current_weather(location):
    """Get the current weather in a given city"""
    if "london" in location.lower():
        return json.dumps({"temperature": "20 C"})
    elif "san francisco" in location.lower():
        return json.dumps({"temperature": "15 C"})
    elif "paris" in location.lower():
        return json.dumps({"temperature": "22 C"})
    else:
        return json.dumps({"temperature": "unknown"})

messages = [{"role": "user", "content": "What's the weather like in London?"}]
tools = [
   {
       "type": "function",
       "function": {
           "name": "get_current_weather",
           "description": "Get the current weather in a given location",
           "parameters": {
               "type": "object",
               "properties": {
                   "location": {
                       "type": "string",
                       "description": "The city and state, e.g. San Francisco",
                   },
               },
               "required": ["location"],
           },
       },
   }
]

Code Explanation

This code snippet defines a function for getting weather information and sets up the structure for using it as a tool in an AI conversation. Let's break it down:

  • get_current_weather function:
    • Takes a location parameter.
    • Returns simulated weather data for London, San Francisco, and Paris.
    • For any other location, it returns "unknown".
    • The weather data is returned as a JSON string.
  • messages list:
    • Contains a single message from the user asking about the weather in London.
    • This is the same as in the previous example.
  • tools list:
    • Defines a single tool (function) that the AI can use.
    • The tool is of type "function".
    • It describes the get_current_weather function:
      • name: The name of the function to be called.
      • description: A brief description of what the function does.
      • parameters: Describes the expected input for the function:
        • It expects an object with a location property.
        • location should be a string describing a city.
        • The location parameter is required.
Now, request a chat completion with the tools included:

response = client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    tools=tools,
)
print(response)

Also read: Agentic AI Demystified: The Ultimate Guide to Autonomous Agents

Here, we use three external scripts, named llms, tools, and tool_executor, which act as helper modules.

from llms import OpenAIChatCompletion
from tools import get_current_weather
from tool_executor import need_tool_use

Before going further with the code flow, let's understand the scripts.

llms.py script

It manages interactions with OpenAI's chat completion API, enabling the use of external tools within the chat context:

from typing import List, Optional, Any, Dict

import logging
from agents.specs import ChatCompletion
from agents.tool_executor import ToolRegistry
from langchain_core.tools import StructuredTool
from llama_cpp import ChatCompletionRequestMessage
from openai import OpenAI

logger = logging.getLogger(__name__)

class OpenAIChatCompletion:
    def __init__(self, model: str = "gpt-4o"):
        self.model = model
        self.client = OpenAI()
        self.tool_registry = ToolRegistry()

    def bind_tools(self, tools: Optional[List[StructuredTool]] = None):
        for tool in tools:
            self.tool_registry.register_tool(tool)

    def chat_completion(
        self, messages: List[ChatCompletionRequestMessage], **kwargs
    ) -> ChatCompletion:
        tools = self.tool_registry.openai_tools
        output = self.client.chat.completions.create(
            model=self.model, messages=messages, tools=tools
        )
        logger.debug(output)
        return output

    def run_tools(self, chat_completion: ChatCompletion) -> List[Dict[str, Any]]:
        return self.tool_registry.call_tools(chat_completion)
This code defines a class OpenAIChatCompletion that encapsulates the functionality for interacting with OpenAI's chat completion API and managing tools. Let's break it down:

Imports

Various typing annotations and necessary modules are imported.

Class Definition

class OpenAIChatCompletion:

This class serves as a wrapper for OpenAI's chat completion functionality.

Constructor

def __init__(self, model: str = "gpt-4o"):

Initializes the class with a specified model (default is "gpt-4o"). Creates an OpenAI client and a ToolRegistry instance.

bind_tools method

def bind_tools(self, tools: Optional[List[StructuredTool]] = None):

Registers the provided tools with the ToolRegistry. This allows the chat completion to use these tools when needed.

chat_completion method

def chat_completion(self, messages: List[ChatCompletionRequestMessage], **kwargs) -> ChatCompletion:

Sends a request to the OpenAI API for chat completion. Includes the registered tools in the request. Returns the API response as a ChatCompletion object.

run_tools method

def run_tools(self, chat_completion: ChatCompletion) -> List[Dict[str, Any]]:

Executes the tools called in the chat completion response. Returns the results of the tool executions.

tools.py

It defines individual tools or functions, such as fetching real-time weather data, that can be utilized by the AI to perform specific tasks.

import json
import requests
from langchain.tools import tool
from loguru import logger

@tool
def get_current_weather(city: str) -> str:
    """Get the current weather for a given city.

    Args:
      city (str): The city to fetch weather for.

    Returns:
      str: current weather condition, or an error message if an error occurs.
    """
    try:
        data = json.dumps(
            requests.get(f"https://wttr.in/{city}?format=j1")
            .json()
            .get("current_condition")[0]
        )
        return data
    except Exception as e:
        logger.exception(e)
        error_message = f"Error fetching current weather for {city}: {e}"
        return error_message

This code defines a tool that can be used in an AI system, likely in conjunction with the OpenAIChatCompletion class we discussed earlier. Let's break it down:

get_current_weather:

  • Fetches real-time weather data for a given city using the wttr.in API.
  • Returns the weather data as a JSON string.
  • Includes error handling and logging.

tool_executor.py

It handles the execution and management of tools, ensuring they are called and integrated correctly within the AI's response workflow.

import json
from typing import Any, List, Union, Dict

from langchain_community.tools import StructuredTool

from langchain_core.utils.function_calling import convert_to_openai_function
from loguru import logger
from agents.specs import ChatCompletion, ToolCall

class ToolRegistry:
    def __init__(self, tool_format="openai"):
        self.tool_format = tool_format
        self._tools: Dict[str, StructuredTool] = {}
        self._formatted_tools: Dict[str, Any] = {}

    def register_tool(self, tool: StructuredTool):
        self._tools[tool.name] = tool
        self._formatted_tools[tool.name] = convert_to_openai_function(tool)

    def get(self, name: str) -> StructuredTool:
        return self._tools.get(name)

    def __getitem__(self, name: str):
        return self._tools[name]

    def pop(self, name: str) -> StructuredTool:
        return self._tools.pop(name)

    @property
    def openai_tools(self) -> List[Dict[str, Any]]:
        # [{"type": "function", "function": registry.openai_tools[0]}],
        result = []
        for oai_tool in self._formatted_tools.values():
            result.append({"type": "function", "function": oai_tool})

        return result if result else None

    def call_tool(self, tool: ToolCall) -> Any:
        """Call a single tool and return the result."""
        function_name = tool.function.name
        function_to_call = self.get(function_name)

        if not function_to_call:
            raise ValueError(f"No function was found for {function_name}")

        function_args = json.loads(tool.function.arguments)
        logger.debug(f"Function {function_name} invoked with {function_args}")
        function_response = function_to_call.invoke(function_args)
        logger.debug(f"Function {function_name} responded with {function_response}")
        return function_response

    def call_tools(self, output: Union[ChatCompletion, Dict]) -> List[Dict[str, str]]:
        """Call all tools from the ChatCompletion output and return the
        result."""
        if isinstance(output, dict):
            output = ChatCompletion(**output)

        if not need_tool_use(output):
            raise ValueError(f"No tool call was found in ChatCompletion\n{output}")

        messages = []
        # https://platform.openai.com/docs/guides/function-calling
        tool_calls = output.choices[0].message.tool_calls
        for tool in tool_calls:
            function_name = tool.function.name
            function_response = self.call_tool(tool)
            messages.append({
                "tool_call_id": tool.id,
                "role": "tool",
                "name": function_name,
                "content": function_response,
            })
        return messages

def need_tool_use(output: ChatCompletion) -> bool:
    tool_calls = output.choices[0].message.tool_calls
    if tool_calls:
        return True
    return False

def check_function_signature(
    output: ChatCompletion, tool_registry: ToolRegistry = None
):
    tools = output.choices[0].message.tool_calls
    invalid = False
    for tool in tools:
        tool: ToolCall
        if tool.type == "function":
            function_info = tool.function
            if tool_registry:
                if tool_registry.get(function_info.name) is None:
                    logger.error(f"Function {function_info.name} is not available")
                    invalid = True

            arguments = function_info.arguments
            try:
                json.loads(arguments)
            except json.JSONDecodeError as e:
                logger.exception(e)
                invalid = True
        if invalid:
            return False

    return True

Code Explanation

This code defines a ToolRegistry class and associated helper functions for managing and executing tools in an AI system. Let's break it down:

  • ToolRegistry class:
    • Manages a collection of tools, storing them in both their original form and an OpenAI-compatible format.
    • Provides methods to register, retrieve, and execute tools.
  • Key methods:
    • register_tool: Adds a new tool to the registry.
    • openai_tools: Property that returns the tools in OpenAI's function format.
    • call_tool: Executes a single tool.
    • call_tools: Executes multiple tools from a ChatCompletion output.
  • Helper functions:
    • need_tool_use: Checks whether a ChatCompletion output requires tool usage.
    • check_function_signature: Validates function calls against the available tools.

This ToolRegistry class is a central component for managing and executing tools in an AI system. It allows for:

  • Easy registration of new tools
  • Conversion of tools to OpenAI's function calling format
  • Execution of tools based on AI model outputs
  • Validation of tool calls and signatures

The design allows seamless integration with AI models supporting function calling, like those from OpenAI. It provides a structured way to extend an AI system's capabilities by allowing it to interact with external tools and data sources.

The helper functions need_tool_use and check_function_signature provide additional utility for working with ChatCompletion outputs and validating tool usage.
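To see what need_tool_use checks for, here is a quick, self-contained sketch with mocked response objects (the attribute layout mirrors a ChatCompletion; the values are made up):

```python
from types import SimpleNamespace

def need_tool_use(output) -> bool:
    # Same logic as in tool_executor.py: a response needs tool execution
    # if the model emitted any tool_calls.
    tool_calls = output.choices[0].message.tool_calls
    return bool(tool_calls)

with_tools = SimpleNamespace(choices=[SimpleNamespace(
    message=SimpleNamespace(tool_calls=[{"id": "call_1"}]))])
without_tools = SimpleNamespace(choices=[SimpleNamespace(
    message=SimpleNamespace(tool_calls=None))])

print(need_tool_use(with_tools), need_tool_use(without_tools))  # True False
```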

This code forms a crucial part of a larger system for building AI agents capable of using external tools and APIs to extend their capabilities beyond simple text generation.

These were the external scripts and other helper functions required to include external tools and functionality and leverage the full AI capabilities.

Also read: How Autonomous AI Agents Are Shaping Our Future

Now, an instance of OpenAIChatCompletion is created, and the get_current_weather tool is bound to it. A message list is created with a user query about London's weather, and a chat completion is requested using this setup.

llm = OpenAIChatCompletion()
llm.bind_tools([get_current_weather])

messages = [
   {"role": "user", "content": "how is the weather in London today?"}
]

output = llm.chat_completion(messages)
print(output)
  • The AI understood that to answer the question about London's weather, it needed to use the get_current_weather function.
  • Instead of providing a direct answer, it requests that this function be called with "London" as the argument.
  • In a complete system, the next step would be to execute the get_current_weather function with this argument, get the result, and then potentially interact with the AI again to formulate a final response based on the weather data.

This demonstrates how the AI can intelligently decide to use available tools to gather information before providing an answer, making its responses more accurate and up to date.

if need_tool_use(output):
    print("Using weather tool")
    tool_results = llm.run_tools(output)
    print(tool_results)
    tool_results[0]["role"] = "assistant"

    updated_messages = messages + tool_results
    updated_messages = updated_messages + [
        {"role": "user", "content": "Think step by step and answer my question based on the above context."}
    ]
    output = llm.chat_completion(updated_messages)

print(output.choices[0].message.content)

This code:

  • Checks whether tools need to be used based on the AI's output.
  • Runs the tool (get_current_weather) and prints the result.
  • Changes the role of the tool result to "assistant."
  • Creates an updated message list with the original message, the tool results, and a new user prompt.
  • Sends this updated message list for another chat completion.
  • The AI initially recognized it needed weather data to answer the question.
  • The code executed the weather tool to get this data.
  • The weather data was added to the context of the conversation.
  • The AI was then prompted to answer the original question using this new information.
  • The final response is a comprehensive breakdown of London's weather, directly answering the original question with specific, up-to-date information.

Conclusion

This implementation represents a significant step toward creating more capable, context-aware AI systems. By bridging the gap between large language models and external tools and data sources, we can create AI assistants that not only understand and generate human-like text but also interact meaningfully with the real world.

Frequently Asked Questions

Q1. What exactly is an AI agent with dynamic tool use?

Ans. An AI agent with dynamic tool use is an advanced artificial intelligence system that can autonomously select and utilize various external tools or functions to gather information, perform tasks, and solve problems. Unlike traditional chatbots or AI models that are limited to their pretrained knowledge, these agents can interact with external data sources and APIs in real time, allowing them to provide up-to-date and contextually relevant responses.

Q2. How does dynamic tool use differ from regular AI models?

Ans. Regular AI models typically rely solely on their pretrained knowledge to generate responses. In contrast, AI agents with dynamic tool use can recognize when they need additional information, select appropriate tools to gather that information (like weather APIs, search engines, or databases), use those tools, and then incorporate the new data into their reasoning process. This allows them to handle a much wider range of tasks and provide more accurate, current information.

Q3. What are the potential applications of building AI agents with tool use?

Ans. The applications of building AI agents are vast and varied. Some examples include:
- Personal assistants that can schedule appointments, check real-time information, and perform complex research tasks.
- Customer service bots that can access user accounts, process orders, and provide product information.
- Financial advisors that can analyze market data, check current stock prices, and provide personalized investment advice.
- Healthcare assistants that can access medical databases, interpret lab results, and provide preliminary diagnoses.
- Project management systems that can coordinate tasks, access multiple data sources, and provide real-time updates.
