Langchain Agents and Tools: Integrating LLMs with other models

Anna C S Medeiros
3 min read · Jun 7, 2024


You already have great models capable of incredible tasks, and maybe you would like to make them more accessible to your users through ChatGPT. Now you can, so let's go!


Welcome! In this article, we will learn how to create tools in LangChain v0.2 that agents can use. In simpler terms, we will build an LLM-powered assistant that can call other functions, including using your own models.

If you have never used LangChain or OpenAI, start here.

I will be using extracts from one of my GitHub repos here, and the requirements for this post can be found here.

Creating a LangChain Tool

When creating a tool, besides applying the @tool decorator, it is important to describe the tool's usage clearly in the docstring. The model decides when to use your tool based on that description!

You will also need to use type hints when defining the tool's arguments and output, as they help the model extract the correct values for those variables from the user's text input.
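Under the hood, the decorator works with whatever Python exposes at runtime: the function's type hints and its docstring. The plain-Python sketch below (standard library only, not LangChain's actual implementation) shows the raw material the tool schema is built from:

```python
import inspect
from typing import get_type_hints

def predict_price(city: str, area: int, private_pool: bool) -> str:
    """Outputs the predicted price of an apartment."""
    return "0 rupees"  # stub body, for illustration only

# The type hints become the tool's argument schema...
hints = get_type_hints(predict_price)
print(hints)  # includes city: str, area: int, private_pool: bool

# ...and the docstring becomes the description the model reasons over
print(inspect.getdoc(predict_price))
```

This is why a missing hint or a vague docstring degrades tool selection: the model has nothing else to go on.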


from langchain_core.tools import tool
import numpy as np
import pandas as pd

@tool
def predict_price(city: str, area: int, private_pool: bool) -> str:
    """Outputs the predicted price of an apartment, given the apartment city location, area in square meters and whether it has a private pool."""
    details = {
        "City": city,
        "Carpet Area": area,
        "Private pool ": private_pool
    }

    df = pd.DataFrame([details])
    df['Private pool '] = df['Private pool '].astype(int)
    # Add the remaining columns the trained model expects, filled with NaN
    # (rf_model and submission_columns are defined elsewhere in the repo)
    missing_columns = {col: np.nan for col in submission_columns if col not in df.columns}
    missing_df = pd.DataFrame(missing_columns, index=df.index)
    df = pd.concat([df, missing_df], axis=1)

    prediction = rf_model.predict(df)

    return f'{prediction[0]} rupees'

In the example above, I used a function from my GitHub repo. The rf_model is a random forest regressor, previously trained and saved as a .pkl file, that prices apartments based on user input, similar to a real estate agent helping you evaluate your apartment.
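For context, loading a model saved as a .pkl usually follows the pattern below. The file path and the stand-in object here are hypothetical, just to show the round trip; in the repo, rf_model is a fitted scikit-learn regressor loaded once at start-up:

```python
import os
import pickle
import tempfile

def save_model(model, path: str) -> None:
    # Serialize a fitted estimator (or any Python object) to disk
    with open(path, "wb") as f:
        pickle.dump(model, f)

def load_model(path: str):
    # Load it back before the agent starts handling requests
    with open(path, "rb") as f:
        return pickle.load(f)

# Stand-in dict instead of a real RandomForestRegressor:
path = os.path.join(tempfile.gettempdir(), "rf_model.pkl")
save_model({"model": "random-forest", "n_estimators": 100}, path)
rf_model = load_model(path)
print(rf_model)  # {'model': 'random-forest', 'n_estimators': 100}
```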

Creating an Agent

First, let's give the Agent some memory so that it can remember previous user input. SqliteSaver is meant for lightweight, synchronous use cases.

The tools and memory are given to the Agent object agent_executor.

from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.sqlite import SqliteSaver
from langgraph.prebuilt import create_react_agent

memory = SqliteSaver.from_conn_string(":memory:")
model = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
tools = [predict_price]
agent_executor = create_react_agent(model, tools, checkpointer=memory)


config = {"configurable": {"thread_id": "test001"}}
for chunk in agent_executor.stream(
    {"messages": [HumanMessage(content="Hi there! Can you help me?")]}, config
):
    print(chunk)
    print("----")

""" output:
{'agent': {'messages': [AIMessage(content="Of course! I'd be happy to help. What do you need assistance with?", response_metadata={'token_usage': {'completion_tokens': 18, 'prompt_tokens': 82, 'total_tokens': 100}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-127c0d24-3cc5-4007-9fe5-193ea73f0315-0', usage_metadata={'input_tokens': 82, 'output_tokens': 18, 'total_tokens': 100})]}}
----
"""

for chunk in agent_executor.stream(
    {"messages": [HumanMessage(content="Can you predict the value of my apartment in mumbai? My apartment has 132 square meters and it has a pool.")]}, config
):
    print(chunk)
    print("----")

""" output:
{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_0XV4yzJNQzJTLbEmkYEGpKhY', 'function': {'arguments': '{"city":"Mumbai","area":132,"private_pool":true}', 'name': 'predict_price'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 24, 'prompt_tokens': 132, 'total_tokens': 156}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-4944cf57-bf29-4e30-95c2-373429a3ce7d-0', tool_calls=[{'name': 'predict_price', 'args': {'city': 'Mumbai', 'area': 132, 'private_pool': True}, 'id': 'call_0XV4yzJNQzJTLbEmkYEGpKhY'}], usage_metadata={'input_tokens': 132, 'output_tokens': 24, 'total_tokens': 156})]}}
----
{'tools': {'messages': [ToolMessage(content='263320.0 rupees', name='predict_price', tool_call_id='call_0XV4yzJNQzJTLbEmkYEGpKhY')]}}
----
{'agent': {'messages': [AIMessage(content='The predicted value of your apartment in Mumbai, with an area of 132 square meters and a private pool, is 263,320 rupees. If you need any more assistance or information, feel free to ask!', response_metadata={'token_usage': {'completion_tokens': 45, 'prompt_tokens': 170, 'total_tokens': 215}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-673d8442-a383-431c-a3b7-6cb7302156b9-0', usage_metadata={'input_tokens': 170, 'output_tokens': 45, 'total_tokens': 215})]}}
----
"""

Here, HumanMessage is used to simulate user input. We can observe that ChatGPT was able to identify when to call the predict_price tool, extract the variables correctly from the user's message:

{"city":"Mumbai","area":132,"private_pool":true}

and, finally, return the answer to the user:

The predicted value of your apartment in Mumbai, with an area of 132 square meters and a private pool, is 263,320 rupees. If you need any more assistance or information, feel free to ask! — ChatGPT final Answer
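If you want to surface only that final answer instead of printing every chunk, one simple approach is to keep the last non-empty agent message. The sketch below uses plain dicts shaped like the printed output above; with real streams the messages are AIMessage objects, so you would read message.content instead of message["content"]:

```python
def final_answer(chunks):
    # Keep the content of the last non-empty message emitted by the agent node
    answer = None
    for chunk in chunks:
        for message in chunk.get("agent", {}).get("messages", []):
            if message["content"]:
                answer = message["content"]
    return answer

# Simplified chunks mirroring the stream output above:
chunks = [
    {"agent": {"messages": [{"content": ""}]}},                 # tool-call step, no text
    {"tools": {"messages": [{"content": "263320.0 rupees"}]}},  # tool result
    {"agent": {"messages": [{"content": "The predicted value is 263,320 rupees."}]}},
]
print(final_answer(chunks))  # The predicted value is 263,320 rupees.
```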

There are lots of other ready-to-go tools in LangChain. Have fun!

Thanks for reading!

Give it some claps to help others find it, too! Also, make sure you follow me on Medium so you don't miss anything. Let's connect on LinkedIn.


Anna C S Medeiros

Senior Data Scientist @ Vsoft | GenAI | Computer Vision | NLP | LLM