
LangChain with Ollama Cloud Models

Ollama runs large language models (LLMs) locally or, through its cloud service, on hosted hardware, and it supports a wide range of open-source models. LangChain, a popular framework for building applications with LLMs, integrates with Ollama so you can use these models from your own code.

Below is a simple example of how to use LangChain with Ollama cloud models. Prerequisites:

  1. Install Ollama and set up your cloud instance. Follow the instructions on the Ollama website.
  2. Install the LangChain packages, including the dedicated Ollama integration package, then sign in and pull a cloud model:
pip install langchain-community langchain-ollama

ollama signin
ollama run gpt-oss:120b-cloud
Once you are signed in and the model is available, you can call it from LangChain:
from langchain_ollama import OllamaLLM

# Local Ollama server; after `ollama signin`, cloud models are proxied
# through it. Replace with your remote host's IP or DNS name if needed.
ollama_url = "http://localhost:11434"

llm = OllamaLLM(
    base_url=ollama_url,
    model="gpt-oss:120b-cloud",  # Or any model you've loaded in Ollama
)

# Now use the LLM in LangChain
response = llm.invoke("Explain the difference between CPU and GPU.")
print(response)
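Under the hood, `OllamaLLM` talks to the Ollama server's REST API. As a rough sketch (assuming the default endpoint and the same model name as above), the equivalent raw request using only the Python standard library looks like this:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # same default server as in the post

# The payload for Ollama's /api/generate endpoint; with "stream": False
# the server returns a single JSON object instead of a token stream.
payload = {
    "model": "gpt-oss:120b-cloud",
    "prompt": "Explain the difference between CPU and GPU.",
    "stream": False,
}

def generate(url: str = OLLAMA_URL) -> str:
    """Send the payload to a running Ollama server and return the text."""
    req = urllib.request.Request(
        f"{url}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Requires a running Ollama server, so it is not called here:
# print(generate())
```

This is only an illustration of what the LangChain integration does for you; in practice the `OllamaLLM` class also handles options, streaming, and error handling.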
This post is licensed under CC BY 4.0 by the author.