PineconeStore
Pinecone is a vector database that helps power AI for some of the world's best companies.
This guide provides a quick overview for getting started with Pinecone vector stores. For detailed documentation of all PineconeStore features and configurations, head to the API reference.
Overview
Integration details
Class | Package | PY support | Package latest |
---|---|---|---|
PineconeStore | @langchain/pinecone | ✅ | |
Setup
To use Pinecone vector stores, you'll need to create a Pinecone account, initialize an index, and install the @langchain/pinecone integration package. You'll also want to install the official Pinecone SDK to initialize a client to pass into the PineconeStore instance.
This guide will also use OpenAI embeddings, which require you to install the @langchain/openai integration package. You can also use other supported embedding models if you wish.
- npm: npm i @langchain/pinecone @pinecone-database/pinecone @langchain/openai
- yarn: yarn add @langchain/pinecone @pinecone-database/pinecone @langchain/openai
- pnpm: pnpm add @langchain/pinecone @pinecone-database/pinecone @langchain/openai
Credentials
Sign up for a Pinecone account and create an index. Make sure the dimensions match those of the embeddings you want to use (the default is 1536 for OpenAI's text-embedding-3-small). Once you've done this, set the PINECONE_INDEX, PINECONE_API_KEY, and (optionally) PINECONE_ENVIRONMENT environment variables:
process.env.PINECONE_API_KEY = "your-pinecone-api-key";
process.env.PINECONE_INDEX = "your-pinecone-index";
// Optional
process.env.PINECONE_ENVIRONMENT = "your-pinecone-environment";
If you are using OpenAI embeddings for this guide, you'll need to set your OpenAI key as well:
process.env.OPENAI_API_KEY = "YOUR_API_KEY";
If you want automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:
// process.env.LANGCHAIN_TRACING_V2="true"
// process.env.LANGCHAIN_API_KEY="your-api-key"
Instantiation
import { PineconeStore } from "@langchain/pinecone";
import { OpenAIEmbeddings } from "@langchain/openai";
import { Pinecone as PineconeClient } from "@pinecone-database/pinecone";
const embeddings = new OpenAIEmbeddings({
model: "text-embedding-3-small",
});
const pinecone = new PineconeClient();
// Will automatically read the PINECONE_API_KEY and PINECONE_ENVIRONMENT env vars
const pineconeIndex = pinecone.Index(process.env.PINECONE_INDEX!);
const vectorStore = await PineconeStore.fromExistingIndex(embeddings, {
pineconeIndex,
// Maximum number of batch requests to allow at once. Each batch is 1000 vectors.
maxConcurrency: 5,
// You can pass a namespace here too
// namespace: "foo",
});
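If you keep several logical collections in one physical index, you can scope the store to a namespace when connecting, as the commented option above suggests. A minimal sketch, where "my-namespace" is just an example name:

// Sketch: a second store over the same index, isolated by namespace.
const namespacedStore = await PineconeStore.fromExistingIndex(embeddings, {
  pineconeIndex,
  namespace: "my-namespace",
});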
Manage vector store
Add items to vector store
import type { Document } from "@langchain/core/documents";
const document1: Document = {
pageContent: "The powerhouse of the cell is the mitochondria",
metadata: { source: "https://example.com" },
};
const document2: Document = {
pageContent: "Buildings are made out of brick",
metadata: { source: "https://example.com" },
};
const document3: Document = {
pageContent: "Mitochondria are made out of lipids",
metadata: { source: "https://example.com" },
};
const document4: Document = {
pageContent: "The 2024 Olympics are in Paris",
metadata: { source: "https://example.com" },
};
const documents = [document1, document2, document3, document4];
await vectorStore.addDocuments(documents, { ids: ["1", "2", "3", "4"] });
[ '1', '2', '3', '4' ]
Note: After adding documents, there is a slight delay before they become queryable.
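If you need to block until new records are queryable, one option is to poll the index stats with the Pinecone SDK's describeIndexStats() method. This is a sketch under assumptions, not an official recipe: the totalRecordCount field name comes from recent versions of @pinecone-database/pinecone (older versions exposed totalVectorCount), and the timeout and polling interval are arbitrary.

// Sketch: poll index stats until the expected number of records is visible.
const waitForRecords = async (expected: number, timeoutMs = 30_000) => {
  const start = Date.now();
  while (Date.now() - start < timeoutMs) {
    const stats = await pineconeIndex.describeIndexStats();
    // Field name may differ across SDK versions (e.g. totalVectorCount).
    if ((stats.totalRecordCount ?? 0) >= expected) return;
    await new Promise((resolve) => setTimeout(resolve, 1_000));
  }
  throw new Error("Timed out waiting for records to become queryable");
};

await waitForRecords(4);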
Delete items from vector store
await vectorStore.delete({ ids: ["4"] });
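Beyond deleting by id, the delete method accepts other parameters. For example, if your store is scoped to a namespace, you should be able to clear it entirely with a deleteAll flag; this is a sketch, so check the delete parameter types in your installed @langchain/pinecone version first.

// Sketch: remove every vector in the store's namespace instead of
// deleting by id. This is irreversible, so use with care.
await vectorStore.delete({ deleteAll: true });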
Query vector store
Once your vector store has been created and the relevant documents have been added, you will most likely wish to query it while running your chain or agent.
Query directly
Performing a simple similarity search can be done as follows:
// Optional filter
const filter = { source: "https://example.com" };
const similaritySearchResults = await vectorStore.similaritySearch(
"biology",
2,
filter
);
for (const doc of similaritySearchResults) {
console.log(`* ${doc.pageContent} [${JSON.stringify(doc.metadata, null)}]`);
}
* The powerhouse of the cell is the mitochondria [{"source":"https://example.com"}]
* Mitochondria are made out of lipids [{"source":"https://example.com"}]
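The filter is passed through to Pinecone, so Pinecone's metadata query operators (such as $eq, $ne, $gt, and $in) should work as well. A sketch, where the metadata values are just examples:

// Sketch: match documents whose "source" is one of several allowed values.
const operatorFilter = {
  source: { $in: ["https://example.com", "https://example.org"] },
};

const operatorResults = await vectorStore.similaritySearch(
  "biology",
  2,
  operatorFilter
);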
If you want to execute a similarity search and receive the corresponding scores, you can run:
const similaritySearchWithScoreResults =
await vectorStore.similaritySearchWithScore("biology", 2, filter);
for (const [doc, score] of similaritySearchWithScoreResults) {
console.log(
`* [SIM=${score.toFixed(3)}] ${doc.pageContent} [${JSON.stringify(
doc.metadata
)}]`
);
}
* [SIM=0.165] The powerhouse of the cell is the mitochondria [{"source":"https://example.com"}]
* [SIM=0.148] Mitochondria are made out of lipids [{"source":"https://example.com"}]
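Since the scores come back alongside the documents, a common follow-up is to drop weak matches client-side. A minimal sketch with an arbitrary cutoff value:

// Sketch: keep only results above an arbitrary similarity cutoff.
const SCORE_THRESHOLD = 0.1;
const strongMatches = similaritySearchWithScoreResults
  .filter(([, score]) => score >= SCORE_THRESHOLD)
  .map(([doc]) => doc);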
Query by turning into retriever
You can also transform the vector store into a retriever for easier usage in your chains.
const retriever = vectorStore.asRetriever({
// Optional filter
filter: filter,
k: 2,
});
await retriever.invoke("biology");
[
Document {
pageContent: 'The powerhouse of the cell is the mitochondria',
metadata: { source: 'https://example.com' },
id: undefined
},
Document {
pageContent: 'Mitochondria are made out of lipids',
metadata: { source: 'https://example.com' },
id: undefined
}
]
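Retrievers can also use other search strategies when the underlying store implements them. Assuming your version of PineconeStore supports maximal marginal relevance (MMR) search, a sketch would look like this:

// Sketch: an MMR retriever that fetches 10 candidates and returns the 2
// that best balance relevance and diversity. Requires a PineconeStore
// version that implements maxMarginalRelevanceSearch.
const mmrRetriever = vectorStore.asRetriever({
  searchType: "mmr",
  searchKwargs: { fetchK: 10 },
  k: 2,
});

await mmrRetriever.invoke("biology");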
Usage for retrieval-augmented generation
For guides on how to use this vector store for retrieval-augmented generation (RAG), see the following sections; a minimal sketch follows the list:
- Tutorials: working with external knowledge.
- How-to: Question and answer with RAG
- Retrieval conceptual docs
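As a bridge to those guides, here is a minimal, hypothetical sketch that stuffs retrieved documents into a prompt for an OpenAI chat model. The prompt wording, model name, and question are assumptions for illustration, not part of this integration.

import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";

// Assumed model name; swap in whichever chat model you use.
const llm = new ChatOpenAI({ model: "gpt-4o-mini" });

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "Answer using only the provided context:\n\n{context}"],
  ["human", "{question}"],
]);

const question = "What is the powerhouse of the cell?";
const docs = await retriever.invoke(question);
const context = docs.map((doc) => doc.pageContent).join("\n\n");

const answer = await prompt.pipe(llm).invoke({ context, question });
console.log(answer.content);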
API reference
For detailed documentation of all PineconeStore features and configurations, head to the API reference.
Related
- Vector store conceptual guide
- Vector store how-to guides