id | text | source |
|---|---|---|
d20aa011a1e8-0 | Models
LangChain provides interfaces and integrations for a number of different types of models.
LLMs
Chat Models | https://api.python.langchain.com/en/latest/models.html |
682e9659a78b-0 | Model I/O
LangChain provides interfaces and integrations for working with language models.
Prompts
Models
Output Parsers | https://api.python.langchain.com/en/latest/model_io.html |
c66f8c7f2f1f-0 | Prompts
The reference guides here all relate to objects for working with Prompts.
Prompt Templates
Example Selector | https://api.python.langchain.com/en/latest/prompts.html |
38d20ef0abd9-0 | Data connection
LangChain has a number of modules that help you load, structure, store, and retrieve documents.
Document Loaders
Document Transformers
Embeddings
Vector Stores
Retrievers | https://api.python.langchain.com/en/latest/data_connection.html |
50175773b3a2-0 | Embeddings
Wrappers around embedding modules.
class langchain.embeddings.OpenAIEmbeddings(*, client=None, model='text-embedding-ada-002', deployment='text-embedding-ada-002', openai_api_version=None, openai_api_base=None, openai_api_type=None, openai_proxy=None, embedding_ctx_length=8191, openai_api_key=None, openai_o... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-1 | Example
from langchain.embeddings import OpenAIEmbeddings
openai = OpenAIEmbeddings(openai_api_key="my-api-key")
In order to use the library with Microsoft Azure endpoints, you need to set
the OPENAI_API_TYPE, OPENAI_API_BASE, OPENAI_API_KEY and OPENAI_API_VERSION.
The OPENAI_API_TYPE must be set to "azure" and the oth... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-2 | os.environ["OPENAI_PROXY"] = "http://your-corporate-proxy:8080"
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(
deployment="your-embeddings-deployment-name",
model="your-embeddings-model-name",
openai_api_base="https://your-endpoint.openai.azure.com/",
openai_api_... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
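The Azure configuration above is truncated; a minimal self-contained sketch of the same setup might look like the following, where the endpoint, deployment name, key, and API version string are all placeholders:

```python
import os

from langchain.embeddings import OpenAIEmbeddings

# Placeholder values for an Azure OpenAI resource.
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://your-endpoint.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "your-api-key"
os.environ["OPENAI_API_VERSION"] = "2023-05-15"  # example version string

embeddings = OpenAIEmbeddings(
    deployment="your-embeddings-deployment-name",
    model="your-embeddings-model-name",
)
vector = embeddings.embed_query("This is a test query.")
```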
50175773b3a2-3 | openai_organization (Optional[str]) β
allowed_special (Union[Literal['all'], typing.Set[str]]) β
disallowed_special (Union[Literal['all'], typing.Set[str], typing.Sequence[str]]) β
chunk_size (int) β
max_retries (int) β
request_timeout (Optional[Union[float, Tuple[float, float]]]) β
headers (Any) β
tiktoken_mode... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-4 | The model name to pass to tiktoken when using this class.
Tiktoken is used to count the number of tokens in documents to constrain
them to be under a certain limit. By default, when set to None, this will
be the same as the embedding model name. However, there are some cases
where you may want to use this Embedding cla... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-5 | chunk_size (Optional[int]) β The chunk size of embeddings. If None, will use the chunk size
specified by the class.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
async aembed_query(text)[source]
Call out to OpenAI's embedding endpoint async for embedding query text.
Parameters
text (str)... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-6 | Return type
List[List[float]]
embed_query(text)[source]
Call out to OpenAI's embedding endpoint for embedding query text.
Parameters
text (str) β The text to embed.
Returns
Embedding for the text.
Return type
List[float]
class langchain.embeddings.HuggingFaceEmbeddings(*, client=None, model_name='sentence-transformers... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-7 | encode_kwargs = {'normalize_embeddings': False}
hf = HuggingFaceEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs
)
Parameters
client (Any) β
model_name (str) β
cache_folder (Optional[str]) β
model_kwargs (Dict[str, Any]) β
encode_kwargs (Dict[str, Any]) β
Return... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
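The HuggingFaceEmbeddings example above is cut off; a complete minimal sketch using the defaults shown in this reference could be:

```python
from langchain.embeddings import HuggingFaceEmbeddings

model_name = "sentence-transformers/all-mpnet-base-v2"
model_kwargs = {"device": "cpu"}
encode_kwargs = {"normalize_embeddings": False}

hf = HuggingFaceEmbeddings(
    model_name=model_name,
    model_kwargs=model_kwargs,
    encode_kwargs=encode_kwargs,
)
vector = hf.embed_query("This is a test query.")
doc_vectors = hf.embed_documents(["This is a test document."])
```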
50175773b3a2-8 | Key word arguments to pass to the model.
attribute model_name: str = 'sentence-transformers/all-mpnet-base-v2'ο
Model name to use.
embed_documents(texts)[source]ο
Compute doc embeddings using a HuggingFace transformer model.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for ... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-9 | Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around Cohere embedding models.
To use, you should have the cohere python package installed, and the
environment variable COHERE_API_KEY set with your API key or pass it
as a named parameter to the constructor.
Example
from langchain.embedding... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
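Completing the truncated Cohere example, a minimal usage sketch (the key is a placeholder; COHERE_API_KEY can also be set in the environment):

```python
from langchain.embeddings import CohereEmbeddings

embeddings = CohereEmbeddings(cohere_api_key="my-api-key")
query_vector = embeddings.embed_query("This is a test query.")
doc_vectors = embeddings.embed_documents(["This is a test document."])
```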
50175773b3a2-10 | Truncate embeddings that are too long from start or end (βNONEβ|βSTARTβ|βENDβ)
embed_documents(texts)[source]ο
Call out to Cohereβs embedding endpoint.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Call ... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-11 | and the model_id of the model deployed in the cluster.
In Elasticsearch you need to have an embedding model loaded and deployed.
- https://www.elastic.co/guide/en/elasticsearch/reference/current/infer-trained-model.html
- https://www.elastic.co/guide/en/machine-learning/current/ml-nlp-deploy-models.html
Parameters
clie... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-12 | document. Defaults to βtext_fieldβ.
es_cloud_id (Optional[str]) β (str, optional): The Elasticsearch cloud ID to connect to.
es_user (Optional[str]) β (str, optional): Elasticsearch username.
es_password (Optional[str]) β (str, optional): Elasticsearch password.
Return type
langchain.embeddings.elasticsearch.Elasticsea... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-13 | model_id,
input_field=input_field,
# es_cloud_id="foo",
# es_user="bar",
# es_password="baz",
)
documents = [
"This is an example document.",
"Another example document to generate embeddings for.",
]
embeddings_generator.embed_documents(documents)
classmethod from_es_connection(model_id, es_conn... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-14 | connection object. input_field (str, optional): The name of the key for the
input text field in the document. Defaults to "text_field".
Returns:
ElasticsearchEmbeddings: An instance of the ElasticsearchEmbeddings class.
Example
from elasticsearch import Elasticsearch
from langchain.embeddings import ElasticsearchEmbedd... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-15 | "This is an example document.",
"Another example document to generate embeddings for.",
]
embeddings_generator.embed_documents(documents)
Parameters
model_id (str) β
es_connection (Elasticsearch) β
input_field (str) β
Return type
ElasticsearchEmbeddings
embed_documents(texts)[source]ο
Generate embeddings for a l... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
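A hedged sketch of the from_es_connection path described above, assuming a reachable Elasticsearch cluster with an embedding model already deployed; the host, credentials, and model id are placeholders:

```python
from elasticsearch import Elasticsearch

from langchain.embeddings import ElasticsearchEmbeddings

# Placeholder connection; requires elasticsearch-py 8.x and a running cluster.
es_connection = Elasticsearch(
    hosts=["http://localhost:9200"],
    basic_auth=("elastic", "changeme"),
)

embeddings_generator = ElasticsearchEmbeddings.from_es_connection(
    model_id="your-deployed-model-id",   # id of the model deployed in the cluster
    es_connection=es_connection,
    input_field="text_field",
)
doc_vectors = embeddings_generator.embed_documents(
    ["This is an example document.", "Another example document to generate embeddings for."]
)
```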
50175773b3a2-16 | Return type
List[float]
class langchain.embeddings.LlamaCppEmbeddings(*, client=None, model_path, n_ctx=512, n_parts=- 1, seed=- 1, f16_kv=False, logits_all=False, vocab_only=False, use_mlock=False, n_threads=None, n_batch=8, n_gpu_layers=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddin... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-17 | Parameters
client (Any) β
model_path (str) β
n_ctx (int) β
n_parts (int) β
seed (int) β
f16_kv (bool) β
logits_all (bool) β
vocab_only (bool) β
use_mlock (bool) β
n_threads (Optional[int]) β
n_batch (Optional[int]) β
n_gpu_layers (Optional[int]) β
Return type
None
attribute f16_kv: bool = Falseο
Use half-pr... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-18 | Token context window.
attribute n_gpu_layers: Optional[int] = Noneο
Number of layers to be loaded into gpu memory. Default None.
attribute n_parts: int = -1ο
Number of parts to split the model into.
If -1, the number of parts is automatically determined.
attribute n_threads: Optional[int] = Noneο
Number of threads to u... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
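A minimal sketch of constructing LlamaCppEmbeddings with the parameters listed above; the model path is a placeholder for a local model file and llama-cpp-python must be installed:

```python
from langchain.embeddings import LlamaCppEmbeddings

llama = LlamaCppEmbeddings(
    model_path="/path/to/model.bin",  # placeholder local model file
    n_ctx=512,                        # token context window
    n_threads=4,                      # CPU threads to use
)
query_vector = llama.embed_query("This is a test query.")
doc_vectors = llama.embed_documents(["First document.", "Second document."])
```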
50175773b3a2-19 | Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Embed a query using the Llama model.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.HuggingFaceHubEmbeddings(*, client=None, repo_id='se... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-20 | Example
from langchain.embeddings import HuggingFaceHubEmbeddings
repo_id = "sentence-transformers/all-mpnet-base-v2"
hf = HuggingFaceHubEmbeddings(
repo_id=repo_id,
task="feature-extraction",
huggingfacehub_api_token="my-api-key",
)
Parameters
client (Any) β
repo_id (str) β
task (Optional[str]) β
model_... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-21 | Task to call the model with.
embed_documents(texts)[source]
Call out to HuggingFaceHub's embedding endpoint for embedding search docs.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Call out to HuggingFa... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-22 | Example
from langchain.embeddings import ModelScopeEmbeddings
model_id = "damo/nlp_corom_sentence-embedding_english-base"
embed = ModelScopeEmbeddings(model_id=model_id)
Parameters
embed (Any) β
model_id (str) β
Return type
None
attribute model_id: str = 'damo/nlp_corom_sentence-embedding_english-base'ο
Model name to... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-23 | Return type
List[float]
class langchain.embeddings.TensorflowHubEmbeddings(*, embed=None, model_url='https://tfhub.dev/google/universal-sentence-encoder-multilingual/3')[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around tensorflow_hub embedding models.
To use, you should have ... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-24 | embed_documents(texts)[source]ο
Compute doc embeddings using a TensorflowHub embedding model.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Compute query embeddings using a TensorflowHub embedding model.... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
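A minimal usage sketch for TensorflowHubEmbeddings, assuming tensorflow, tensorflow_hub, and (for the multilingual encoder) tensorflow_text are installed:

```python
from langchain.embeddings import TensorflowHubEmbeddings

embeddings = TensorflowHubEmbeddings(
    model_url="https://tfhub.dev/google/universal-sentence-encoder-multilingual/3"
)
doc_vectors = embeddings.embed_documents(["This is a test document."])
query_vector = embeddings.embed_query("This is a test query.")
```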
50175773b3a2-25 | Sagemaker model & the region where it is deployed.
To authenticate, the AWS client uses the following methods to
automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass
the name of the profile from the ... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-26 | Return type
None
attribute content_handler: langchain.embeddings.sagemaker_endpoint.EmbeddingsContentHandler [Required]ο
The content handler class that provides an input and
output transform functions to handle formats between LLM
and the endpoint.
attribute credentials_profile_name: Optional[str] = Noneο
The name of t... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-27 | attribute endpoint_name: str = ''ο
The name of the endpoint from the deployed Sagemaker model.
Must be unique within an AWS Region.
attribute model_kwargs: Optional[Dict] = Noneο
Keyword arguments to pass to the model.
attribute region_name: str = ''
The AWS region where the Sagemaker model is deployed, e.g. us-west-2... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
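Putting the pieces above together, a hedged sketch of wiring a content handler to the endpoint; the endpoint name, region, and especially the JSON payload and response keys ("inputs", "vectors") are assumptions that depend on how the model was deployed:

```python
import json
from typing import Dict, List

from langchain.embeddings import SagemakerEndpointEmbeddings
from langchain.embeddings.sagemaker_endpoint import EmbeddingsContentHandler


class ContentHandler(EmbeddingsContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompts: List[str], model_kwargs: Dict) -> bytes:
        # Serialize the texts into the JSON payload this (hypothetical) endpoint expects.
        return json.dumps({"inputs": prompts, **model_kwargs}).encode("utf-8")

    def transform_output(self, output) -> List[List[float]]:
        # Parse the endpoint response back into a list of embedding vectors.
        response_json = json.loads(output.read().decode("utf-8"))
        return response_json["vectors"]


embeddings = SagemakerEndpointEmbeddings(
    endpoint_name="my-embeddings-endpoint",   # placeholder endpoint name
    region_name="us-west-2",
    credentials_profile_name="default",
    content_handler=ContentHandler(),
)
doc_vectors = embeddings.embed_documents(["This is a test document."])
```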
50175773b3a2-28 | embed_query(text)[source]ο
Compute query embeddings using a SageMaker inference endpoint.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.HuggingFaceInstructEmbeddings(*, client=None, model_name='hkunlp/instructor-large', cache_folder=None, ... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-29 | model_kwargs = {'device': 'cpu'}
encode_kwargs = {'normalize_embeddings': True}
hf = HuggingFaceInstructEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs
)
Parameters
client (Any) β
model_name (str) β
cache_folder (Optional[str]) β
model_kwargs (Dict[str, Any]) β
... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-30 | Key word arguments to pass when calling the encode method of the model.
attribute model_kwargs: Dict[str, Any] [Optional]ο
Key word arguments to pass to the model.
attribute model_name: str = 'hkunlp/instructor-large'ο
Model name to use.
attribute query_instruction: str = 'Represent the question for retrieving supporti... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-31 | Return type
List[float]
class langchain.embeddings.MosaicMLInstructorEmbeddings(*, endpoint_url='https://models.hosted-on.mosaicml.hosting/instructor-xl/v1/predict', embed_instruction='Represent the document for retrieval: ', query_instruction='Represent the question for retrieving supporting documents: ', retry_sleep=... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-32 | )
mosaic_llm = MosaicMLInstructorEmbeddings(
endpoint_url=endpoint_url,
mosaicml_api_token="my-api-key"
)
Parameters
endpoint_url (str) β
embed_instruction (str) β
query_instruction (str) β
retry_sleep (float) β
mosaicml_api_token (Optional[str]) β
Return type
None
attribute embed_instruction: str = 'Repre... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-33 | embed_documents(texts)[source]ο
Embed documents using a MosaicML deployed instructor embedding model.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Embed a query using a MosaicML deployed instructor embe... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-34 | Bases: langchain.llms.self_hosted.SelfHostedPipeline, langchain.embeddings.base.Embeddings
Runs custom embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or ... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-35 | model = AutoModelForCausalLM.from_pretrained(model_id)
return pipeline("feature-extraction", model=model, tokenizer=tokenizer)
embeddings = SelfHostedEmbeddings(
model_load_fn=get_pipeline,
hardware=gpu,
model_reqs=["./", "torch", "transformers"],
)
Example passing in a pipeline path:from langchain.embed... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-36 | hardware=gpu,
model_reqs=["./", "torch", "transformers"],
)
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-37 | attribute inference_kwargs: Any = Noneο
Any kwargs to pass to the modelβs inference function.
embed_documents(texts)[source]ο
Compute doc embeddings using a HuggingFace transformer model.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[flo... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-38 | Return type
List[float]
class langchain.embeddings.SelfHostedHuggingFaceEmbeddings(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline_ref=None, client=None, inference_fn=<function _embed_documents>, hardware=None, model_load_fn=<function load_embedding_model>, load_fn_kwargs=None, m... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-39 | To use, you should have the runhouse python package installed.
Example
from langchain.embeddings import SelfHostedHuggingFaceEmbeddings
import runhouse as rh
model_name = "sentence-transformers/all-mpnet-base-v2"
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
hf = SelfHostedHuggingFaceEmbeddings(model_name=mo... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
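Completing the truncated example, a sketch that assumes the runhouse package is installed and the named A100 cluster can actually be launched in your cloud account:

```python
import runhouse as rh

from langchain.embeddings import SelfHostedHuggingFaceEmbeddings

gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
hf = SelfHostedHuggingFaceEmbeddings(
    model_name="sentence-transformers/all-mpnet-base-v2",
    hardware=gpu,
)
vector = hf.embed_query("This is a test query.")
```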
50175773b3a2-40 | hardware (Any) β
model_load_fn (Callable) β
load_fn_kwargs (Optional[dict]) β
model_reqs (List[str]) β
inference_kwargs (Any) β
model_id (str) β
Return type
None
attribute hardware: Any = Noneο
Remote hardware to send the inference function to.
attribute inference_fn: Callable = <function _embed_documents>ο
Infer... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-41 | attribute model_reqs: List[str] = ['./', 'sentence_transformers', 'torch']ο
Requirements to install on hardware to inference the model.
class langchain.embeddings.SelfHostedHuggingFaceInstructEmbeddings(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline_ref=None, client=None, infere... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-42 | Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another
cloud like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example
from langchain.embeddings import... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-43 | callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
pipeline_ref (Any) β
client (Any) β
inference_fn (Callable) β
hardware (Any) β
model_load_fn (Callable) β
load_fn_kwargs (Optional[dict]) β
model_reqs (List[str]) β
inference_kwargs (Any) β
model_id (str) ... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-44 | Requirements to install on hardware to inference the model.
attribute query_instruction: str = 'Represent the question for retrieving supporting documents: 'ο
Instruction to use for embedding query.
embed_documents(texts)[source]ο
Compute doc embeddings using a HuggingFace instruct model.
Parameters
texts (List[str]) β... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-45 | Return type
None
embed_documents(texts)[source]ο
Embed search docs.
Parameters
texts (List[str]) β
Return type
List[List[float]]
embed_query(text)[source]ο
Embed query text.
Parameters
text (str) β
Return type
List[float]
class langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding(*, client=None, model='luminous... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-46 | the query for a document as similar as possible.
To learn more, check out: https://docs.aleph-alpha.com/docs/tasks/semantic_embed/
Example
from langchain.embeddings import AlephAlphaAsymmetricSemanticEmbedding
embeddings = AlephAlphaAsymmetricSemanticEmbedding()
document = "This is the content of the document"
query = "What is the... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-47 | Return type
None
attribute aleph_alpha_api_key: Optional[str] = Noneο
API key for Aleph Alpha API.
attribute compress_to_size: Optional[int] = 128
Whether the returned embeddings should come back as the original 5120-dim vector
or be compressed to 128 dimensions.
attribute contextual_control_threshold: Optional[int] = None... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-48 | Model name to use.
attribute normalize: Optional[bool] = True
Whether returned embeddings should be normalized.
embed_documents(texts)[source]
Call out to Aleph Alpha's asymmetric Document endpoint.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-49 | Return type
List[float]
class langchain.embeddings.AlephAlphaSymmetricSemanticEmbedding(*, client=None, model='luminous-base', hosting='https://api.aleph-alpha.com', normalize=True, compress_to_size=128, contextual_control_threshold=None, control_log_additive=True, aleph_alpha_api_key=None)[source]ο
Bases: langchain.em... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-50 | Parameters
client (Any) β
model (Optional[str]) β
hosting (Optional[str]) β
normalize (Optional[bool]) β
compress_to_size (Optional[int]) β
contextual_control_threshold (Optional[int]) β
control_log_additive (Optional[bool]) β
aleph_alpha_api_key (Optional[str]) β
Return type
None
embed_documents(texts)[source]... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-51 | Return type
List[float]
langchain.embeddings.SentenceTransformerEmbeddingsο
alias of langchain.embeddings.huggingface.HuggingFaceEmbeddings
class langchain.embeddings.MiniMaxEmbeddings(*, endpoint_url='https://api.minimax.chat/v1/embeddings', model='embo-01', embed_type_db='db', embed_type_query='query', minimax_group_... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-52 | query_result = embeddings.embed_query(query_text)
document_text = "This is a test document."
document_result = embeddings.embed_documents([document_text])
Parameters
endpoint_url (str) β
model (str) β
embed_type_db (str) β
embed_type_query (str) β
minimax_group_id (Optional[str]) β
minimax_api_key (Optional[str]) ... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-53 | Group ID for MiniMax API.
attribute model: str = 'embo-01'ο
Embeddings model name to use.
embed_documents(texts)[source]ο
Embed documents using a MiniMax embedding endpoint.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_quer... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
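A minimal usage sketch for MiniMaxEmbeddings; the group id and key are placeholders and can alternatively be provided via the MINIMAX_GROUP_ID and MINIMAX_API_KEY environment variables:

```python
from langchain.embeddings import MiniMaxEmbeddings

embeddings = MiniMaxEmbeddings(
    minimax_group_id="my-group-id",
    minimax_api_key="my-api-key",
)
query_vector = embeddings.embed_query("This is a test query.")
doc_vectors = embeddings.embed_documents(["This is a test document."])
```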
50175773b3a2-54 | Embeddings provider to invoke Bedrock embedding models.
To authenticate, the AWS client uses the following methods to
automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass
the name of the profile from... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-55 | has either access keys or role information specified.
If not specified, the default credential profile or, if on an EC2 instance,
credentials from IMDS will be used.
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
attribute model_id: str = 'amazon.titan-e1t-medium'ο
Id of the model t... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-56 | Compute doc embeddings using a Bedrock model.
Parameters
texts (List[str]) β The list of texts to embed.
chunk_size (int) β Bedrock currently only allows single string
inputs, so chunk size is always 1. This input is here
only for compatibility with the embeddings interface.
Returns
List of embeddings, one for each tex... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
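A hedged sketch of constructing the Bedrock embeddings provider, assuming AWS credentials resolve via the standard boto3 chain; the region is a placeholder and the model id mirrors the default shown above:

```python
from langchain.embeddings import BedrockEmbeddings

embeddings = BedrockEmbeddings(
    credentials_profile_name="default",   # or omit to use the default chain / IMDS
    region_name="us-east-1",              # placeholder region
    model_id="amazon.titan-e1t-medium",
)
doc_vectors = embeddings.embed_documents(["This is a test document."])
```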
50175773b3a2-57 | Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around Deep Infra's embedding inference service.
To use, you should have the
environment variable DEEPINFRA_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
There are multiple embeddings models available,
... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-58 | r2 = deepinfra_emb.embed_query(
"What is the second letter of Greek alphabet"
)
Parameters
model_id (str) β
normalize (bool) β
embed_instruction (str) β
query_instruction (str) β
model_kwargs (Optional[dict]) β
deepinfra_api_token (Optional[str]) β
Return type
None
attribute embed_instruction: str = 'passage:... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-59 | embed_documents(texts)[source]ο
Embed documents using a Deep Infra deployed embedding model.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Embed a query using a Deep Infra deployed embedding model.
Param... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
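A minimal sketch completing the truncated Deep Infra example; the model id and token are illustrative, and DEEPINFRA_API_TOKEN can also be set in the environment:

```python
from langchain.embeddings import DeepInfraEmbeddings

deepinfra_emb = DeepInfraEmbeddings(
    model_id="sentence-transformers/clip-ViT-B-32",  # illustrative model id
    deepinfra_api_token="my-api-token",
)
r1 = deepinfra_emb.embed_documents(["alpha", "beta"])
r2 = deepinfra_emb.embed_query("What is the second letter of the Greek alphabet")
```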
50175773b3a2-60 | environment variable DASHSCOPE_API_KEY set with your API key or pass it
as a named parameter to the constructor.
Example
from langchain.embeddings import DashScopeEmbeddings
embeddings = DashScopeEmbeddings(dashscope_api_key="my-api-key")
Example
import os
os.environ["DASHSCOPE_API_KEY"] = "your DashScope API KEY"
from... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-61 | Maximum number of retries to make when generating.
embed_documents(texts)[source]
Call out to DashScope's embedding endpoint for embedding search docs.
Parameters
texts (List[str]) β The list of texts to embed.
chunk_size β The chunk size of embeddings. If None, will use the chunk size
specified by the class.
Returns
... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-62 | Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around embaas's embedding service.
To use, you should have the
environment variable EMBAAS_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
# Initialise with default model and instruction
from langchai... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
50175773b3a2-63 | Return type
None
attribute api_url: str = 'https://api.embaas.io/v1/embeddings/'ο
The URL for the embaas embeddings API.
attribute instruction: Optional[str] = Noneο
Instruction used for domain-specific embeddings.
attribute model: str = 'e5-large-v2'ο
The model used for embeddings.
embed_documents(texts)[source]ο
Get ... | https://api.python.langchain.com/en/latest/modules/embeddings.html |
159549d20e18-0 | Memory
class langchain.memory.CassandraChatMessageHistory(contact_points, session_id, port=9042, username='cassandra', password='cassandra', keyspace_name='chat_history', table_name='message_store')[source]ο
Bases: langchain.schema.BaseChatMessageHistory
Chat message history that stores history in Cassandra.
Parameter... | https://api.python.langchain.com/en/latest/modules/memory.html |
159549d20e18-1 | Retrieve the messages from Cassandra
add_message(message)[source]ο
Append the message to the record in Cassandra
Parameters
message (langchain.schema.BaseMessage) β
Return type
None
clear()[source]ο
Clear session memory from Cassandra
Return type
None
class langchain.memory.ChatMessageHistory(*, messages=[])[source]ο
... | https://api.python.langchain.com/en/latest/modules/memory.html |
159549d20e18-2 | Return type
None
class langchain.memory.CombinedMemory(*, memories)[source]ο
Bases: langchain.schema.BaseMemory
Class for combining multiple memories' data together.
Parameters
memories (List[langchain.schema.BaseMemory]) β
Return type
None
attribute memories: List[langchain.schema.BaseMemory] [Required]ο
For tracking... | https://api.python.langchain.com/en/latest/modules/memory.html |
159549d20e18-3 | Return type
None
property memory_variables: List[str]ο
All the memory variables that this instance provides.
class langchain.memory.ConversationBufferMemory(*, chat_memory=None, output_key=None, input_key=None, return_messages=False, human_prefix='Human', ai_prefix='AI', memory_key='history')[source]ο
Bases: langchain.... | https://api.python.langchain.com/en/latest/modules/memory.html |
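A minimal usage sketch for ConversationBufferMemory, showing how save_context and load_memory_variables interact (the printed value is approximate):

```python
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(memory_key="history")
memory.save_context({"input": "hi"}, {"output": "Hello! How can I help?"})
print(memory.load_memory_variables({}))
# roughly: {'history': 'Human: hi\nAI: Hello! How can I help?'}
```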
159549d20e18-4 | Return history buffer.
Parameters
inputs (Dict[str, Any]) β
Return type
Dict[str, Any]
property buffer: Anyο
String buffer of memory.
class langchain.memory.ConversationBufferWindowMemory(*, chat_memory=None, output_key=None, input_key=None, return_messages=False, human_prefix='Human', ai_prefix='AI', memory_key='hist... | https://api.python.langchain.com/en/latest/modules/memory.html |
159549d20e18-5 | attribute human_prefix: str = 'Human'ο
attribute k: int = 5ο
load_memory_variables(inputs)[source]ο
Return history buffer.
Parameters
inputs (Dict[str, Any]) β
Return type
Dict[str, str]
property buffer: List[langchain.schema.BaseMessage]ο
String buffer of memory. | https://api.python.langchain.com/en/latest/modules/memory.html |
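A small sketch showing the effect of the k window described above: only the most recent k exchanges are kept in the returned history.

```python
from langchain.memory import ConversationBufferWindowMemory

memory = ConversationBufferWindowMemory(k=2)
memory.save_context({"input": "first"}, {"output": "one"})
memory.save_context({"input": "second"}, {"output": "two"})
memory.save_context({"input": "third"}, {"output": "three"})
# Only the last two exchanges ("second"/"two" and "third"/"three") are returned.
print(memory.load_memory_variables({})["history"])
```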
159549d20e18-6 | class langchain.memory.ConversationEntityMemory(*, chat_memory=None, output_key=None, input_key=None, return_messages=False, human_prefix='Human', ai_prefix='AI', llm, entity_extraction_prompt=PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistan... | https://api.python.langchain.com/en/latest/modules/memory.html |
159549d20e18-7 | the user is just issuing a greeting or having a simple conversation).\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Lan... | https://api.python.langchain.com/en/latest/modules/memory.html |
159549d20e18-8 | "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I\'m working with Person #2.\nOutput: Langchain, Person #2\nEND OF ... | https://api.python.langchain.com/en/latest/modules/memory.html |
159549d20e18-9 | the "Entity" section based on the last line of your conversation with the human. If you are writing the summary for the first time, return a single sentence.\nThe update should only include facts that are relayed in the last line of conversation about the provided entity, and should only contain facts about the provide... | https://api.python.langchain.com/en/latest/modules/memory.html |
159549d20e18-10 | Bases: langchain.memory.chat_memory.BaseChatMemory
Entity extractor & summarizer memory.
Extracts named entities from the recent chat history and generates summaries.
With a swappable entity store, entities persist across conversations.
Defaults to an in-memory entity store, and can be swapped out for a Redis,
SQLite... | https://api.python.langchain.com/en/latest/modules/memory.html |
159549d20e18-11 | k (int) β
chat_history_key (str) β
entity_store (langchain.memory.entity.BaseEntityStore) β
Return type
None
attribute ai_prefix: str = 'AI'ο
attribute chat_history_key: str = 'history'ο
attribute entity_cache: List[str] = []ο | https://api.python.langchain.com/en/latest/modules/memory.html |
159549d20e18-12 | attribute entity_extraction_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the la... | https://api.python.langchain.com/en/latest/modules/memory.html |
159549d20e18-13 | going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various ... | https://api.python.langchain.com/en/latest/modules/memory.html |
159549d20e18-14 | #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I\'m working with Person #2.\nOutput: Langchain, Person #2\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHum... | https://api.python.langchain.com/en/latest/modules/memory.html |
159549d20e18-15 | attribute entity_store: langchain.memory.entity.BaseEntityStore [Optional]ο | https://api.python.langchain.com/en/latest/modules/memory.html |
159549d20e18-16 | attribute entity_summarization_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['entity', 'summary', 'history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant helping a human keep track of facts about relevant people, places, and concepts in thei... | https://api.python.langchain.com/en/latest/modules/memory.html |
159549d20e18-17 | context):\n{history}\n\nEntity to summarize:\n{entity}\n\nExisting summary of {entity}:\n{summary}\n\nLast line of conversation:\nHuman: {input}\nUpdated summary:', template_format='f-string', validate_template=True)ο | https://api.python.langchain.com/en/latest/modules/memory.html |
159549d20e18-18 | attribute human_prefix: str = 'Human'ο
attribute k: int = 3ο
attribute llm: langchain.base_language.BaseLanguageModel [Required]ο
clear()[source]ο
Clear memory contents.
Return type
None
load_memory_variables(inputs)[source]ο
Returns chat history and all generated entities with summaries if available,
and updates or cl... | https://api.python.langchain.com/en/latest/modules/memory.html |
159549d20e18-19 | Parameters
inputs (Dict[str, Any]) β
outputs (Dict[str, str]) β
Return type
None
property buffer: List[langchain.schema.BaseMessage]ο
Access chat memory messages. | https://api.python.langchain.com/en/latest/modules/memory.html |
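A hedged sketch of the entity memory flow: an LLM is required for extraction and summarization, and with the default in-memory entity store the summaries are assumed to be inspectable via entity_store.store (an OpenAI key must be configured):

```python
from langchain.llms import OpenAI
from langchain.memory import ConversationEntityMemory

memory = ConversationEntityMemory(llm=OpenAI(temperature=0))
inputs = {"input": "Deven and Sam are working on a hackathon project."}
memory.load_memory_variables(inputs)  # extracts entities from the latest input
memory.save_context(inputs, {"output": "That sounds like a great project!"})
print(memory.entity_store.store)      # per-entity summaries in the default in-memory store
```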
159549d20e18-20 | class langchain.memory.ConversationKGMemory(*, chat_memory=None, output_key=None, input_key=None, return_messages=False, k=2, human_prefix='Human', ai_prefix='AI', kg=None, knowledge_extraction_prompt=PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template="You are a netw... | https://api.python.langchain.com/en/latest/modules/memory.html |
159549d20e18-21 | Area 51?\nAI: No, I didn't hear that. What do you know about Area 51?\nPerson #1: It's a secret military base in Nevada.\nAI: What do you know about Nevada?\nLast line of conversation:\nPerson #1: It's a state in the US. It's also the number 1 producer of gold in the US.\n\nOutput: (Nevada, is a, state)<|>(Nevada, is i... | https://api.python.langchain.com/en/latest/modules/memory.html |
159549d20e18-22 | store.\n\nOutput: NONE\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: What do you know about Descartes?\nAI: Descartes was a French philosopher, mathematician, and scientist who lived in the 17th century.\nPerson #1: The Descartes I'm referring to is a standup comedian and interior designer from Montreal... | https://api.python.langchain.com/en/latest/modules/memory.html |
159549d20e18-23 | line of conversation (for extraction):\nHuman: {input}\n\nOutput:", template_format='f-string', validate_template=True), entity_extraction_prompt=PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation betw... | https://api.python.langchain.com/en/latest/modules/memory.html |
159549d20e18-24 | history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX,... | https://api.python.langchain.com/en/latest/modules/memory.html |
159549d20e18-25 | doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I\'m working with Person #2.\nOutput: Langchain, Person #2\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{histo... | https://api.python.langchain.com/en/latest/modules/memory.html |
159549d20e18-26 | Bases: langchain.memory.chat_memory.BaseChatMemory
Knowledge graph memory for storing conversation memory.
Integrates with external knowledge graph to store and retrieve
information about knowledge triples in the conversation.
Parameters
chat_memory (langchain.schema.BaseChatMessageHistory) β
output_key (Optional[str]... | https://api.python.langchain.com/en/latest/modules/memory.html |
159549d20e18-27 | Return type
None
attribute ai_prefix: str = 'AI'ο | https://api.python.langchain.com/en/latest/modules/memory.html |
159549d20e18-28 | attribute entity_extraction_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the la... | https://api.python.langchain.com/en/latest/modules/memory.html |
159549d20e18-29 | going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various ... | https://api.python.langchain.com/en/latest/modules/memory.html |
159549d20e18-30 | #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I\'m working with Person #2.\nOutput: Langchain, Person #2\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHum... | https://api.python.langchain.com/en/latest/modules/memory.html |
159549d20e18-31 | attribute human_prefix: str = 'Human'ο
attribute k: int = 2ο
attribute kg: langchain.graphs.networkx_graph.NetworkxEntityGraph [Optional]ο | https://api.python.langchain.com/en/latest/modules/memory.html |
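A minimal sketch of the knowledge-graph memory: an LLM extracts knowledge triples into the NetworkX-backed graph, and load_memory_variables looks up facts relevant to the current input (an OpenAI key must be configured):

```python
from langchain.llms import OpenAI
from langchain.memory import ConversationKGMemory

memory = ConversationKGMemory(llm=OpenAI(temperature=0))
memory.save_context(
    {"input": "Sam is my friend."},
    {"output": "Nice! Tell me more about Sam."},
)
print(memory.load_memory_variables({"input": "Who is Sam?"}))
```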