# ontocast.tool.llm
Language Model (LLM) integration tool for OntoCast.

This module provides integration with various language models through LangChain, supporting both the OpenAI and Ollama providers. It enables text generation and structured data extraction.
## LLMTool

Bases: `Tool`

Tool for interacting with language models.

This class provides a unified interface for working with different language model providers (OpenAI and Ollama) through LangChain. It supports both synchronous and asynchronous operations.
Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `provider` | `str` | The LLM provider to use (default: `"openai"`). |
| `model` | `str` | The specific model to use (default: `"gpt-4o-mini"`). |
| `api_key` | `Optional[str]` | Optional API key for the provider. |
| `base_url` | `Optional[str]` | Optional base URL for the provider. |
| `temperature` | `float` | Temperature parameter for generation (default: 0.1). |
Source code in ontocast/tool/llm.py
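As a usage sketch (hypothetical values; assumes the package is installed and an API key is available in the environment), the documented attributes map directly onto constructor keywords:

```python
async def build_tool():
    # Hypothetical sketch: keyword names follow the attributes documented
    # above; an OpenAI API key is assumed to be set in the environment.
    from ontocast.tool.llm import LLMTool

    tool = LLMTool(
        provider="openai",      # or "ollama"
        model="gpt-4o-mini",
        temperature=0.1,
    )
    await tool.setup()          # configure the underlying LangChain model
    return tool
```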
### `llm` *(property)*
Get the underlying language model instance.
Returns:

| Type | Description |
| --- | --- |
| `BaseChatModel` | The configured language model. |
Raises:

| Type | Description |
| --- | --- |
| `RuntimeError` | If the LLM has not been properly initialized. |
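A minimal sketch of guarding against the documented `RuntimeError` (hypothetical flow; only `LLMTool`, `setup()`, and the `llm` property come from the docs above):

```python
async def get_model():
    # Sketch only: the `llm` property raises RuntimeError until the tool
    # has been initialized, so we set it up on demand.
    from ontocast.tool.llm import LLMTool

    tool = LLMTool(provider="openai", model="gpt-4o-mini")
    try:
        model = tool.llm
    except RuntimeError:
        await tool.setup()
        model = tool.llm
    return model  # a configured BaseChatModel
```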
### `__call__(*args, **kwds)`
Call the language model directly.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `*args` | `Any` | Positional arguments passed to the LLM. | `()` |
| `**kwds` | `Any` | Keyword arguments passed to the LLM. | `{}` |
Returns:

| Type | Description |
| --- | --- |
| `Any` | The LLM's response. |
Source code in ontocast/tool/llm.py
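A hypothetical sketch of calling the tool directly (the return type is `Any`, so what comes back depends on the provider):

```python
async def ask(question: str):
    # Sketch: __call__ forwards its arguments straight to the underlying
    # LLM, so the tool can be used like the model itself.
    from ontocast.tool.llm import LLMTool

    tool = await LLMTool.acreate(provider="openai", model="gpt-4o-mini")
    return tool(question)  # response type depends on the provider
```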
### `__init__(**kwargs)`
Initialize the LLM tool.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `**kwargs` | | Additional keyword arguments passed to the parent class. | `{}` |
### `acreate(**kwargs)` *(async classmethod)*
Create a new LLM tool instance asynchronously.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `**kwargs` | | Keyword arguments for initialization. | `{}` |
Returns:

| Type | Description |
| --- | --- |
| `LLMTool` | A new instance of the LLM tool. |
Source code in ontocast/tool/llm.py
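A sketch of async creation for a local Ollama backend (the model name and base URL below are examples, not defaults shipped with OntoCast):

```python
async def make_local_tool():
    # Hypothetical values: "llama3" and the localhost URL are assumptions
    # for illustration; acreate() accepts the documented attribute keywords.
    from ontocast.tool.llm import LLMTool

    return await LLMTool.acreate(
        provider="ollama",
        model="llama3",
        base_url="http://localhost:11434",
    )
```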
### `complete(prompt, **kwargs)` *(async)*
Generate a completion for the given prompt.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `prompt` | `str` | The input prompt for generation. | *required* |
| `**kwargs` | | Additional keyword arguments for generation. | `{}` |
Returns:

| Type | Description |
| --- | --- |
| `Any` | The generated completion. |
Source code in ontocast/tool/llm.py
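A usage sketch for text completion (the prompt is hypothetical; a configured API key is assumed):

```python
async def summarize(text: str) -> str:
    # Sketch: complete() takes a plain string prompt and returns the
    # generated completion.
    from ontocast.tool.llm import LLMTool

    tool = await LLMTool.acreate(provider="openai", model="gpt-4o-mini")
    result = await tool.complete(f"Summarize in one sentence:\n{text}")
    return str(result)
```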
### `create(**kwargs)` *(classmethod)*
Create a new LLM tool instance synchronously.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `**kwargs` | | Keyword arguments for initialization. | `{}` |
Returns:

| Type | Description |
| --- | --- |
| `LLMTool` | A new instance of the LLM tool. |
Source code in ontocast/tool/llm.py
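For contexts without an event loop, a sketch of the synchronous counterpart (keyword arguments are assumed to match the documented attributes):

```python
def make_tool_sync():
    # Sketch: create() is the synchronous counterpart of acreate().
    from ontocast.tool.llm import LLMTool

    return LLMTool.create(provider="openai", model="gpt-4o-mini")
```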
### `extract(prompt, output_schema, **kwargs)` *(async)*
Extract structured data from the prompt according to a schema.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `prompt` | `str` | The input prompt for extraction. | *required* |
| `output_schema` | `Type[T]` | The Pydantic model class defining the output structure. | *required* |
| `**kwargs` | | Additional keyword arguments for extraction. | `{}` |
Returns:

| Type | Description |
| --- | --- |
| `T` | The extracted data conforming to the output schema. |
Source code in ontocast/tool/llm.py
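A sketch of structured extraction with a hypothetical Pydantic schema (`Entity` and its fields are invented for illustration; only `extract(prompt, output_schema)` comes from the docs above):

```python
async def extract_entity(text: str):
    # Sketch: extract() returns an instance of the given Pydantic model
    # class, i.e. the type parameter T.
    from pydantic import BaseModel
    from ontocast.tool.llm import LLMTool

    class Entity(BaseModel):   # hypothetical output schema
        name: str
        category: str

    tool = await LLMTool.acreate(provider="openai", model="gpt-4o-mini")
    return await tool.extract(
        f"Identify the main entity in: {text}",
        output_schema=Entity,
    )
```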
### `setup()` *(async)*
Set up the language model based on the configured provider.
Raises:

| Type | Description |
| --- | --- |
| `ValueError` | If the provider is not supported. |
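The internal dispatch is not shown here; as a minimal sketch of the pattern implied by the two documented providers and the `ValueError`, where the mapping values mirror LangChain's chat model class names (an assumption, not OntoCast's actual code):

```python
def resolve_chat_class_name(provider: str) -> str:
    # Sketch of the provider dispatch implied by the docs above; the
    # values mirror LangChain's ChatOpenAI / ChatOllama class names.
    known = {"openai": "ChatOpenAI", "ollama": "ChatOllama"}
    if provider not in known:
        raise ValueError(f"Unsupported LLM provider: {provider}")
    return known[provider]
```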