
Commit d8a204a

fix some docstring issues affecting rendering (ag2ai#1739)
* fix some docstring issues affecting rendering
* Update pydoc-markdown.yml
* undo double backtick
* Update compressible_agent.py
1 parent 2750391 commit d8a204a

File tree

10 files changed: +142 additions, -129 deletions


autogen/agentchat/contrib/compressible_agent.py

Lines changed: 3 additions & 4 deletions
@@ -84,10 +84,9 @@ def __init__(
             compress_config (dict or True/False): config for compression before oai_reply. Default to False.
                 You should contain the following keys:
                 - "mode" (Optional, str, default to "TERMINATE"): Choose from ["COMPRESS", "TERMINATE", "CUSTOMIZED"].
-                    "TERMINATE": terminate the conversation ONLY when token count exceeds the max limit of current model.
-                        `trigger_count` is NOT used in this mode.
-                    "COMPRESS": compress the messages when the token count exceeds the limit.
-                    "CUSTOMIZED": pass in a customized function to compress the messages.
+                    1. `TERMINATE`: terminate the conversation ONLY when token count exceeds the max limit of current model. `trigger_count` is NOT used in this mode.
+                    2. `COMPRESS`: compress the messages when the token count exceeds the limit.
+                    3. `CUSTOMIZED`: pass in a customized function to compress the messages.
                 - "compress_function" (Optional, callable, default to None): Must be provided when mode is "CUSTOMIZED".
                     The function should takes a list of messages and returns a tuple of (is_compress_success: bool, compressed_messages: List[Dict]).
                 - "trigger_count" (Optional, float, int, default to 0.7): the threshold to trigger compression.
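
For context on the `CUSTOMIZED` mode above: the documented contract is a callable that takes a list of messages and returns `(is_compress_success: bool, compressed_messages: List[Dict])`. A minimal illustrative sketch of such a callable — `truncate_compress` is hypothetical, not part of autogen:

```python
from typing import Dict, List, Tuple


def truncate_compress(messages: List[Dict]) -> Tuple[bool, List[Dict]]:
    """Illustrative compress_function: keep the first (system) message and
    the last two messages, dropping everything in between."""
    if len(messages) <= 3:
        return False, messages  # nothing worth compressing
    compressed = [messages[0]] + messages[-2:]
    return True, compressed


# The agent would receive such a callable via compress_config, e.g.:
compress_config = {
    "mode": "CUSTOMIZED",
    "compress_function": truncate_compress,
}
```

The callable's boolean lets the agent fall back to the original history when compression was not applied.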

autogen/agentchat/contrib/qdrant_retrieve_user_proxy_agent.py

Lines changed: 3 additions & 3 deletions
@@ -29,12 +29,12 @@ def __init__(
             name (str): name of the agent.
             human_input_mode (str): whether to ask for human inputs every time a message is received.
                 Possible values are "ALWAYS", "TERMINATE", "NEVER".
-                (1) When "ALWAYS", the agent prompts for human input every time a message is received.
+                1. When "ALWAYS", the agent prompts for human input every time a message is received.
                     Under this mode, the conversation stops when the human input is "exit",
                     or when is_termination_msg is True and there is no human input.
-                (2) When "TERMINATE", the agent only prompts for human input only when a termination message is received or
+                2. When "TERMINATE", the agent only prompts for human input only when a termination message is received or
                     the number of auto reply reaches the max_consecutive_auto_reply.
-                (3) When "NEVER", the agent will never prompt for human input. Under this mode, the conversation stops
+                3. When "NEVER", the agent will never prompt for human input. Under this mode, the conversation stops
                     when the number of auto reply reaches the max_consecutive_auto_reply or when is_termination_msg is True.
             is_termination_msg (function): a function that takes a message in the form of a dictionary
                 and returns a boolean value indicating if this received message is a termination message.
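
The three `human_input_mode` values renumbered in this hunk amount to a simple decision rule. A hedged restatement in plain Python — illustrative only, not autogen's actual control flow:

```python
def should_prompt_human(
    mode: str,
    is_termination_msg: bool,
    auto_reply_count: int,
    max_consecutive_auto_reply: int,
) -> bool:
    # "ALWAYS": prompt on every received message.
    if mode == "ALWAYS":
        return True
    # "TERMINATE": prompt only on a termination message or when the
    # auto-reply budget is exhausted.
    if mode == "TERMINATE":
        return is_termination_msg or auto_reply_count >= max_consecutive_auto_reply
    # "NEVER": never prompt; the conversation stops by other means.
    if mode == "NEVER":
        return False
    raise ValueError(f"unknown human_input_mode: {mode!r}")
```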

autogen/agentchat/contrib/retrieve_user_proxy_agent.py

Lines changed: 8 additions & 7 deletions
@@ -77,17 +77,17 @@ def __init__(
         retrieve_config: Optional[Dict] = None,  # config for the retrieve agent
         **kwargs,
     ):
-        """
+        r"""
         Args:
             name (str): name of the agent.
             human_input_mode (str): whether to ask for human inputs every time a message is received.
                 Possible values are "ALWAYS", "TERMINATE", "NEVER".
-                (1) When "ALWAYS", the agent prompts for human input every time a message is received.
+                1. When "ALWAYS", the agent prompts for human input every time a message is received.
                     Under this mode, the conversation stops when the human input is "exit",
                     or when is_termination_msg is True and there is no human input.
-                (2) When "TERMINATE", the agent only prompts for human input only when a termination message is received or
+                2. When "TERMINATE", the agent only prompts for human input only when a termination message is received or
                     the number of auto reply reaches the max_consecutive_auto_reply.
-                (3) When "NEVER", the agent will never prompt for human input. Under this mode, the conversation stops
+                3. When "NEVER", the agent will never prompt for human input. Under this mode, the conversation stops
                     when the number of auto reply reaches the max_consecutive_auto_reply or when is_termination_msg is True.
             is_termination_msg (function): a function that takes a message in the form of a dictionary
                 and returns a boolean value indicating if this received message is a termination message.

@@ -136,10 +136,11 @@ def __init__(
             - custom_text_types (Optional, List[str]): a list of file types to be processed. Default is `autogen.retrieve_utils.TEXT_FORMATS`.
                 This only applies to files under the directories in `docs_path`. Explicitly included files and urls will be chunked regardless of their types.
             - recursive (Optional, bool): whether to search documents recursively in the docs_path. Default is True.
-            **kwargs (dict): other kwargs in [UserProxyAgent](../user_proxy_agent#__init__).
+            `**kwargs` (dict): other kwargs in [UserProxyAgent](../user_proxy_agent#__init__).
+
+        Example:

-        Example of overriding retrieve_docs:
-        If you have set up a customized vector db, and it's not compatible with chromadb, you can easily plug in it with below code.
+        Example of overriding retrieve_docs - If you have set up a customized vector db, and it's not compatible with chromadb, you can easily plug in it with below code.
         ```python
         class MyRetrieveUserProxyAgent(RetrieveUserProxyAgent):
             def query_vector_db(

autogen/agentchat/utils.py

Lines changed: 18 additions & 13 deletions
@@ -25,27 +25,32 @@ def consolidate_chat_info(chat_info, uniform_sender=None) -> None:


 def gather_usage_summary(agents: List[Agent]) -> Tuple[Dict[str, any], Dict[str, any]]:
-    """Gather usage summary from all agents.
+    r"""Gather usage summary from all agents.

     Args:
         agents: (list): List of agents.

     Returns:
         tuple: (total_usage_summary, actual_usage_summary)

-    Example return:
-    total_usage_summary = {
-        'total_cost': 0.0006090000000000001,
-        'gpt-35-turbo':
-        {
-            'cost': 0.0006090000000000001,
-            'prompt_tokens': 242,
-            'completion_tokens': 123,
-            'total_tokens': 365
-        }
+    Example:
+
+    ```python
+    total_usage_summary = {
+        "total_cost": 0.0006090000000000001,
+        "gpt-35-turbo": {
+            "cost": 0.0006090000000000001,
+            "prompt_tokens": 242,
+            "completion_tokens": 123,
+            "total_tokens": 365
+        }
     }
-    `actual_usage_summary` follows the same format.
-    If none of the agents incurred any cost (not having a client), then the total_usage_summary and actual_usage_summary will be {'total_cost': 0}.
+    }
+    ```
+
+    Note:
+
+    `actual_usage_summary` follows the same format.
+    If none of the agents incurred any cost (not having a client), then the total_usage_summary and actual_usage_summary will be `{'total_cost': 0}`.
     """

 def aggregate_summary(usage_summary: Dict[str, any], agent_summary: Dict[str, any]) -> None:
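
The summary shape documented above — a `"total_cost"` key plus one nested dict per model — can be aggregated mechanically across agents. A sketch under the assumption that per-agent summaries share that shape; `merge_usage` is illustrative, not autogen's `aggregate_summary`:

```python
from typing import Any, Dict, List


def merge_usage(summaries: List[Dict[str, Any]]) -> Dict[str, Any]:
    """Merge per-agent usage dicts into the documented summary shape:
    {'total_cost': float, '<model>': {'cost', 'prompt_tokens',
    'completion_tokens', 'total_tokens'}}."""
    total: Dict[str, Any] = {"total_cost": 0}
    for summary in summaries:
        if not summary:  # agent without a client contributes nothing
            continue
        total["total_cost"] += summary.get("total_cost", 0)
        for model, usage in summary.items():
            if model == "total_cost":
                continue
            slot = total.setdefault(
                model,
                {"cost": 0, "prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0},
            )
            for key in slot:
                slot[key] += usage.get(key, 0)
    return total


merged = merge_usage([
    {"total_cost": 0.0003, "gpt-35-turbo": {"cost": 0.0003, "prompt_tokens": 100, "completion_tokens": 50, "total_tokens": 150}},
    {"total_cost": 0.0003, "gpt-35-turbo": {"cost": 0.0003, "prompt_tokens": 142, "completion_tokens": 73, "total_tokens": 215}},
])
```

With no contributing agents, `merge_usage([])` yields `{'total_cost': 0}`, matching the documented fallback.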

autogen/cache/cache.py

Lines changed: 0 additions & 10 deletions
@@ -14,16 +14,6 @@ class Cache:
     Attributes:
         config (Dict[str, Any]): A dictionary containing cache configuration.
         cache: The cache instance created based on the provided configuration.
-
-    Methods:
-        redis(cache_seed=42, redis_url="redis://localhost:6379/0"): Static method to create a Redis cache instance.
-        disk(cache_seed=42, cache_path_root=".cache"): Static method to create a Disk cache instance.
-        __init__(self, config): Initializes the Cache with the given configuration.
-        __enter__(self): Context management entry, returning the cache instance.
-        __exit__(self, exc_type, exc_value, traceback): Context management exit.
-        get(self, key, default=None): Retrieves an item from the cache.
-        set(self, key, value): Sets an item in the cache.
-        close(self): Closes the cache.
     """

     ALLOWED_CONFIG_KEYS = ["cache_seed", "redis_url", "cache_path_root"]

autogen/cache/cache_factory.py

Lines changed: 10 additions & 4 deletions
@@ -28,11 +28,17 @@ def cache_factory(seed, redis_url=None, cache_path_root=".cache"):
     and the provided redis_url.

     Examples:
-        Creating a Redis cache
-        > redis_cache = cache_factory("myseed", "redis://localhost:6379/0")

-        Creating a Disk cache
-        > disk_cache = cache_factory("myseed", None)
+
+    Creating a Redis cache
+
+    ```python
+    redis_cache = cache_factory("myseed", "redis://localhost:6379/0")
+    ```
+    Creating a Disk cache
+
+    ```python
+    disk_cache = cache_factory("myseed", None)
+    ```
     """
     if RedisCache is not None and redis_url is not None:
         return RedisCache(seed, redis_url)
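
The `if RedisCache is not None and redis_url is not None` branch visible at the end of the hunk suggests the dispatch rule: a Redis cache when a URL is supplied (and the Redis backend imported), otherwise a disk cache. A standalone sketch with placeholder cache classes, not autogen's:

```python
class DiskCache:
    """Placeholder standing in for autogen's disk-backed cache."""
    def __init__(self, seed, path_root):
        self.seed, self.path_root = seed, path_root


class RedisCache:
    """Placeholder standing in for autogen's Redis-backed cache."""
    def __init__(self, seed, url):
        self.seed, self.url = seed, url


def cache_factory(seed, redis_url=None, cache_path_root=".cache"):
    # Dispatch mirroring the documented behavior: Redis when a URL is
    # given, disk otherwise. (The real code also guards against the
    # Redis backend failing to import.)
    if redis_url is not None:
        return RedisCache(seed, redis_url)
    return DiskCache(seed, cache_path_root)
```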

autogen/function_utils.py

Lines changed: 16 additions & 15 deletions
@@ -225,21 +225,22 @@ def get_function_schema(f: Callable[..., Any], *, name: Optional[str] = None, de
         TypeError: If the function is not annotated

     Examples:
-        ```
-        def f(a: Annotated[str, "Parameter a"], b: int = 2, c: Annotated[float, "Parameter c"] = 0.1) -> None:
-            pass
-
-        get_function_schema(f, description="function f")
-
-        # {'type': 'function',
-        #  'function': {'description': 'function f',
-        #   'name': 'f',
-        #   'parameters': {'type': 'object',
-        #    'properties': {'a': {'type': 'str', 'description': 'Parameter a'},
-        #     'b': {'type': 'int', 'description': 'b'},
-        #     'c': {'type': 'float', 'description': 'Parameter c'}},
-        #    'required': ['a']}}}
-        ```
+
+    ```python
+    def f(a: Annotated[str, "Parameter a"], b: int = 2, c: Annotated[float, "Parameter c"] = 0.1) -> None:
+        pass
+
+    get_function_schema(f, description="function f")
+
+    # {'type': 'function',
+    #  'function': {'description': 'function f',
+    #   'name': 'f',
+    #   'parameters': {'type': 'object',
+    #    'properties': {'a': {'type': 'str', 'description': 'Parameter a'},
+    #     'b': {'type': 'int', 'description': 'b'},
+    #     'c': {'type': 'float', 'description': 'Parameter c'}},
+    #    'required': ['a']}}}
+    ```

     """
     typed_signature = get_typed_signature(f)
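
The schema in this example is driven by `Annotated` metadata: a parameter's description comes from its annotation when present, and falls back to the parameter name (hence `'b': 'b'`). A small sketch of that lookup using only the standard library — `param_descriptions` is illustrative, not autogen's implementation:

```python
import inspect
from typing import Annotated, get_args, get_origin, get_type_hints


def f(a: Annotated[str, "Parameter a"], b: int = 2, c: Annotated[float, "Parameter c"] = 0.1) -> None:
    pass


def param_descriptions(func) -> dict:
    """Map each parameter to its Annotated description, falling back to
    the parameter name, mirroring the schema shown above."""
    # include_extras=True preserves the Annotated wrapper and its metadata.
    hints = get_type_hints(func, include_extras=True)
    out = {}
    for name in inspect.signature(func).parameters:
        hint = hints.get(name)
        if get_origin(hint) is Annotated:
            out[name] = get_args(hint)[1]  # the metadata string
        else:
            out[name] = name
    return out
```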

autogen/oai/openai_utils.py

Lines changed: 61 additions & 62 deletions
@@ -103,7 +103,7 @@ def get_config_list(
         list: A list of configs for OepnAI API calls.

     Example:
-    ```
+    ```python
     # Define a list of API keys
     api_keys = ['key1', 'key2', 'key3']

@@ -292,32 +292,32 @@ def config_list_from_models(
         list: A list of configs for OpenAI API calls, each including model information.

     Example:
-        ```
-        # Define the path where the API key files are located
-        key_file_path = '/path/to/key/files'
-
-        # Define the file names for the OpenAI and Azure OpenAI API keys and bases
-        openai_api_key_file = 'key_openai.txt'
-        aoai_api_key_file = 'key_aoai.txt'
-        aoai_api_base_file = 'base_aoai.txt'
-
-        # Define the list of models for which to create configurations
-        model_list = ['gpt-4', 'gpt-3.5-turbo']
-
-        # Call the function to get a list of configuration dictionaries
-        config_list = config_list_from_models(
-            key_file_path=key_file_path,
-            openai_api_key_file=openai_api_key_file,
-            aoai_api_key_file=aoai_api_key_file,
-            aoai_api_base_file=aoai_api_base_file,
-            model_list=model_list
-        )
+    ```python
+    # Define the path where the API key files are located
+    key_file_path = '/path/to/key/files'
+
+    # Define the file names for the OpenAI and Azure OpenAI API keys and bases
+    openai_api_key_file = 'key_openai.txt'
+    aoai_api_key_file = 'key_aoai.txt'
+    aoai_api_base_file = 'base_aoai.txt'
+
+    # Define the list of models for which to create configurations
+    model_list = ['gpt-4', 'gpt-3.5-turbo']
+
+    # Call the function to get a list of configuration dictionaries
+    config_list = config_list_from_models(
+        key_file_path=key_file_path,
+        openai_api_key_file=openai_api_key_file,
+        aoai_api_key_file=aoai_api_key_file,
+        aoai_api_base_file=aoai_api_base_file,
+        model_list=model_list
+    )

-        # The `config_list` will contain configurations for the specified models, for example:
-        # [
-        #     {'api_key': '...', 'base_url': 'https://api.openai.com', 'model': 'gpt-4'},
-        #     {'api_key': '...', 'base_url': 'https://api.openai.com', 'model': 'gpt-3.5-turbo'}
-        # ]
+    # The `config_list` will contain configurations for the specified models, for example:
+    # [
+    #     {'api_key': '...', 'base_url': 'https://api.openai.com', 'model': 'gpt-4'},
+    #     {'api_key': '...', 'base_url': 'https://api.openai.com', 'model': 'gpt-3.5-turbo'}
+    # ]
     ```
     """
     config_list = config_list_openai_aoai(

@@ -383,40 +383,39 @@ def filter_config(config_list, filter_dict):
         in `filter_dict`.

     Example:
-        ```
-        # Example configuration list with various models and API types
-        configs = [
-            {'model': 'gpt-3.5-turbo'},
-            {'model': 'gpt-4'},
-            {'model': 'gpt-3.5-turbo', 'api_type': 'azure'},
-            {'model': 'gpt-3.5-turbo', 'tags': ['gpt35_turbo', 'gpt-35-turbo']},
-        ]
-
-        # Define filter criteria to select configurations for the 'gpt-3.5-turbo' model
-        # that are also using the 'azure' API type
-        filter_criteria = {
-            'model': ['gpt-3.5-turbo'],  # Only accept configurations for 'gpt-3.5-turbo'
-            'api_type': ['azure']  # Only accept configurations for 'azure' API type
-        }
-
-        # Apply the filter to the configuration list
-        filtered_configs = filter_config(configs, filter_criteria)
-
-        # The resulting `filtered_configs` will be:
-        # [{'model': 'gpt-3.5-turbo', 'api_type': 'azure', ...}]
-
-
-        # Define a filter to select a given tag
-        filter_criteria = {
-            'tags': ['gpt35_turbo'],
-        }
-
-        # Apply the filter to the configuration list
-        filtered_configs = filter_config(configs, filter_criteria)
-
-        # The resulting `filtered_configs` will be:
-        # [{'model': 'gpt-3.5-turbo', 'tags': ['gpt35_turbo', 'gpt-35-turbo']}]
-
+    ```python
+    # Example configuration list with various models and API types
+    configs = [
+        {'model': 'gpt-3.5-turbo'},
+        {'model': 'gpt-4'},
+        {'model': 'gpt-3.5-turbo', 'api_type': 'azure'},
+        {'model': 'gpt-3.5-turbo', 'tags': ['gpt35_turbo', 'gpt-35-turbo']},
+    ]
+
+    # Define filter criteria to select configurations for the 'gpt-3.5-turbo' model
+    # that are also using the 'azure' API type
+    filter_criteria = {
+        'model': ['gpt-3.5-turbo'],  # Only accept configurations for 'gpt-3.5-turbo'
+        'api_type': ['azure']  # Only accept configurations for 'azure' API type
+    }
+
+    # Apply the filter to the configuration list
+    filtered_configs = filter_config(configs, filter_criteria)
+
+    # The resulting `filtered_configs` will be:
+    # [{'model': 'gpt-3.5-turbo', 'api_type': 'azure', ...}]
+
+
+    # Define a filter to select a given tag
+    filter_criteria = {
+        'tags': ['gpt35_turbo'],
+    }
+
+    # Apply the filter to the configuration list
+    filtered_configs = filter_config(configs, filter_criteria)
+
+    # The resulting `filtered_configs` will be:
+    # [{'model': 'gpt-3.5-turbo', 'tags': ['gpt35_turbo', 'gpt-35-turbo']}]
     ```

     Note:

@@ -467,7 +466,7 @@ def config_list_from_json(
         keys representing field names and values being lists or sets of acceptable values for those fields.

     Example:
-    ```
+    ```python
     # Suppose we have an environment variable 'CONFIG_JSON' with the following content:
     # '[{"model": "gpt-3.5-turbo", "api_type": "azure"}, {"model": "gpt-4"}]'

@@ -511,7 +510,7 @@ def get_config(
     Constructs a configuration dictionary for a single model with the provided API configurations.

     Example:
-    ```
+    ```python
     config = get_config(
         api_key="sk-abcdef1234567890",
         base_url="https://api.openai.com",
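
A sketch of the matching semantics the `filter_config` example above implies: every filter key must match, and list-valued fields such as `'tags'` match on any overlap. The `matches` helper is an illustration; the real `filter_config` may treat missing keys and edge cases differently:

```python
def matches(config: dict, filter_dict: dict) -> bool:
    """Return True if `config` satisfies every criterion in `filter_dict`."""
    for key, accepted in filter_dict.items():
        value = config.get(key)
        if isinstance(value, list):
            # List-valued field (e.g. 'tags'): any overlap counts.
            if not set(value) & set(accepted):
                return False
        elif value not in accepted:
            # Scalar field: value must be one of the accepted values.
            return False
    return True


configs = [
    {"model": "gpt-3.5-turbo"},
    {"model": "gpt-4"},
    {"model": "gpt-3.5-turbo", "api_type": "azure"},
    {"model": "gpt-3.5-turbo", "tags": ["gpt35_turbo", "gpt-35-turbo"]},
]
filtered = [c for c in configs if matches(c, {"tags": ["gpt35_turbo"]})]
```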

autogen/retrieve_utils.py

Lines changed: 14 additions & 8 deletions
@@ -276,8 +276,10 @@ def create_vector_db_from_dir(
         custom_text_types (Optional, List[str]): a list of file types to be processed. Default is TEXT_FORMATS.
         recursive (Optional, bool): whether to search documents recursively in the dir_path. Default is True.
         extra_docs (Optional, bool): whether to add more documents in the collection. Default is False
+
     Returns:
-        API: the chromadb client.
+
+        The chromadb client.
     """
     if client is None:
         client = chromadb.PersistentClient(path=db_path)

@@ -353,13 +355,17 @@ def query_vector_db(
             functions, you can pass it here, follow the examples in `https://docs.trychroma.com/embeddings`.

     Returns:
-        QueryResult: the query result. The format is:
-            class QueryResult(TypedDict):
-                ids: List[IDs]
-                embeddings: Optional[List[List[Embedding]]]
-                documents: Optional[List[List[Document]]]
-                metadatas: Optional[List[List[Metadata]]]
-                distances: Optional[List[List[float]]]
+
+        The query result. The format is:
+
+        ```python
+        class QueryResult(TypedDict):
+            ids: List[IDs]
+            embeddings: Optional[List[List[Embedding]]]
+            documents: Optional[List[List[Document]]]
+            metadatas: Optional[List[List[Metadata]]]
+            distances: Optional[List[List[float]]]
+        ```
     """
     if client is None:
         client = chromadb.PersistentClient(path=db_path)
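
The `QueryResult` shape documented in this hunk can be written out as a runnable `TypedDict`. The element aliases below are simplified stand-ins for chromadb's actual types (`IDs`, `Embedding`, etc.), so treat this as an illustration of the nesting rather than chromadb's definition:

```python
from typing import Dict, List, Optional, TypedDict

# Simplified aliases for illustration; chromadb defines richer types.
ID = str
Embedding = float
Document = str
Metadata = Dict[str, str]


class QueryResult(TypedDict):
    # Outer list: one entry per query text; inner list: one entry per hit.
    ids: List[List[ID]]
    embeddings: Optional[List[List[Embedding]]]
    documents: Optional[List[List[Document]]]
    metadatas: Optional[List[List[Metadata]]]
    distances: Optional[List[List[float]]]


result: QueryResult = {
    "ids": [["doc1", "doc2"]],
    "embeddings": None,  # fields not requested come back as None
    "documents": [["first chunk", "second chunk"]],
    "metadatas": None,
    "distances": [[0.12, 0.34]],
}
```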

0 commit comments
