Commit 19db837

Restore folders accidentally removed during merge
1 parent cfbb354 commit 19db837

File tree

14 files changed: +1376 lines added, 0 removed

Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
# How to Integrate ChatGPT's API With Python Projects

This folder contains supporting materials for the Real Python tutorial [How to Integrate ChatGPT's API With Python Projects](https://realpython.com/chatgpt-api-python/).
Lines changed: 9 additions & 0 deletions
@@ -0,0 +1,9 @@
from openai import OpenAI

client = OpenAI()

text_response = client.responses.create(
    model="gpt-5", input="Tell me a joke about Python programming"
)

print(f"Joke:\n{text_response.output_text}")
Lines changed: 24 additions & 0 deletions
@@ -0,0 +1,24 @@
from openai import OpenAI

user_input = input("How can I help you? ")

client = OpenAI()

code_response = client.responses.create(
    model="gpt-5",
    input=[
        {
            "role": "developer",
            "content": (
                "You are a Python coding assistant. "
                "Only accept Python related questions."
            ),
        },
        {
            "role": "user",
            "content": f"{user_input}",
        },
    ],
)

print(f"\n{code_response.output_text}")
Lines changed: 38 additions & 0 deletions
@@ -0,0 +1,38 @@
from openai import OpenAI
from pydantic import BaseModel

client = OpenAI()


class CodeOutput(BaseModel):
    function_name: str
    code: str
    explanation: str
    example_usage: str


code_response = client.responses.parse(
    model="gpt-5",
    input=[
        {
            "role": "developer",
            "content": (
                "You are a coding assistant. Generate clean, "
                "well-documented Python code."
            ),
        },
        {
            "role": "user",
            "content": "Write a simple Python function to add two numbers",
        },
    ],
    text_format=CodeOutput,
)

code_result = code_response.output_parsed

print(f"Function Name: {code_result.function_name}")
print("\nCode:")
print(code_result.code)
print(f"\nExplanation: {code_result.explanation}")
print(f"\nExample Usage:\n{code_result.example_usage}")
Lines changed: 5 additions & 0 deletions
@@ -0,0 +1,5 @@
from openai import OpenAI

client = OpenAI()
print("OpenAI client created successfully!")
print(f"Using API key: {client.api_key[:8]}...")

tinydb/geopandas-basics/README.md

Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
# GeoPandas Basics: Maps, Projections, and Spatial Joins

This folder provides the code examples for the tutorial [GeoPandas Basics: Maps, Projections, and Spatial Joins](https://realpython.com/geopandas/).

tinydb/geopandas-basics/geopandas-basics.ipynb

Lines changed: 1165 additions & 0 deletions
Large diffs are not rendered by default.

tinydb/ollama-python-sdk/README.md

Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
# How to Integrate Local LLMs With Ollama and Python

This folder provides the code examples for the Real Python tutorial [How to Integrate Local LLMs With Ollama and Python](https://realpython.com/ollama-python/).

tinydb/ollama-python-sdk/chat.py

Lines changed: 11 additions & 0 deletions
@@ -0,0 +1,11 @@
from ollama import chat

messages = [
    {
        "role": "user",
        "content": "Explain what Python is in one sentence.",
    },
]

response = chat(model="llama3.2:latest", messages=messages)
print(response.message.content)
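Note: chat() also accepts stream=True, in which case it yields partial responses instead of one complete reply. A minimal sketch, assuming streamed chunks expose the same message.content attribute used above and that the llama3.2:latest model is pulled locally:

from ollama import chat

messages = [
    {"role": "user", "content": "Explain what Python is in one sentence."},
]

# Print the reply piece by piece as the local model produces it.
for chunk in chat(model="llama3.2:latest", messages=messages, stream=True):
    print(chunk.message.content, end="", flush=True)
print()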
Lines changed: 24 additions & 0 deletions
@@ -0,0 +1,24 @@
from ollama import chat

messages = [
    {
        "role": "system",
        "content": "You are an expert Python tutor.",
    },
    {
        "role": "user",
        "content": "Define list comprehensions in a sentence.",
    },
]
response = chat(model="llama3.2:latest", messages=messages)
print(response.message.content)

messages.append(response.message)  # Keep context
messages.append(
    {
        "role": "user",
        "content": "Provide a short, practical example.",
    }
)
response = chat(model="llama3.2:latest", messages=messages)
print(response.message.content)
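Note: the append-and-resend pattern above extends naturally to an interactive loop where every turn stays in messages. A minimal sketch (the loop, the prompt text, and the quit command are illustrative, not part of this commit):

from ollama import chat

messages = [
    {"role": "system", "content": "You are an expert Python tutor."},
]

# Simple REPL-style chat: each user turn and model reply is appended so
# the local model keeps the full conversation as context.
while True:
    user_input = input("You (or 'quit' to exit): ")
    if user_input.strip().lower() == "quit":
        break
    messages.append({"role": "user", "content": user_input})
    response = chat(model="llama3.2:latest", messages=messages)
    messages.append(response.message)
    print(response.message.content)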
