Slide 1: AI vs Machine Learning
Artificial Intelligence (AI) and Machine Learning (ML) are often used interchangeably, but they are distinct concepts. AI is the broader field: building systems that can perform tasks which typically require human intelligence. Machine Learning is a subset of AI focused on algorithms whose performance improves through experience and data.
# AI: Rule-based system for chess
def ai_chess_move(board):
    # is_check and the find_best_* helpers stand in for hand-coded rules
    if is_check(board):
        return find_best_defensive_move(board)
    else:
        return find_best_offensive_move(board)

# ML: Learning to play chess from data
def ml_chess_move(board, model):
    features = extract_features(board)  # extract_features is a placeholder
    return model.predict(features)

# Train ML model on past games
model = train_ml_model(historical_chess_data)
Slide 2: Deep Learning vs Generative AI
Deep Learning is a subset of Machine Learning that uses neural networks with multiple layers to extract high-level features from raw input. Generative AI, while often utilizing deep learning techniques, focuses on creating new content rather than just classifying or predicting.
import tensorflow as tf

# Deep Learning: Image classification with a small CNN
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation='softmax')
])
# Generative AI: Text Generation
class TextGenerator(tf.keras.Model):
    def __init__(self, vocab_size, embedding_dim, rnn_units):
        super().__init__()
        self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
        self.gru = tf.keras.layers.GRU(rnn_units, return_sequences=True, return_state=True)
        self.dense = tf.keras.layers.Dense(vocab_size)

    def call(self, inputs, states=None, return_state=False, training=False):
        x = self.embedding(inputs, training=training)
        if states is None:
            states = self.gru.get_initial_state(x)
        x, states = self.gru(x, initial_state=states, training=training)
        x = self.dense(x, training=training)
        if return_state:
            return x, states
        else:
            return x
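A minimal usage sketch (the vocabulary size and layer widths below are arbitrary illustration values, not trained weights): one forward pass yields logits over the vocabulary, and sampling the next token from those logits, step by step, is what makes the model generative rather than purely predictive.

# Hypothetical sizes for illustration; real values come from your dataset
gen = TextGenerator(vocab_size=65, embedding_dim=256, rnn_units=1024)
token_ids = tf.constant([[1, 5, 9]])                # a toy batch of token ids
logits, states = gen(token_ids, return_state=True)  # logits shape: (1, 3, 65)
next_id = tf.random.categorical(logits[:, -1, :], num_samples=1)  # sample the next token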
Slide 3: ANI to ASI
The journey from Artificial Narrow Intelligence (ANI) to Artificial Superintelligence (ASI) represents the evolution of AI capabilities. ANI excels at specific tasks, while Artificial General Intelligence (AGI) aims to match human-level intelligence across various domains. ASI, the theoretical pinnacle, would surpass human intelligence in all areas.
class ANI:
    def perform_specific_task(self, task):
        if task == "chess":
            return self.play_chess()
        elif task == "image_recognition":
            return self.recognize_image()
        else:
            return "Task not supported"

class AGI:
    def perform_any_task(self, task):
        return self.general_problem_solving(task)

class ASI:
    def perform_task_beyond_human(self, task):
        solution = self.superintelligent_reasoning(task)
        return self.optimize_beyond_human_capability(solution)

# Conceptual usage (the helper methods above are illustrative stubs, not implementations)
ani = ANI()
print(ani.perform_specific_task("chess"))  # Output: Chess moves
agi = AGI()
print(agi.perform_any_task("Write a poem"))  # Output: Generated poem
asi = ASI()
print(asi.perform_task_beyond_human("Solve climate change"))  # Output: Innovative solution
Slide 4: LLM Key Concepts
Large Language Models (LLMs) are advanced AI systems that process and generate human-like text. Key concepts include tokenization (breaking text into smaller units), attention mechanisms (focusing on relevant parts of input), and transformers (architecture for processing sequential data).
import torch
from transformers import BertTokenizer, BertModel

# Tokenization
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
text = "Hello, how are you?"
tokens = tokenizer.tokenize(text)
print("Tokens:", tokens)

# Attention mechanism and Transformer
# output_attentions=True is required; otherwise outputs.attentions is None
model = BertModel.from_pretrained('bert-base-uncased', output_attentions=True)
input_ids = torch.tensor([tokenizer.encode(text, add_special_tokens=True)])
with torch.no_grad():
    outputs = model(input_ids)
attention = outputs.attentions  # one tensor per layer: (batch, heads, seq, seq)

# Visualize attention (simplified: layer 0, head 0)
import matplotlib.pyplot as plt
plt.imshow(attention[0][0][0].numpy())
plt.title("Attention Visualization")
plt.show()
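To make the attention mechanism itself concrete, here is a minimal scaled dot-product attention computation in plain PyTorch. The query, key, and value matrices are toy random tensors (four tokens, dimension 8), not BERT's actual weights: each row of the weight matrix says how strongly one token attends to every other token.

import torch
import torch.nn.functional as F

# Toy scaled dot-product attention: 4 tokens, model dimension 8
d = 8
Q, K, V = torch.randn(4, d), torch.randn(4, d), torch.randn(4, d)
weights = F.softmax(Q @ K.T / d ** 0.5, dim=-1)  # (4, 4): attention from each token to every token
output = weights @ V                              # weighted mix of value vectors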
Slide 5: Advanced AI Learning Techniques
Zero-shot, few-shot, and many-shot learning are techniques that allow AI models to perform tasks with varying amounts of labeled examples. These approaches enable models to generalize knowledge and adapt to new scenarios quickly.
from transformers import pipeline
# Zero-shot classification
classifier = pipeline("zero-shot-classification")
sequence = "I love playing tennis."
candidate_labels = ["sports", "music", "food"]
result = classifier(sequence, candidate_labels)
print("Zero-shot classification:", result)
# Few-shot learning example (toy nearest-neighbour over a handful of labeled examples)
few_shot_examples = [
    {"text": "I enjoy playing basketball", "label": "sports"},
    {"text": "The guitar is my favorite instrument", "label": "music"}
]

def compute_similarity(text1, text2):
    # Simplified word-overlap (Jaccard) similarity; real systems use embeddings
    words1, words2 = set(text1.split()), set(text2.split())
    return len(words1 & words2) / len(words1 | words2)

def few_shot_classifier(input_text, examples):
    similarities = [compute_similarity(input_text, ex["text"]) for ex in examples]
    most_similar_index = similarities.index(max(similarities))
    return examples[most_similar_index]["label"]

input_text = "The piano is my favorite instrument"
result = few_shot_classifier(input_text, few_shot_examples)
print("Few-shot classification:", result)
Slide 6: LLM Fine-Tuning and Beyond
Fine-tuning allows pre-trained language models to adapt to specific tasks or domains. Reinforcement Learning from Human Feedback (RLHF) incorporates human preferences to guide model behavior. Parameter-Efficient Fine-Tuning (PEFT) optimizes the process by updating only a subset of model parameters.
from transformers import AutoModelForSequenceClassification, AutoTokenizer, Trainer, TrainingArguments
import torch

# Load pre-trained model and tokenizer
model_name = "bert-base-uncased"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Prepare dataset (simplified; Trainer expects items shaped as dicts of tensors)
train_texts = ["Positive review", "Negative review"]
train_labels = [1, 0]
train_encodings = tokenizer(train_texts, truncation=True, padding=True)

class ReviewDataset(torch.utils.data.Dataset):
    def __init__(self, encodings, labels):
        self.encodings = encodings
        self.labels = labels
    def __getitem__(self, idx):
        item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item
    def __len__(self):
        return len(self.labels)

train_dataset = ReviewDataset(train_encodings, train_labels)

# Fine-tuning
training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
)
trainer.train()
# PEFT example (LoRA: only small low-rank adapter matrices are trained)
from peft import get_peft_model, LoraConfig, TaskType

peft_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,
    lora_alpha=32,
    lora_dropout=0.1,
)
peft_model = get_peft_model(model, peft_config)
trainer = Trainer(model=peft_model, args=training_args, train_dataset=train_dataset)
trainer.train()
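The RLHF step mentioned above is not shown in the code; at its core is a reward model trained on human preference pairs. Below is a minimal sketch of the pairwise preference loss in plain PyTorch. The reward scores are toy tensors; in a real pipeline they come from a reward-model head, and libraries such as Hugging Face's trl handle the full training loop. The loss pushes the model to score the human-preferred response above the rejected one.

import torch
import torch.nn.functional as F

# Toy reward scores for two (chosen, rejected) response pairs
reward_chosen = torch.tensor([1.2, 0.3], requires_grad=True)
reward_rejected = torch.tensor([0.4, 0.9], requires_grad=True)

# Bradley-Terry style preference loss: -log sigmoid(r_chosen - r_rejected)
loss = -F.logsigmoid(reward_chosen - reward_rejected).mean()
loss.backward()  # gradients push chosen scores up, rejected scores down
print("Preference loss:", loss.item())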
Slide 7: AI Vulnerabilities
AI systems can be vulnerable to adversarial attacks, where carefully crafted inputs mislead the model. Watermarking and robustness strategies help protect against such vulnerabilities and ensure the integrity of AI-generated content.
import numpy as np
from sklearn.svm import SVC
from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import SklearnClassifier
# Train a simple classifier
X = np.array([[0, 0], [1, 1]])
y = np.array([0, 1])
clf = SVC(kernel="linear")
clf.fit(X, y)
# Wrap the model
classifier = SklearnClassifier(model=clf, clip_values=(0, 1))
# Create an attack
attack = FastGradientMethod(estimator=classifier, eps=0.2)
# Generate adversarial examples
X_test = np.array([[0.5, 0.5]])
X_adv = attack.generate(X_test)
print("Original prediction:", clf.predict(X_test))
print("Adversarial prediction:", clf.predict(X_adv))
# Simple watermarking example (toy: a visible marker string, not a statistical watermark)
def add_watermark(text, watermark):
    return text + " " + watermark

def verify_watermark(text, watermark):
    return text.endswith(watermark)

original_text = "This is an AI-generated text."
watermarked_text = add_watermark(original_text, "[AI-GENERATED]")
print("Watermarked text:", watermarked_text)
print("Watermark verified:", verify_watermark(watermarked_text, "[AI-GENERATED]"))
Slide 8: The Future of AI: XAI and Responsible AI
Explainable AI (XAI) aims to make AI decision-making processes transparent and interpretable. Responsible AI focuses on developing and using AI systems ethically, ensuring fairness, accountability, and societal benefit.
from sklearn.tree import DecisionTreeClassifier, plot_tree
import matplotlib.pyplot as plt
# Train a simple decision tree (interpretable model)
X = [[1, 2], [2, 3], [3, 1], [4, 4]]
y = [0, 0, 1, 1]
clf = DecisionTreeClassifier(max_depth=2)
clf.fit(X, y)
# Visualize the decision tree
plt.figure(figsize=(10,10))
plot_tree(clf, feature_names=['feature1', 'feature2'], class_names=['class0', 'class1'], filled=True)
plt.title("Explainable AI: Decision Tree Visualization")
plt.show()
# Responsible AI: Fairness assessment
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Create a simple dataset (column names must match the names passed below)
df = pd.DataFrame({
    'feature': [0, 1, 1, 0],
    'protected_attribute': [1, 1, 0, 0],
    'outcome': [1, 1, 0, 0],
})
dataset = BinaryLabelDataset(
    df=df,
    label_names=['outcome'],
    protected_attribute_names=['protected_attribute'],
    favorable_label=1,
    unfavorable_label=0
)
metric = BinaryLabelDatasetMetric(
    dataset,
    unprivileged_groups=[{'protected_attribute': 0}],
    privileged_groups=[{'protected_attribute': 1}]
)
print("Disparate Impact:", metric.disparate_impact())
print("Statistical Parity Difference:", metric.statistical_parity_difference())
Slide 9: Real-Life Example: Smart Home Automation
AI and Machine Learning play crucial roles in smart home automation. Let's explore how these technologies can be used to create an intelligent thermostat system that learns from user behavior and optimizes energy consumption.
import numpy as np
from sklearn.linear_model import LinearRegression

class SmartThermostat:
    def __init__(self):
        self.model = LinearRegression()
        self.temperature_history = []
        self.time_history = []

    def record_temperature(self, temperature, time):
        self.temperature_history.append(temperature)
        self.time_history.append(time)

    def train_model(self):
        X = np.array(self.time_history).reshape(-1, 1)
        y = np.array(self.temperature_history)
        self.model.fit(X, y)

    def predict_temperature(self, time):
        return self.model.predict([[time]])[0]

    def adjust_temperature(self, current_time, desired_temp):
        predicted_temp = self.predict_temperature(current_time)
        # Override the prediction only when it strays more than 1 degree from the target
        if abs(predicted_temp - desired_temp) > 1:
            return desired_temp
        return predicted_temp

# Simulate smart thermostat usage
thermostat = SmartThermostat()

# Record temperature data (simulated daily temperature cycle)
for hour in range(24):
    temperature = 20 + 5 * np.sin(hour / 24 * 2 * np.pi)
    thermostat.record_temperature(temperature, hour)

thermostat.train_model()

# Adjust temperature
current_time = 14  # 2 PM
desired_temp = 22
adjusted_temp = thermostat.adjust_temperature(current_time, desired_temp)
print(f"Current time: {current_time}:00")
print(f"Desired temperature: {desired_temp}°C")
print(f"Adjusted temperature: {adjusted_temp:.2f}°C")
Slide 10: Natural Language Processing in Customer Service
Natural Language Processing (NLP) enables computers to understand and respond to human language. In customer service, NLP powers chatbots that can efficiently handle customer queries, improving response times and customer satisfaction.
import random

class CustomerServiceBot:
    def __init__(self):
        self.intents = {
            "greeting": ["Hello!", "Hi there!", "Welcome!"],
            "farewell": ["Goodbye!", "Have a great day!", "Thank you for contacting us!"],
            "password_reset": ["To reset your password, please follow these steps: 1. Go to the login page. 2. Click 'Forgot Password'. 3. Enter your email address. 4. Follow the instructions in the email you receive."],
            "order_status": ["To check your order status: 1. Log in to your account. 2. Go to 'Order History'. 3. Find your order and click 'Track Package'."],
            "unknown": ["I'm not sure about that. Can you please rephrase your question?", "I don't have that information. Would you like to speak with a human representative?"]
        }

    def classify_intent(self, user_input):
        # Simple keyword matching; production chatbots use trained intent classifiers
        user_input = user_input.lower()
        if "hello" in user_input or "hi" in user_input:
            return "greeting"
        elif "bye" in user_input or "goodbye" in user_input:
            return "farewell"
        elif "password" in user_input and "reset" in user_input:
            return "password_reset"
        elif "order" in user_input and "status" in user_input:
            return "order_status"
        else:
            return "unknown"

    def respond(self, user_input):
        intent = self.classify_intent(user_input)
        return random.choice(self.intents[intent])

# Example usage
bot = CustomerServiceBot()
print(bot.respond("Hello, I need help"))
print(bot.respond("How do I reset my password?"))
print(bot.respond("What's the status of my order?"))
print(bot.respond("Goodbye"))
Slide 11: Computer Vision in Agriculture
Computer Vision, a field of AI that enables machines to interpret visual information, has found valuable applications in agriculture. It can be used for crop monitoring, disease detection, and yield prediction, helping farmers make data-driven decisions.
import cv2
import numpy as np

class CropMonitor:
    def __init__(self):
        self.healthy_color = np.array([45, 100, 50])  # HSV green color
        self.color_range = 30

    def analyze_crop_health(self, image_path):
        image = cv2.imread(image_path)
        if image is None:
            raise FileNotFoundError(f"Could not read image: {image_path}")
        hsv_image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
        # Clamp bounds to valid pixel range and use uint8 as cv2.inRange expects
        lower_bound = np.clip(self.healthy_color - self.color_range, 0, 255).astype(np.uint8)
        upper_bound = np.clip(self.healthy_color + self.color_range, 0, 255).astype(np.uint8)
        mask = cv2.inRange(hsv_image, lower_bound, upper_bound)
        healthy_area = cv2.countNonZero(mask)
        total_area = image.shape[0] * image.shape[1]
        health_percentage = (healthy_area / total_area) * 100
        return health_percentage

# Example usage (assuming an image file exists)
monitor = CropMonitor()
health_score = monitor.analyze_crop_health("crop_image.jpg")
print(f"Crop health score: {health_score:.2f}%")
# Note: This is a simplified example. Real-world applications would use
# more sophisticated algorithms and consider additional factors.
Slide 12: Reinforcement Learning in Robotics
Reinforcement Learning (RL) is a type of machine learning where an agent learns to make decisions by interacting with an environment. In robotics, RL can be used to teach robots complex tasks without explicit programming.
import numpy as np

class RobotArm:
    def __init__(self):
        self.position = 0
        self.goal = 5
        self.actions = [-1, 0, 1]  # move left, stay, move right

    def reset(self):
        self.position = 0
        return self.position

    def step(self, action):
        self.position += self.actions[action]
        self.position = max(0, min(self.position, 10))  # keep within bounds
        reward = -abs(self.position - self.goal)
        done = self.position == self.goal
        return self.position, reward, done

class QLearningAgent:
    def __init__(self, action_space):
        self.q_table = {}
        self.action_space = action_space
        self.learning_rate = 0.1
        self.discount_factor = 0.95
        self.epsilon = 0.1

    def get_action(self, state):
        # Epsilon-greedy: explore with probability epsilon, otherwise exploit
        if np.random.random() < self.epsilon:
            return np.random.choice(self.action_space)
        if state not in self.q_table:
            self.q_table[state] = np.zeros(len(self.action_space))
        return np.argmax(self.q_table[state])

    def update_q_table(self, state, action, reward, next_state):
        if state not in self.q_table:
            self.q_table[state] = np.zeros(len(self.action_space))
        if next_state not in self.q_table:
            self.q_table[next_state] = np.zeros(len(self.action_space))
        current_q = self.q_table[state][action]
        next_max_q = np.max(self.q_table[next_state])
        new_q = current_q + self.learning_rate * (reward + self.discount_factor * next_max_q - current_q)
        self.q_table[state][action] = new_q

# Training loop (simplified)
env = RobotArm()
agent = QLearningAgent(action_space=range(3))
for episode in range(1000):
    state = env.reset()  # start each episode from the initial position
    done = False
    while not done:
        action = agent.get_action(state)
        next_state, reward, done = env.step(action)
        agent.update_q_table(state, action, reward, next_state)
        state = next_state
print("Training complete. Q-table:", agent.q_table)
Slide 13: Ethical Considerations in AI Development
As AI systems become more prevalent and influential, it's crucial to consider the ethical implications of their development and deployment. This includes issues of privacy, fairness, transparency, and accountability.
class EthicalAIFramework:
    def __init__(self):
        self.principles = [
            "Fairness",
            "Transparency",
            "Privacy",
            "Accountability",
            "Safety and Security"
        ]

    def assess_fairness(self, model, test_data, sensitive_attributes):
        # Placeholder for fairness assessment logic
        pass

    def explain_decision(self, model, input_data):
        # Placeholder for model explainability logic
        pass

    def ensure_privacy(self, data):
        # Placeholder for privacy-preserving techniques
        pass

    def track_decisions(self, model, input_data, output):
        # Placeholder for decision logging and auditing
        pass

    def evaluate_safety(self, model, test_scenarios):
        # Placeholder for safety evaluation logic
        pass

# Example usage
ethical_framework = EthicalAIFramework()

# Assessing fairness
# ethical_framework.assess_fairness(my_model, test_data, ['gender', 'race'])

# Explaining a decision
# explanation = ethical_framework.explain_decision(my_model, user_input)

# Ensuring data privacy
# anonymized_data = ethical_framework.ensure_privacy(sensitive_user_data)

# Tracking model decisions
# ethical_framework.track_decisions(my_model, user_input, model_output)

# Evaluating model safety
# safety_score = ethical_framework.evaluate_safety(my_model, safety_test_scenarios)

print("Ethical AI Principles:", ethical_framework.principles)
Slide 14: Additional Resources
For those interested in diving deeper into AI and machine learning, here are some valuable resources:
- arXiv.org: A repository of electronic preprints of scientific papers in various fields, including AI and ML.
  - Machine Learning section: https://arxiv.org/list/cs.LG/recent
  - Artificial Intelligence section: https://arxiv.org/list/cs.AI/recent
- Coursera: Online learning platform offering AI and ML courses from top universities.
- Google AI: Provides educational resources, research papers, and tools for AI development.
- OpenAI: Offers research papers, blog posts, and resources on cutting-edge AI technologies.
- MIT OpenCourseWare: Free online courses on various AI and ML topics.
- IEEE Xplore: Digital library providing access to technical literature in engineering and technology.
- Association for Computing Machinery (ACM) Digital Library: Comprehensive collection of computing and information technology resources.
Remember to verify the credibility and recency of information from these sources, as the field of AI is rapidly evolving.