
[pull] master from bregman-arie:master #96

Merged · 9 commits · Sep 30, 2024
697 changes: 664 additions & 33 deletions README-zh_CN.md


23 changes: 22 additions & 1 deletion README.md
@@ -3953,10 +3953,31 @@ True

<details>
<summary>What is the workflow of retrieving data from Ceph?</summary><br><b>
The workflow is as follows:

1. The client sends a request to the Ceph cluster to retrieve data:
> **The client could be any of the following**
>> * Ceph Block Device
>> * Ceph Object Gateway
>> * Any third-party Ceph client

2. The client retrieves the latest cluster map from the Ceph Monitor
3. The client uses the CRUSH algorithm to map the object to a placement group. The placement group is then mapped to an OSD.
4. Once the placement group and the OSD daemon are determined, the client can retrieve the data directly from the appropriate OSD
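A quick way to see this mapping in practice is with the `ceph` and `rados` CLI tools (a sketch; the pool and object names are made up):

```
# show which placement group and OSDs serve a given object
ceph osd map mypool myobject

# read the object directly; the client performs the CRUSH mapping itself
rados -p mypool get myobject ./myobject.out
```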


</b></details>

<details>
- <summary>What is the workflow of retrieving data from Ceph?</summary><br><b>
+ <summary>What is the workflow of writing data to Ceph?</summary><br><b>
The workflow is as follows:

1. The client sends a request to the Ceph cluster to store data
2. The client retrieves the latest cluster map from the Ceph Monitor
3. The client uses the CRUSH algorithm to map the object to a placement group. The placement group is then assigned to a Ceph OSD Daemon dynamically.
4. The client sends the data to the primary OSD of the determined placement group. If the data is stored in an erasure-coded pool, the primary OSD is responsible for encoding the object into data chunks and coding chunks, and distributing them to the other OSDs.
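A matching sketch for the write path (same made-up names; `rados` sends the data to the placement group's primary OSD for you):

```
# write a local file as an object in the pool
rados -p mypool put myobject ./localfile.txt

# confirm the placement group and acting OSDs the object mapped to
ceph osd map mypool myobject
```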

</b></details>

<details>
2 changes: 1 addition & 1 deletion certificates/aws-cloud-practitioner.md
@@ -1,6 +1,6 @@
## AWS - Cloud Practitioner

- A summary of what you need to know for the exam can be found [here](https://codingshell.com/aws-cloud-practitioner)
+ A summary of what you need to know for the exam can be found [here](https://aws.amazon.com/certification/certified-cloud-practitioner/)

#### Cloud 101

1 change: 1 addition & 0 deletions scripts/aws s3 event triggering/README.md
@@ -0,0 +1 @@
![](./sample.png)
122 changes: 122 additions & 0 deletions scripts/aws s3 event triggering/aws_s3_event_trigger.sh
@@ -0,0 +1,122 @@
#!/bin/bash

# always document script details: version, author, purpose, and what events trigger it

###
# Author: Adarsh Rawat
# Version: 1.0.0
# Objective: Automate notifications when an object is uploaded or created in an S3 bucket.
###

# print each command as it executes (debug mode)
set -x

# all the commands below are aws cli commands (ref: abhishek veermalla, day 4-5 devops)

# store aws account id in a variable
aws_account_id=$(aws sts get-caller-identity --query 'Account' --output text)

# print the account id from the variable
echo "aws account id: $aws_account_id"

# set aws region, bucket name and other variables
aws_region="us-east-1"
aws_bucket="s3-lambda-event-trigger-bucket"
aws_lambda="s3-lambda-function-1"
aws_role="s3-lambda-sns"
email_address="[email protected]"

# create an iam role for the project; the trust policy lets lambda (plus s3 and sns) assume the role
role_response=$(aws iam create-role --role-name "$aws_role" --assume-role-policy-document '{
"Version": "2012-10-17",
"Statement": [{
"Action": "sts:AssumeRole",
"Effect": "Allow",
"Principal": {
"Service": [
"lambda.amazonaws.com",
"s3.amazonaws.com",
"sns.amazonaws.com"
]
}
}]
}')

# jq is a json parser; use it to extract the role arn from the json response and store it in a variable
role_arn=$(echo "$role_response" | jq -r '.Role.Arn')

# print the role arn
echo "Role ARN: $role_arn"

# attach permissions to the role
aws iam attach-role-policy --role-name "$aws_role" --policy-arn arn:aws:iam::aws:policy/AWSLambda_FullAccess
aws iam attach-role-policy --role-name "$aws_role" --policy-arn arn:aws:iam::aws:policy/AmazonSNSFullAccess

# create s3 bucket and get the output in a variable
bucket_output=$(aws s3api create-bucket --bucket "$aws_bucket" --region "$aws_region")

# print the output from the variable
echo "bucket output: $bucket_output"

# upload a file to the bucket
aws s3 cp ./sample.png s3://"$aws_bucket"/sample.png

# create a zip file to upload lambda function
zip -r s3-lambda.zip ./s3-lambda

# wait briefly for the newly created iam role to propagate
sleep 5

# create a lambda function
aws lambda create-function \
  --region "$aws_region" \
  --function-name "$aws_lambda" \
  --runtime "python3.8" \
  --handler "s3-lambda/s3-lambda.lambda_handler" \
  --memory-size 128 \
  --timeout 30 \
  --role "arn:aws:iam::$aws_account_id:role/$aws_role" \
  --zip-file "fileb://./s3-lambda.zip"

# add permission for the s3 bucket to invoke the lambda function
# (without this, s3 cannot deliver events and the notification configuration below is rejected)
LambdaFunctionArn="arn:aws:lambda:$aws_region:$aws_account_id:function:$aws_lambda"
aws lambda add-permission \
  --function-name "$aws_lambda" \
  --statement-id "s3-invoke" \
  --action "lambda:InvokeFunction" \
  --principal s3.amazonaws.com \
  --source-arn "arn:aws:s3:::$aws_bucket"

# configure the bucket to trigger the lambda function on object creation
aws s3api put-bucket-notification-configuration \
  --region "$aws_region" \
  --bucket "$aws_bucket" \
  --notification-configuration '{
  "LambdaFunctionConfigurations": [{
    "LambdaFunctionArn": "'"$LambdaFunctionArn"'",
    "Events": ["s3:ObjectCreated:*"]
  }]
}'

# create an sns topic and save the topic arn to a variable
topic_arn=$(aws sns create-topic --name s3-lambda-sns --output json | jq -r '.TopicArn')

# print the topic arn
echo "SNS Topic ARN: $topic_arn"

# subscribe an email address to the sns topic
aws sns subscribe \
--topic-arn "$topic_arn" \
--protocol email \
--notification-endpoint "$email_address"

# publish a test message to the sns topic
aws sns publish \
--topic-arn "$topic_arn" \
--subject "A new object created in s3 bucket" \
  --message "Hey, a new data object just got delivered into the s3 bucket $aws_bucket"
1 change: 1 addition & 0 deletions scripts/aws s3 event triggering/s3-lambda/requirements.txt
@@ -0,0 +1 @@
boto3==1.17.95
38 changes: 38 additions & 0 deletions scripts/aws s3 event triggering/s3-lambda/s3-lambda.py
@@ -0,0 +1,38 @@
import boto3
import json

def lambda_handler(event, context):

    # log the incoming event for debugging
    print(event)

    # extract relevant information from the s3 event trigger
    bucket_name = event['Records'][0]['s3']['bucket']['name']
    object_key = event['Records'][0]['s3']['object']['key']

    # perform desired operations with the uploaded file
    print(f"File '{object_key}' was uploaded to bucket '{bucket_name}'")

    # example: send a notification via sns
    # (replace <account-id> with your aws account id)
    sns_client = boto3.client('sns')
    topic_arn = 'arn:aws:sns:us-east-1:<account-id>:s3-lambda-sns'
    sns_client.publish(
        TopicArn=topic_arn,
        Subject='s3 object created !!',
        Message=f"File '{object_key}' was uploaded to bucket '{bucket_name}'"
    )

    # Example: Trigger another Lambda function
    # (useful for queueing or chaining further processing)
    # lambda_client = boto3.client('lambda')
    # target_function_name = 'my-another-lambda-function'
    # lambda_client.invoke(
    #     FunctionName=target_function_name,
    #     InvocationType='Event',
    #     Payload=json.dumps({'bucket_name': bucket_name, 'object_key': object_key})
    # )

    return {
        'statusCode': 200,
        'body': json.dumps("Lambda function executed successfully !!")
    }
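For reference, a minimal sketch of the S3 event structure the handler above reads; the field names follow the standard S3 notification format, and the values are made up:

```
# hypothetical event for local reasoning/testing (not a complete s3 notification)
fake_event = {
    "Records": [{
        "s3": {
            "bucket": {"name": "s3-lambda-event-trigger-bucket"},
            "object": {"key": "sample.png"}
        }
    }]
}
# lambda_handler(fake_event, None)  # the sns publish inside needs aws credentials
```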
Binary file added scripts/aws s3 event triggering/sample.png
28 changes: 14 additions & 14 deletions topics/aws/README.md
@@ -1,16 +1,16 @@
# AWS

- **Note**: Some of the exercises <b>cost $$$</b> and can't be performed using the free tier/resources
+ **Note**: Some of the exercises <b>cost $$$</b> and can't be performed using the free tier or resources

- **2nd Note**: Provided solutions are using the AWS console. It's recommended you'll use IaC technologies to solve the exercises (e.g. Terraform, Pulumi).<br>
+ **2nd Note**: The provided solutions are using the AWS console. It's recommended you use IaC technologies to solve the exercises (e.g., Terraform, Pulumi).<br>

- [AWS](#aws)
- [Exercises](#exercises)
- [IAM](#iam)
- [EC2](#ec2)
- [S3](#s3)
- [ELB](#elb)
- - [Auto Scaling Groups](#auto-scaling-groups)
+ - [Auto Scaling Groups] (#auto-scaling-groups)
- [VPC](#vpc)
- [Databases](#databases)
- [DNS](#dns)
@@ -24,14 +24,14 @@
- [Global Infrastructure](#global-infrastructure)
- [IAM](#iam-1)
- [EC2](#ec2-1)
- [AMI](#ami)
- [EBS](#ebs)
- [Instance Store](#instance-store)
- [EFS](#efs)
- [Pricing Models](#pricing-models)
- [Launch Template](#launch-template)
- [ENI](#eni)
- [Placement Groups](#placement-groups)
- [VPC](#vpc-1)
- [Default VPC](#default-vpc)
- [Lambda](#lambda-1)
@@ -63,7 +63,7 @@
- [SNS](#sns)
- [Monitoring and Logging](#monitoring-and-logging)
- [Billing and Support](#billing-and-support)
- [AWS Organizations](#aws-organizations)
- [Automation](#automation)
- [Misc](#misc-2)
- [High Availability](#high-availability)
@@ -3485,6 +3485,6 @@ More details are missing to determine for sure but it might be better to decoupl
<details>
<summary>What's an ARN?</summary><br><b>

- ARN (Amazon Resources Names) used for uniquely identifying different AWS resources.
- It is used when you would like to identify resource uniqely across all AWS infra.
+ ARNs (Amazon Resource Names) are used for uniquely identifying different AWS resources.
+ An ARN is used when you would like to identify a resource uniquely across all AWS infrastructure.
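For reference, the general ARN format together with a couple of illustrative examples (the account id below is made up):

```
# generic format
arn:partition:service:region:account-id:resource-type/resource-id

# examples; s3 bucket arns omit the region and account id
arn:aws:s3:::my-bucket
arn:aws:lambda:us-east-1:123456789012:function:s3-lambda-function-1
```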
</b></details>
12 changes: 11 additions & 1 deletion topics/cloud/README.md
@@ -91,6 +91,16 @@ AWS definition: "AWS Auto Scaling monitors your applications and automatically a
Read more about auto scaling [here](https://aws.amazon.com/autoscaling)
</b></details>

<details>
<summary>What is the difference between horizontal scaling and vertical scaling?</summary><br><b>

[AWS Docs](https://wa.aws.amazon.com/wellarchitected/2020-07-02T19-33-23/wat.concept.horizontal-scaling.en.html):

A "horizontally scalable" system is one that can increase capacity by adding more computers to the system. This is in contrast to a "vertically scalable" system, which is constrained to running its processes on only one computer; in such systems the only way to increase performance is to add more resources into one computer in the form of faster (or more) CPUs, memory or storage.

Horizontally scalable systems are oftentimes able to outperform vertically scalable systems by enabling parallel execution of workloads and distributing those across many different computers.
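As a concrete sketch with the AWS CLI (the resource names and sizes here are made up): scaling horizontally means adding instances, scaling vertically means resizing one:

```
# horizontal: grow an auto scaling group to 6 instances
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name my-asg \
  --desired-capacity 6

# vertical: change a (stopped) instance to a larger type
aws ec2 modify-instance-attribute \
  --instance-id i-0123456789abcdef0 \
  --instance-type '{"Value": "m5.2xlarge"}'
```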
</b></details>

<details>
<summary>True or False? Auto Scaling is about adding resources (such as instances) and not about removing resources</summary><br><b>

@@ -105,4 +115,4 @@
* Instances should have the minimal permissions needed. You don't want an instance-level incident to become an account-level incident
* Instances should be accessed through load balancers or bastion hosts. In other words, they should be off the internet (in a private subnet behind a NAT).
* Use the latest OS images with your instances (or at least apply the latest patches)
</b></details>
19 changes: 19 additions & 0 deletions topics/linux/README.md
@@ -360,6 +360,7 @@ It contains useful information about the processes that are currently running, i

<details>
<summary>What makes /proc different from other filesystems?</summary><br><b>
/proc is a special virtual filesystem in Unix-like operating systems, including Linux. Unlike ordinary filesystems it holds no data on disk: its entries are generated by the kernel on the fly and expose information about running processes and system resources as regular-looking files.
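For example:

```
ls /proc/$$          # per-process entries for the current shell
cat /proc/meminfo    # memory statistics generated by the kernel on read
```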
</b></details>

<details>
@@ -433,6 +434,10 @@ It's a bit that only allows the owner or the root user to delete or modify the fi

<details>
<summary>What is sudo? How do you set it up?</summary><br><b>
sudo is a command-line utility in Unix-like operating systems that allows users to run programs with the privileges of another user, usually the superuser (root). It stands for "superuser do".

The sudo program is installed by default in almost all Linux distributions. If you need to install sudo in Debian/Ubuntu, use the command `apt-get install sudo`.
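A minimal setup sketch (Debian/Ubuntu conventions; "alice" is a placeholder username):

```
# install sudo as root, if it is missing
apt-get install sudo

# option 1: add the user to the sudo group
usermod -aG sudo alice

# option 2: run visudo and add an explicit rule such as:
#   alice   ALL=(ALL:ALL) ALL
visudo
```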

</b></details>

<details>
@@ -2138,6 +2143,20 @@ This is a good article about the topic: https://ops.tips/blog/how-linux-creates-

<details>
<summary>You executed a script and while still running, it got accidentally removed. Is it possible to restore the script while it's still running?</summary><br><b>
It is possible to restore a script while it's still running if it has been accidentally removed: as long as the process is alive, the kernel keeps the deleted file's contents reachable through the /proc filesystem.

1. Find the process ID of the running script:
```
ps aux | grep yourscriptname.sh
```
Replace yourscriptname.sh with your script name.

2. List the process's open file descriptors under /proc/<PID>/fd/ and look for the one that points at the deleted script (Bash normally holds the script open on descriptor 255, shown with a "(deleted)" suffix), then copy its content to a new file:
```
ls -l /proc/<PID>/fd
cp /proc/<PID>/fd/255 /path_to_restore_your_file/yourscriptname.sh
```
Replace <PID> with the actual PID of the script, 255 with the descriptor you found, and /path_to_restore_your_file/yourscriptname.sh with the path where you want to restore the script.

</b></details>

<a name="questions-linux-memory"></a>