OPS-6220-Ionos-Exporter Extension #4
Open: aebyss wants to merge 56 commits into main from DBP-ionos-exporter-expansion
dimapin reviewed Aug 13, 2024
simoncolincap approved these changes Oct 14, 2024
Description
The changes being implemented are big, sorry in advance, but I will detail them as I go and link to specific files so it is easier to check and give feedback.
If you are interested in a graphical representation of how the functions behave, check the Documentation folder with the corresponding sequence diagrams for S3 and Postgres.
I will start with ionos_scraper.go.
ionos_scraper.go
Here I am expanding upon the structures and variables already defined in master, and I defined a few helper functions.
fetch NATGateways
This fetches the NAT Gateways of a given datacenter from IONOS and returns the list; an error is returned if the gateways cannot be queried.
fetch NetworkLoadbalancers
This fetches all of the Network Load Balancers, following the same principle as the NAT Gateways.
fetch IPBlocks
With fetch IPBlocks we get all of the IP addresses in the datacenter, again following the same principle as above.
fetch ApplicationLoadbalancers
Same as the Network Load Balancers.
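All of the fetch helpers share the same shape: query IONOS for one resource type of a datacenter and return the list plus an error. A minimal sketch of that pattern (the client interface and names here are illustrative, not the actual ionoscloud SDK types):

```go
package main

import "fmt"

// natClient abstracts the single IONOS API call we need; in the real
// exporter this is the ionoscloud SDK client (hypothetical interface
// for the sake of the sketch).
type natClient interface {
	ListNATGateways(datacenterID string) ([]string, error)
}

// fetchNATGateways returns the NAT Gateways of the given datacenter,
// or an error if the gateways cannot be queried.
func fetchNATGateways(c natClient, datacenterID string) ([]string, error) {
	gateways, err := c.ListNATGateways(datacenterID)
	if err != nil {
		return nil, fmt.Errorf("fetching NAT gateways for datacenter %s: %w", datacenterID, err)
	}
	return gateways, nil
}
```

fetchNetworkLoadbalancers, fetchIPBlocks, and fetchApplicationLoadbalancers follow the same return-list-or-error shape, only with a different SDK call.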
Processing of those objects
Everything we fetch is also processed with the help of process helper functions. The values returned from those process methods are stored in a variable and then written to a map, which the Prometheus collector iterates over to expose the data on the /metrics endpoint.
One notable change: instead of os.Exit(1) I added a counter variable that updates a FailedApiRequest metric, and instead of exiting I continue. This is subject to change, but since the request is generated anew after the 15-minute scrape interval, I thought that exiting entirely would break all the other scraping functions; the exporter would be down and return empty results until it is restarted. There is another ticket working on retrying the request three times before deciding what to do.
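The exit-versus-continue change can be sketched like this (the metric is a plain counter here for illustration; the real code increments a Prometheus counter named along the lines of FailedApiRequest):

```go
package main

import "log"

// failedAPIRequests counts failed IONOS API calls instead of killing the
// exporter; in the real code this feeds a Prometheus counter metric.
var failedAPIRequests int

func scrapeDatacenters(datacenters []string, fetch func(string) error) {
	for _, dc := range datacenters {
		if err := fetch(dc); err != nil {
			// Previously: os.Exit(1), which would take down every other
			// scraper until the pod restarted. Now we count and move on;
			// the next 15-minute scrape cycle retries naturally.
			failedAPIRequests++
			log.Printf("scraping %s failed: %v", dc, err)
			continue
		}
		// process the fetched resources here ...
	}
}
```

The key property is that one failing datacenter no longer empties the /metrics output for everything else.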
postgres_scraper.go
For the Postgres metrics we proceed the same way as in ionos_scraper.go: we define structs that encapsulate the data to be processed and then exported to Prometheus. We also define two additional structs: TelemetryMetric, which stores all the values (Values is a 2D array that can hold any data type, which is why I chose interface{}), and TelemetryResponse, which captures the structure of the telemetry data.
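A sketch of why Values ends up as a 2D interface{} slice (the exact field names and JSON tags are assumptions based on this description, not copied from the code):

```go
package main

import "encoding/json"

// TelemetryMetric stores one metric series; each entry in Values is a
// [timestamp, value] pair, and since the value can arrive as a number
// or a string, the inner slice is []interface{}.
type TelemetryMetric struct {
	Metric map[string]string `json:"metric"`
	Values [][]interface{}   `json:"values"`
}

// TelemetryResponse captures the envelope of the telemetry data.
type TelemetryResponse struct {
	Data struct {
		Result []TelemetryMetric `json:"result"`
	} `json:"data"`
}

// parseTelemetry decodes a raw telemetry API response body.
func parseTelemetry(raw []byte) (TelemetryResponse, error) {
	var resp TelemetryResponse
	err := json.Unmarshal(raw, &resp)
	return resp, err
}
```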
Methods used
PostgresCollectResources
This is the main method; it contains the main infinite loop for processing clusters.
processCluster
In this function we iterate over the datacenter items and fetch the owner and the database names.
fetchTelemetryMetrics
This returns the telemetry response, which is used in the processCluster function. There we create a map in which we store all of the values that are later exposed as metrics.
s3_scraper.go
The S3 scraper uses the AWS SDK for Go to query buckets, read the objects and their data, and then expose that data to Prometheus. The scraped data covers all the methods used on those buckets (GET, PUT, POST, HEAD) as well as their request and response sizes.
Two structs are defined here: EndpointConfig and Metrics.
Variables are declared in global scope and are used later for iterating through the maps; the collector exposes that data on the /metrics endpoint.
createS3ServiceClient
Creates a session using the AWS (in this case IONOS) access and secret key, which can be generated in the Object Storage console in the DCD. The function also takes an endpoint and a region as arguments and returns a new session. This is used in S3CollectResources to establish two connections, one for the "de" endpoint and another for the "eu-central-2" endpoint.
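The two-endpoint wiring amounts to a small map keyed by region; a sketch with the session creation abstracted away (the EndpointConfig fields and the endpoint URLs are illustrative assumptions, not copied from the code):

```go
package main

// EndpointConfig holds what createS3ServiceClient needs for one
// IONOS Object Storage endpoint (field names are illustrative).
type EndpointConfig struct {
	Endpoint  string
	Region    string
	AccessKey string
	SecretKey string
}

// buildEndpoints mirrors the two connections described above: one for
// the "de" endpoint and one for "eu-central-2". The URLs below are
// placeholders for the real IONOS Object Storage endpoints.
func buildEndpoints(accessKey, secretKey string) map[string]EndpointConfig {
	return map[string]EndpointConfig{
		"de": {
			Endpoint:  "https://s3-de.example.ionos",
			Region:    "de",
			AccessKey: accessKey,
			SecretKey: secretKey,
		},
		"eu-central-2": {
			Endpoint:  "https://s3-eu-central-2.example.ionos",
			Region:    "eu-central-2",
			AccessKey: accessKey,
			SecretKey: secretKey,
		},
	}
}
```

createS3ServiceClient would then be called once per entry of this map to produce the two sessions.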
S3CollectResources
Here we have an infinite loop with a sleep/wait cycle. We check whether a map for the endpoint has already been created; if it exists, we skip creation and go straight to establishing a new connection. First we use Getenv to load the IONOS Object Storage keys and check whether they are set; if not, we return from the function, since nothing can be done without them. We also define an endpoints variable, a map of EndpointConfig values holding the necessary keys and values for those endpoints.
Furthermore, we create a semaphore channel which limits how many goroutines can run concurrently. In the main for loop we establish a client using the values of the EndpointConfig.
The bucket-listing call lists all the buckets of the current owner, which means that even buckets outside the current endpoint are listed; this is where an AccessDenied error is thrown when we are not in the same endpoint. The next step is to iterate over the buckets, launching one goroutine per bucket; to limit the number of concurrent goroutines we use a buffered channel named semaphore.
The processBucket function is similar: each object is processed in its own goroutine.
Other functions implemented are getBucketTags(), processObject(), and processLine().
In processObject we use the AWS SDK for Go to get the object, passing its bucket name and key. If we cannot get it, the error is logged and we return from the function; otherwise we create a new buffer into which we stream all the lines of the log file. When EOF is reached, we break and move on to processLine, where we check for each of the methods GET, PUT, POST, and HEAD.
Collectors
Every scraper has its own collector; for example, ionos_scraper.go has a matching ionos_collector.go to keep things readable. The collectors hold all the metrics, which must be known at compile time, and these metrics are then served on the /metrics endpoint.
There is nothing complicated about the collectors. They all have basically three methods: a constructor that defines every metric descriptor and returns a pointer to the collector (NewIonosCollector in the case of ionos_collector.go), a Describe function that writes all of those descriptors to the desc channel, and a Collect function that iterates over the IonosDatacenters map and serves those values to Prometheus.
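Stripped of the prometheus client types, the Collect step is just an iteration over the shared map that turns each entry into a sample; a plain-Go sketch of that idea (the real code sends prometheus.Metric values down a channel instead of returning strings, and the metric name and value type here are illustrative):

```go
package main

import "fmt"

// IonosDatacenters is the shared map the scraper fills and the collector
// reads (the value type here is illustrative).
var IonosDatacenters = map[string]float64{}

// collect renders one line per datacenter in the Prometheus text
// exposition style, mirroring what Collect does via the metrics channel.
func collect() []string {
	var out []string
	for name, servers := range IonosDatacenters {
		out = append(out, fmt.Sprintf(`ionos_datacenter_servers{datacenter=%q} %g`, name, servers))
	}
	return out
}
```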
Helper.go
A file where we define some structs and functions that are used throughout the exporter.
GetHeadBucket
Makes a request to check whether the permissions allow looking inside a bucket; if not, the bucket should be skipped. This way we avoid making lots of GET requests, which return a bigger payload than a HEAD request does.
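The HEAD-before-GET idea can be illustrated with plain net/http: a HEAD request returns the same status and headers as a GET but no body, so a permission failure can be detected cheaply (the real code uses the AWS SDK's HeadBucket call rather than raw HTTP; the function below is a sketch):

```go
package main

import (
	"net/http"
	"net/http/httptest"
)

// canAccessBucket issues a HEAD request and treats anything but 200 as
// "skip this bucket": the payload-free analogue of GetHeadBucket.
func canAccessBucket(url string) bool {
	resp, err := http.Head(url)
	if err != nil {
		return false
	}
	resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

// demo spins up two fake "buckets": one readable, one forbidden.
func demo() (allowedOK, deniedOK bool) {
	allowed := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	}))
	defer allowed.Close()
	denied := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusForbidden)
	}))
	defer denied.Close()
	return canAccessBucket(allowed.URL), canAccessBucket(denied.URL)
}
```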
Config.yaml
The config.yaml holds all the metrics that are useful for the Postgres cluster; I took all the descriptions from the IONOS Telemetry website.
ConfigMap.yaml
We added charts/ionos-exporter/templates/ConfigMap.yaml here to load the config.yaml.
charts/ionos-exporter/templates/deployment.yaml
Here we configure how the AWS SDK for Go credentials are handled. The access key and secret key are stored in the 1Password vault of the monitoring user and are used for the S3 scraping.
values.yaml
charts/ionos-exporter/values.yaml
Added and commented out token usage. I thought we would need a token to authenticate, but it seems to still work with username and password. Variables were also added for the S3 credentials.
Links to Tickets or other PRs
Ticket OPS-6220
Confluence documentation
#4
Approval for review