- Splunk-Export-Flow-Logs
- Table Of Contents
- Introduction
- Design Goals
- Components
- Other Concepts
- Flow Log Enrichment Functionality
- Setup a Splunk Trial for Testing
- Pre-requisites
- Scope of Tutorial
- Quickstart For Setup On OCI Side
- Create Compartments and Groups
- Create a Dynamic Group
- Create Tenancy IAM policy - Flow Log Dynamic Group
- Create Compartment Level IAM Policy
- Create a VCN, Subnet & Security Lists
- Configure Cloud Shell
- The Cloud Shell
- Create a Function Application
- Getting Started with Fn Deployment
- Step-8
- Deploy the Functions
- Set the Environment Variables for Each Function
- Deploy Event Rules
The flow logs of a VCN are central to any kind of network debugging and are an extremely useful tool for understanding traffic flow patterns in a security context as well.
- Event driven
- Scalable
- Low cost
- Zero maintenance
- Secure
- Least-privilege access
- The VCN Flow Logs are what help you understand the flow of network traffic in a given subnet.
- The OCI Logging Service is required to enable flow log collection and storage for a given subnet. The OCI Logging service can be used to collect logs for other Oracle Cloud Native services as well.
- The OCI Events Service triggers Functions every time the Logging service creates an object with the flow logs of a given subnet.
- OCI Functions runs a function that enriches the flow logs and publishes events to the Splunk HTTP Event Collector endpoint.
- The Splunk HTTP Event Collector is a simplified mechanism that Splunk provides to publish events in a standard format.
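As a hedged sketch of what publishing to the HEC looks like (the URL, token, and index values below are placeholders, and the helper name is invented for illustration), an event is POSTed as a JSON envelope with a `Splunk <token>` authorization header:

```python
import json


def build_hec_request(event: dict, token: str, host: str, source: str,
                      index: str, url: str, port: int = 8088):
    """Build the URL, headers, and body for a Splunk HEC event POST.

    The envelope keys (event/host/source/index) follow the standard
    HEC event format; all concrete values here are placeholders.
    """
    endpoint = f"https://{url}:{port}/services/collector/event"
    headers = {"Authorization": f"Splunk {token}"}
    body = json.dumps({
        "event": event,      # the enriched flow-log record
        "host": host,        # e.g. oci-vcn-flow-logs
        "source": source,    # e.g. oci-hec-event-collector
        "index": index,      # e.g. main
    })
    return endpoint, headers, body


endpoint, headers, body = build_hec_request(
    {"action": "REJECT"}, "TOKEN", "oci-vcn-flow-logs",
    "oci-hec-event-collector", "main",
    "input-prd-p-hh6835czm4rp.cloud.splunk.com")
```

The actual function in the repo sends this request from inside the VCN's private subnet, which is why the egress rules later in this tutorial open port 8088.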
- VCN Flow Logs are generated for every subnet.
- A subnet is a regional resource, so this has to be set up for every region your tenancy is subscribed to.
- The architecture uses the Logging service, which populates logs of a given resource in the Object Storage of a compartment.
- Event Service triggers are scoped to a given compartment, so it is necessary to configure logging to populate logs in a single compartment.
- If there is a requirement to have the logs populated in multiple compartments (e.g. PROD, UAT, etc.), you would have to create as many event rules as there are compartments where VCN Flow Logs are written.
Raw Log
```
HEADERS
-------
<version> <srcaddr> <dstaddr> <srcport> <dstport> <protocol> <packets> <bytes> <start_time> <end_time> <action> <status>

2 172.16.2.145 172.16.2.179 82 64 13 112 441 1557424462 1557424486 REJECT OK
```
Enriched JSON
```json
{
  "version": "",
  "srcaddr": "-",
  "dstaddr": "-",
  "srcport": "-",
  "dstport": "-",
  "protocol": "-",
  "packets": "-",
  "bytes": "-",
  "start_time": "",
  "end_time": "",
  "action": "",
  "status": "",
  "compartmentId": "",
  "compartmentName": "",
  "availabilityDomain": "",
  "vcnId": "",
  "vcnName": "",
  "subnetId": "",
  "subnetName": "",
  "vnicId": "",
  "vnicName": "",
  "securityListIds": [""],
  "securityListNames": [""],
  "nsgIds": [],
  "nsgNames": []
}
```
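The first stage of the raw-to-enriched transformation above can be sketched as follows; the field order comes from the header row shown earlier, and the OCI metadata lookups (compartment, VCN, subnet, VNIC names) that fill in the remaining keys are omitted:

```python
# Field names taken from the raw flow log header row above.
FLOW_LOG_FIELDS = [
    "version", "srcaddr", "dstaddr", "srcport", "dstport", "protocol",
    "packets", "bytes", "start_time", "end_time", "action", "status",
]


def parse_flow_log(line: str) -> dict:
    """Split a space-delimited VCN flow log record into a dict keyed by
    the header names. Enrichment (compartment/VCN/subnet names) would be
    layered on afterwards via OCI API lookups."""
    return dict(zip(FLOW_LOG_FIELDS, line.split()))


record = parse_flow_log(
    "2 172.16.2.145 172.16.2.179 82 64 13 112 441 "
    "1557424462 1557424486 REJECT OK")
# record["action"] == "REJECT", record["status"] == "OK"
```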
- Splunk provides a 15-day Cloud trial with the capability to store about 5 GB worth of event data that you forward/export to Splunk. Here's the link to sign up: Splunk Sign Up
- Select Splunk Cloud, provide your details, and log in to Splunk.
- To set up the HTTP Event Collector that this solution leverages, refer to the link Setup HTTP event collector
- Points to note
- Whitelist your Tenancy for Logging
- Whitelist your Tenancy for VCN Flow Logs
Here's the link to the process for Cloud Native LA.
- This tutorial does not help you set up OCI Logging in the tenancy; you will receive a document from Oracle once the tenancy is whitelisted to help set up logging in the tenancy.
- The recommendation per this architecture is to have a dedicated logging compartment that collects logs from resources in multiple compartments and populates them in a single compartment.
- This reduces the number of Event Rules to be written and number of Function Deployments.
This quickstart assumes you have a working understanding of basic OCI principles around IAM and Networking, and that you know how to get around the OCI Console. Since flow logs contain sensitive network information, follow least-privilege access principles when granting access.
Burger Menu --> Identity --> Compartments | Users | Groups
- Create a compartment `flow-log-compartment`
- Create a dynamic group `flow-log-dg`
- Write appropriate IAM policies at the tenancy level and compartment level.
Burger Menu --> Identity --> Dynamic Groups
- Create a dynamic group `flow-log-dg`. Instances that meet the criteria defined by any of these rules will be included in the group:

```
ANY {resource.type = 'fnfunc', resource.compartment.id = [flow-log-compartment OCID]}
```
This policy allows the flow logs to be enriched with VCN information and compartment information based on the flow log data received.
Burger Menu --> Identity --> Policies
- Create an IAM policy `flow-log-dg-tenancy-policy` with the following policy statements in the `root` compartment:

```
Allow dynamic-group flow-log-dg to read virtual-network-family in tenancy
Allow dynamic-group flow-log-dg to read compartments in tenancy
```
Burger Menu --> Identity --> Policies
- Create an IAM policy `flow-log-dg-compartment-policy` inside the compartment `flow-log-compartment` with the following statements:

```
Allow dynamic-group flow-log-dg to read objects in compartment flow-log-compartment
Allow dynamic-group flow-log-dg to use virtual-network-family in compartment flow-log-compartment
```
Burger Menu --> Networking --> Virtual Cloud Networks
- Use VCN Quick Start to create a VCN `flow-log-vcn`.
- Create only a private subnet `flow-log-private-subnet`.
- Go to the `Default Security List` and delete all Stateful Ingress Rules.
- In the `Default Security List`, create Stateful Egress Rules to allow egress traffic to:
  - `0.0.0.0/0` on port `443`, protocol `TCP`
  - `0.0.0.0/0` on port `8088`, protocol `TCP`
  - `0.0.0.0/0` on port `53`, protocol `UDP`
Set up Cloud Shell in your tenancy - Link
- Oracle Cloud Infrastructure (OCI) Cloud Shell is a web browser-based terminal accessible from the Oracle Cloud Console.
- Cloud Shell provides access to a Linux shell with a pre-authenticated Oracle Cloud Infrastructure CLI, a pre-authenticated Fn CLI, an Ansible installation, and other useful tools for following Oracle Cloud Infrastructure service tutorials and labs.
Burger Menu --> Developer Services --> Functions
- Create a Function Application `flow-log-app` in the compartment `flow-log-compartment`, selecting `flow-log-vcn` and the private subnet.
- If you are not an IAM policy expert, just create the policies as shown in the Function pre-requisites.
- Set up Papertrail or the OCI Logging service to debug function executions if required. See Setup PaperTrail to check it out.
Click on the Getting Started icon after the Function Application creation is done.
Follow the on-screen steps for simplified Fn deployment. While you have the option of a local Fn development environment, I'd recommend using Cloud Shell if you simply want to deploy functions.
Follow the steps until Step-7.
Instead of creating a new function, we are deploying an existing one. Clone the repo in Cloud Shell:

```
git clone https://github.com/vamsiramakrishnan/splunk-export-logs.git
```

Each folder within the repo represents a function. Go to each folder and deploy the function using `fn --verbose deploy`:

```
cd splunk-export-logs
cd enrich-flow-logs
fn --verbose deploy --app flow-log-app
```
The deploy automatically triggers an `fn build` and an `fn push` to the container registry repo set up for the functions.
Set these environment variables for each function, one after the other; they configure how the function reaches Splunk.
| Fn-Name | Parameter Name | Description | Example |
|---|---|---|---|
| enrich-flow-logs | source_source_name | The source name that you would like Splunk to see | oci-hec-event-collector |
| enrich-flow-logs | source_host_name | The source hostname that you would like Splunk to see | oci-vcn-flow-logs |
| enrich-flow-logs | splunk_url | Splunk Cloud URL (prepend `input` to the beginning of your Splunk Cloud URL; do not add any `http`/`https` prefix) | input-prd-p-hh6835czm4rp.cloud.splunk.com |
| enrich-flow-logs | splunk_hec_token | The token that is unique to that HEC | TOKEN |
| enrich-flow-logs | splunk_index_name | The index into which you'd like these logs to get aggregated | main |
| enrich-flow-logs | splunk_hec_port | The listener port of the HEC endpoint of Splunk | 8088 |
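Inside the function, these parameters would typically be read from the environment. A minimal sketch, assuming the variable names match the table above (the empty-string and port defaults here are illustrative, not part of the repo):

```python
import os


def load_splunk_config() -> dict:
    """Read the Splunk HEC settings that the function expects as
    configuration. Defaults are placeholders for illustration only."""
    return {
        "source_name": os.environ.get("source_source_name", ""),
        "host_name": os.environ.get("source_host_name", ""),
        "url": os.environ.get("splunk_url", ""),
        "token": os.environ.get("splunk_hec_token", ""),
        "index": os.environ.get("splunk_index_name", "main"),
        # The HEC listener port; 8088 matches the table above.
        "port": int(os.environ.get("splunk_hec_port", "8088")),
    }


config = load_splunk_config()
```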
- Create an Event Rule in the compartment that gets triggered by the service `Object Storage` on `Object - Create`, under the condition that the object bears the attribute `all_flows`.
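When the rule fires, the function receives the Events payload describing the newly created object. As a sketch of pulling the bucket and object name out of it (the exact field names are an assumption based on the CloudEvents-style envelope OCI Events delivers, and the sample values are invented):

```python
def extract_object_info(event: dict) -> dict:
    """Pull the Object Storage namespace, bucket, and object name out of
    an OCI Events payload. Field names (data.resourceName,
    data.additionalDetails.bucketName/namespace) are assumptions."""
    data = event.get("data", {})
    details = data.get("additionalDetails", {})
    return {
        "namespace": details.get("namespace"),
        "bucket": details.get("bucketName"),
        "object": data.get("resourceName"),
    }


# Hypothetical sample payload for illustration.
sample = {
    "eventType": "com.oraclecloud.objectstorage.createobject",
    "data": {
        "resourceName": "all_flows_2019-05-09.log",
        "additionalDetails": {
            "bucketName": "flow-logs",
            "namespace": "mytenancy",
        },
    },
}
info = extract_object_info(sample)
```

The function would then fetch that object from Object Storage, parse and enrich each flow log record, and forward the results to the Splunk HEC endpoint.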