Designed for developers and system administrators, this Docker solution provides a centralized and searchable audit logging system leveraging Elasticsearch's powerful scalability and search capabilities.
- Introduction
- Getting Started
- Usage
- Running Backups
- Elasticsearch and Kibana
- License
The log-audits-to-elasticsearch project offers a robust, Docker-based solution for centralized audit logging.
Leveraging Elasticsearch's powerful search capabilities, it's designed to assist developers and system administrators in efficiently storing, searching, and managing audit logs.
This ensures high scalability for growing application needs, offering an invaluable tool for monitoring and security compliance.
Before you begin, ensure you have installed the following tools:
- Docker (version 25.0.3 or newer)
- Docker Compose (version 2.26.1 or newer)
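You can verify the installed versions with the following commands:
# Print the installed Docker and Docker Compose versions
docker --version
docker-compose --version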
Start by cloning the project repository to your local machine:
git clone git@github.com:bulletinmybeard/log-audits-to-elasticsearch.git
Navigate to the project directory and start the Docker containers. This step builds the Docker images if they're not already built or rebuilds them if any changes were made.
cd log-audits-to-elasticsearch
docker-compose up --build
The smooth operation of the log-audits-to-elasticsearch application depends on correctly configuring two essential files:
- The .env file for the Docker environments
- The config.yaml file for the FastAPI application itself
The .env file contains essential environment variables, such as Elasticsearch configuration parameters, application port settings, and other variables required by the various environments.
Duplicate the provided .env-sample file to create the .env file.
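For example, from the project root:
# Copy the sample environment file, then edit the values for your environment
cp .env-sample .env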
Just as the .env file is essential for Docker, the config.yaml file configures the operational parameters of the FastAPI application.
Create the config.yaml by copying config-sample.yaml, then customize the FastAPI application settings so it operates according to your specific requirements.
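For example:
# Copy the sample configuration and adjust the settings as needed
cp config-sample.yaml config.yaml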
Run the application either standalone, using the bash script run_dev.sh, or together with Elasticsearch and Kibana via Docker Compose.
Make sure the run_dev.sh script is executable (chmod +x run_dev.sh) and then execute it with ./run_dev.sh.
The script reads all environment variables from .env, checks whether poetry is installed, installs the Python dependencies, and runs the FastAPI application standalone.
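In short:
# Make the dev script executable and start the FastAPI application standalone
chmod +x run_dev.sh
./run_dev.sh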
Once you have Docker and Docker Compose installed on your machine, you can start the Python Audit Logger application alongside Elasticsearch and Kibana by running docker-compose up.
If you need to build or rebuild the Docker images first, append the --build flag: docker-compose up --build.
This brings up all necessary components, including Elasticsearch and Kibana, automatically set up and interconnected, providing a seamless development and testing environment.
If you want to use an external Elasticsearch instance, modify the .env file accordingly and start only the audit log service:
docker-compose up audit-logger
Request Method | Endpoint | Authentication | Body/Query Parameters | Description |
---|---|---|---|---|
GET | /health | None | None | Health check endpoint. |
POST | /create | X-API-KEY | JSON audit log | Create a single audit log entry. |
POST | /create-bulk | X-API-KEY | JSON audit logs | Create up to 500 audit log entries at once. |
POST | /create/create-bulk-auto | X-API-KEY | { "bulk_limit": 500 } (optional) | Generates up to 500 fictitious audit log entries using the Faker library. Note: only available in the development environment to prevent accidental use in production. |
POST | /search | X-API-KEY | JSON search parameters | Combine multiple search parameters and filters to run a search against the Elasticsearch index. |
Note: To use endpoints that require an X-API-KEY, define the key in the config.yaml.
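As a quick smoke test, the unauthenticated health check can be called with cURL. The port below is an assumption; use the application port configured in your .env:
# Health check — no API key required (port 8000 is an assumption; see your .env)
curl -X GET "http://localhost:8000/health"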
The log-audits-to-elasticsearch application provides a robust RESTful API tailored for systematic event and activity logging. This ensures that critical information is captured efficiently within Elasticsearch, facilitating easy retrieval and analysis.
Field | Optional/Mandatory | Default Value | Description |
---|---|---|---|
timestamp | Optional | None | The date and time when the event occurred, in ISO 8601 format. |
event_name | Mandatory | - | Name of the event. |
actor.identifier | Mandatory | - | Unique identifier of the actor. Can be an email address, username, etc. |
actor.type | Mandatory | - | Type of actor, e.g., 'user' or 'system'. |
actor.ip_address | Optional | None | IPv4 address of the actor (will be stored anonymized). |
actor.user_agent | Optional | None | User agent string of the actor's device. |
application_name | Mandatory | - | Application name. |
module | Mandatory | - | Module name. |
action | Mandatory | - | Action performed. |
comment | Optional | None | Optional comment about the event. |
context | Optional | None | The operational context in which the event occurred. |
resource.type | Optional | None | Type of the resource that was acted upon. |
resource.id | Optional | None | Unique identifier of the resource. |
operation | Optional | None | Type of operation performed. |
status | Optional | None | Status of the event. |
endpoint | Optional | None | The API endpoint or URL accessed, if applicable. |
server.hostname | Optional | None | Hostname of the server where the event occurred. |
server.vm_name | Optional | None | Name of the virtual machine where the event occurred. |
server.ip_address | Optional | None | IP address of the server (will be stored anonymized). |
meta | Optional | {} | Optional metadata about the event. |
{
"timestamp": "2024-04-06T23:02:25.934470+02:00",
"event_name": "data_deletion",
"actor": {
"identifier": "buckleyjames",
"type": "user",
"ip_address": "192.0.0.0",
"user_agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.1 (KHTML, like Gecko) Chrome/54.0.855.0 Safari/536.1"
},
"application_name": "user-management-service",
"module": "admin",
"action": "DELETE",
"comment": "Monthly housekeeping.",
"context": "admin_user_management",
"resource": {
"type": "user_data"
},
"operation": "delete",
"status": "success",
"endpoint": "/user/{user_id}/delete",
"server": {
"hostname": "server-01",
"vm_name": "vm-01",
"ip_address": "10.10.0.0"
},
"meta": {
"api-version": "1",
"app-version": "1.0.0"
}
}
To record an individual log entry, dispatch a POST request to the /create endpoint. The request must include a JSON payload detailing the specifics of the log entry.
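A sketch of such a request with cURL, reusing the mandatory fields from the example above; the port and API key are placeholders to adjust to your .env and config.yaml:
# Create a single audit log entry (port and API key are placeholders)
curl -X POST "http://localhost:8000/create" \
  -H "X-API-KEY: <your-api-key>" \
  -H "Content-Type: application/json" \
  -d '{
        "event_name": "data_deletion",
        "actor": { "identifier": "buckleyjames", "type": "user" },
        "application_name": "user-management-service",
        "module": "admin",
        "action": "DELETE"
      }'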
To log up to 500 entries in a single operation, utilize the /create-bulk endpoint. Similar to the single-entry endpoint, this requires a JSON payload containing an array of log entry details.
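For example, a bulk request with two minimal entries (assuming the payload is a plain JSON array; port and API key placeholders as before):
# Create two audit log entries in a single request
curl -X POST "http://localhost:8000/create-bulk" \
  -H "X-API-KEY: <your-api-key>" \
  -H "Content-Type: application/json" \
  -d '[
        { "event_name": "data_deletion", "actor": { "identifier": "buckleyjames", "type": "user" },
          "application_name": "user-management-service", "module": "admin", "action": "DELETE" },
        { "event_name": "data_deletion", "actor": { "identifier": "buckleyjames", "type": "user" },
          "application_name": "user-management-service", "module": "admin", "action": "DELETE" }
      ]'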
The /create/create-bulk-auto endpoint provides an automated solution for generating and logging up to 500 demo audit log entries simultaneously. Leveraging Python's Faker library, this endpoint is ideal for populating your Elasticsearch with realistic yet fictitious data for development or testing purposes.
Note that /create/create-bulk-auto is only available in the development environment to prevent accidental use in production.
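For instance (development environment only; port and API key placeholders as before):
# Generate 100 fictitious audit log entries via the Faker-backed endpoint
curl -X POST "http://localhost:8000/create/create-bulk-auto" \
  -H "X-API-KEY: <your-api-key>" \
  -H "Content-Type: application/json" \
  -d '{ "bulk_limit": 100 }'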
These endpoints serve as your gateway to effective log management, enabling you to maintain a comprehensive audit trail of system events and user activities.
To search within the audit logs, send a POST request to the /search endpoint.
This endpoint accepts a JSON payload that defines your search criteria and filters, allowing for complex queries such as range searches, text searches, and field-specific searches.
- Flexible Querying: Tailor your searches by using a variety of query parameters to refine results according to specific requirements, such as time frames, user actions, and status codes.
- Range Searches: Specify start and end dates to retrieve logs from a specific period.
- Text Searches: Utilize keywords or phrases to search within log entries for precise information.
- Field-Specific Searches: Focus your search on specific fields within the log entries for more targeted results.
Explore SEARCH_GUIDE.md for a deep dive into search query examples and instructions.
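Purely as an illustration, a query might combine a field filter with a date range. The payload field names below are assumptions rather than the actual schema; SEARCH_GUIDE.md documents the supported parameters:
# Illustrative only — the payload field names are assumptions, see SEARCH_GUIDE.md
curl -X POST "http://localhost:8000/search" \
  -H "X-API-KEY: <your-api-key>" \
  -H "Content-Type: application/json" \
  -d '{ "event_name": "data_deletion", "date_range": { "gte": "2024-04-01", "lte": "2024-04-30" } }'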
To ensure data persistence and safety, it's crucial to back up the data volumes regularly, including sa_elasticsearch_data and sa_kibana_data.
A BusyBox Docker container executes the volume_backups.sh script to perform the backups, and the backup data is stored in the backup folder for safekeeping.
To initiate the backup process through a Docker container, run the following command:
docker-compose run backup
To automate the backup process, you can schedule the backup operation using crontab. This enables the Docker container to run at predefined times without manual intervention.
First, ensure that the backup script is executable:
# Grant execute permissions to the backup script
chmod +x volume_backups.sh
Next, open your crontab file in edit mode:
# Open crontab in edit mode
crontab -e
Add the following line to your crontab to schedule a daily backup at 2 AM.
Be sure to replace /path/to/your/ with the actual path to your docker-compose.yml file:
# Schedule daily backups at 2 AM
0 2 * * * docker-compose -f /path/to/your/docker-compose.yml run backup
This setup ensures that your data volumes are backed up daily, minimizing the risk of data loss and maintaining the integrity of your system.
Elasticsearch is used for storing, searching, and analyzing audit log documents, while Kibana's frontend offers visual analytics on the data stored in Elasticsearch.
Elasticsearch can be queried directly with cURL:
# Example: Retrieve the cluster health status
curl -X GET "http://localhost:9200/_cluster/health?pretty"
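Other standard Elasticsearch APIs work the same way, for example listing the indices to confirm that audit log documents are being written (the index name depends on your configuration):
# List all indices along with document counts and sizes
curl -X GET "http://localhost:9200/_cat/indices?v"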
Documentation: elasticsearch/reference/current/getting-started.html
Access the Kibana dashboard via http://localhost:5601.
Documentation: kibana/current/introduction.html
This project is licensed under the MIT License - see the LICENSE file for details.