This project is a Ground Station platform that includes a web application, databases for user management and satellite data storage, and an API integrated with an ASN.1 compiler. The API is responsible for generating database tables and inserting data received in ASN.1 and CSV formats. The system is designed to efficiently handle satellite telemetry and telecommand data, ensuring data integrity and security.
This guide explains how to install Ansible, generate SSH keys, distribute them across the nodes, and set up a Kubernetes cluster using Ansible playbooks.
First, install Ansible on your control machine (usually your local machine or the master node).
For Ubuntu:
```bash
sudo apt update
sudo apt install ansible -y
```

For other distributions, follow the official Ansible installation guide.
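You can verify the installation with:

```bash
ansible --version
```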
To communicate with your remote nodes (master and workers) without passwords, generate SSH keys on your control machine:
```bash
ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa
```

This will create a public key (`id_rsa.pub`) and a private key (`id_rsa`) in the `~/.ssh` directory.
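If you want to inspect the public key that will be distributed to the nodes:

```bash
cat ~/.ssh/id_rsa.pub
```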
You need to distribute the SSH public key (`id_rsa.pub`) to each of the nodes (master and workers). This allows the control machine to connect to them without requiring a password each time.
You can either manually copy the SSH key to each node:
```bash
ssh-copy-id user@node_ip_address
```

Or, use Ansible to distribute the keys automatically. Ensure your hosts.ini file contains the IPs of all your nodes and the username to use when connecting to them:

```ini
[all]
master ansible_host=xxx.xxx.xxx.xxx
worker1 ansible_host=xxx.xxx.xxx.xxx
worker2 ansible_host=xxx.xxx.xxx.xxx

[master]
master

[workers]
worker1
worker2

[all:vars]
ansible_user=username
ansible_python_interpreter=/usr/bin/python3
```
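You can check that Ansible parses the inventory as expected:

```bash
ansible-inventory -i hosts.ini --list
```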
Run the provided Ansible playbook (ssh-keys.yml) to distribute the SSH keys:

```bash
ansible-playbook -i hosts.ini ssh-keys.yml
```
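With the keys in place, confirm that Ansible can reach every node without a password:

```bash
ansible all -i hosts.ini -m ping
```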
Now that SSH access is set up, you can create the Kubernetes cluster. The provided playbook (k8s-cluster.yml) configures the firewall, installs Docker, and sets up Kubernetes on all nodes. Run it to install and configure the cluster:
```bash
ansible-playbook -i hosts.ini k8s-cluster.yml --ask-become-pass
```

Note: The `--ask-become-pass` option prompts once for the sudo password, which Ansible then reuses for privilege escalation, so you do not have to enter it for each command.
This playbook will:
- Allow necessary ports through the firewall
- Install Docker and containerd
- Install Kubernetes components (`kubeadm`, `kubectl`, `kubelet`)
- Initialize the Kubernetes master node and retrieve the join command for the worker nodes
- Join the worker nodes to the master to form the cluster
- Install the Calico network plugin
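Once it completes, a quick way to confirm that the Kubernetes components are installed on every node is an ad-hoc Ansible command:

```bash
ansible all -i hosts.ini -a "kubeadm version -o short"
```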
Important: Run the following command to gain access to the Kubernetes cluster:

```bash
sudo cp -f /etc/kubernetes/admin.conf $HOME/.kube/config
```
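If the `$HOME/.kube` directory does not exist yet, create it before copying, and make the copied file owned by your user (the standard kubeadm post-install steps):

```bash
mkdir -p $HOME/.kube
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```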
To enable WireGuard on the nodes, run the following command (wait for the Calico pods to be ready first):

```bash
kubectl patch felixconfiguration default --type='merge' -p '{"spec":{"wireguardEnabled":true}}'
```
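You can confirm the setting took effect by reading it back; this should print `true`:

```bash
kubectl get felixconfiguration default -o jsonpath='{.spec.wireguardEnabled}'
```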
Once the playbook has finished running, you can verify the cluster setup by logging into the master node and running:

```bash
kubectl get nodes -o wide
```

You should see the master and worker nodes listed, indicating the cluster is running successfully.
Warning: Before deploying the application, create your own custom container image and update the image reference in the deployment files.
To deploy the application within a Kubernetes cluster, use the auto-gs.py script. This script automates the deployment process, ensuring that all components are set up correctly. The primary deployment command is:
```bash
python3 auto-gs.py -create 3 -rf 3
```

- `-create n`: Deploys the application with `n` replicas of the Cassandra pods.
- `-rf n` (optional): Specifies the replication factor for Cassandra. If not provided, a default value of 3 is used.
Important: The number of Cassandra pods must be equal to or lower than the number of Kubernetes nodes.
Important: The replication factor must be set to a value lower than the number of Cassandra nodes.
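After the script finishes, check that all pods reach the Running state (add `-n <namespace>` if your manifests use a dedicated namespace):

```bash
kubectl get pods -o wide
```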
After deployment, data can be managed either through the web application or directly by accessing the containers.
- Copy ASN.1 Files:

  ```bash
  python3 auto-gs.py -cpASN file
  ```

  Copies ASN.1 files to the `asn1scc` pod in the `/dmt/filesASN1/` directory.

- Copy CSV Files:

  ```bash
  python3 auto-gs.py -cpCSV file
  ```

  Copies CSV files to the `asn1scc` pod in the `/dmt/filesCSV/` directory.

- Open ASN.1 Compiler Console:

  ```bash
  python3 auto-gs.py -asn
  ```

  Opens an interactive console within the `asn1scc` pod.

- Open Web Application Console:

  ```bash
  python3 auto-gs.py -web
  ```

  Opens a console in the `web` pod for web application management.
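For example, a typical session might copy an ASN.1 file into the pod and then open the compiler console (`my_types.asn` is a placeholder file name):

```bash
python3 auto-gs.py -cpASN my_types.asn
python3 auto-gs.py -asn
```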
Note: The first time, you need to run `python3 manage.py createsuperuser` in the web pod.
The ASN.1 compiler is used to create database tables and insert data into Cassandra. Here are the essential commands:
- Create Data Model:

  This command compiles ASN.1 files into a data model, creating tables in the specified keyspace (a verification sketch follows this list).

  ```bash
  python3 /src/asn2dataModel.py -modulesTelecommand "DataTypes-Telecommands" -keyspace tfm -contact_points cassandra -clusterPort 9042 ./filesASN1 DataTypesTelecommands.asn DataTypes-Telemetries.asn
  ```
  Tip: The `-modulesTelecommand` parameter is optional and can be omitted if the telecommand data is not required.
- Insert Telemetry/Telecommand Data:

  This command inserts data from CSV files into the corresponding tables.

  ```bash
  python3 /src/ReadWriteTMTC/readCSV.py ./filesCSV -keyspace tfm -contact_points cassandra -clusterPort 9042 -filesTelecommands datatypes_telecommands.csv
  ```
- Create Telecommand CSV:

  Generates a CSV file from the specified tables, which can be sent as a telecommand.

  ```bash
  python3 /src/ReadWriteTMTC/createCSV.py ./filesTelecommand "datatypes_telecommands" -keyspace tfm -contact_points cassandra -clusterPort 9042 -sendTelecommands True
  ```
  Tip: The `-sendTelecommands` parameter is optional; by default it is set to `False`.
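To verify that the data model step created the tables (as mentioned in the first item above), you can query Cassandra from outside the pods; `cassandra-0` is assumed here as a typical StatefulSet pod name and may differ in your deployment:

```bash
kubectl exec -it cassandra-0 -- cqlsh -e "DESCRIBE KEYSPACE tfm"
```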
For more information, see the Compiler Usage Guide.


