This hardening guide, and the operating system (OS) configuration it produces, is in early beta and will change. The baseline is currently version 0.1.4, and 18F is testing it for deployment into production.
Many recommended controls are already in place on a fresh install of either Ubuntu 12.04 LTS or Ubuntu 14.04 LTS. These controls are fully itemized at TBD and are covered by an open source compliance testing suite we are currently working on.
Controls were implemented on both an Ubuntu 12.04 box downloaded from the VagrantCloud and the default 64-bit Ubuntu 14.04 Amazon Machine Image (AMI).
There might be additional controls necessary for your system at the OS level, above and beyond the following. Please consult with your cybersecurity and DevOps teams.
We strongly encourage the community to help us improve this baseline given an ever-changing risk environment. We believe there is rarely any security in obscurity and that there will always be a greater level of expertise outside of our team than inside it. By doing this work in the open, we will more quickly and effectively identify flaws or potential improvements.
We've edited this guide to address both the Vagrant and Amazon Web Services (AWS) use cases. As we improve the baseline, we will likely separate the guide based on the deployment environment. For the time being, differences between Vagrant and AWS are noted in-line.
The guide is currently written presuming intermediate familiarity with Linux, the command line, Vagrant/AWS, and system administration in general. References are provided where applicable to provide additional background.
These are controls to implement in a production environment. They may not currently be appropriate for a development environment that is in constant flux. Some potential workarounds are discussed at the end of the guide. [TBD]
First and foremost we'll need a VM to work with. To do this, we'll start by getting Vagrant installed then use it to abstract away all the complications of spinning up a new VM.
Vagrant is a quick, easy way to configure and launch consistent virtual environments across a variety of platforms for test and development. As usual, Homebrew makes installing Vagrant simple for Mac OS X users.
brew install vagrant
With Vagrant installed, we can start preparing our environment.
mkdir fisma-ready-ubuntu
cd fisma-ready-ubuntu
vagrant init ubuntu/trusty64
Before we can bring a virtual machine online we need one more thing: a provider for Vagrant to work with. This guide covers both virtualbox for running locally and aws for launching your machine on Amazon EC2.
Homebrew comes in handy once again with a little help from the homebrew extension cask.
First, we'll add the cask extension.
brew install caskroom/cask/brew-cask
Then we'll use cask to set up virtualbox.
brew cask install virtualbox
Users of other operating systems can find downloads and installation instructions for Vagrant and VirtualBox at vagrantup and virtualbox, respectively.
Add a second disk and finish configuring your box for use with the virtualbox provider by replacing the contents of the newly created Vagrantfile with the following:
./Vagrantfile
Vagrant.configure(2) do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.provider "virtualbox" do |vm|
    # Create a second, dynamically allocated 40GB disk...
    file_to_disk = './disks/xvdk.vdi'
    vm.customize ['createhd',
                  '--filename', file_to_disk,
                  '--size', 40 * 1024]
    # ...and attach it to the box's SATA controller.
    vm.customize ['storageattach', :id,
                  '--storagectl', 'SATAController',
                  '--port', 1,
                  '--device', 0,
                  '--type', 'hdd',
                  '--medium', file_to_disk]
  end
end
This reconfigures the Vagrant box we just initialized to include a second, 40GB disk, which we'll start carving up in just a moment. Don't worry: this new disk won't actually take up 40GB of space. It will only consume as much space as the data we place on it through the course of this exercise, which isn't much.
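If you're curious, once the box is up you can watch the thin provisioning in action from the host side. This is a convenience sketch that assumes the disk file lands under the project directory, as configured by file_to_disk above:
ls -lh disks/xvdk.vdi   # the file size grows only as data is written to the disk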
Add the aws provider plugin to Vagrant.
vagrant plugin install vagrant-aws
Add the blank 'dummy' box which we'll use as a base for launching into AWS.
vagrant box add dummy https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box
Export your AWS credentials as environment variables.
export AWS_ACCESS_KEY=YOURAWSACCESSKEY
export AWS_SECRET_KEY=YOURAWSSECRETKEY
Add a second disk and finish configuring your box for use with the aws provider by replacing the contents of the newly created Vagrantfile with the following.
./Vagrantfile
Vagrant.configure("2") do |config|
config.vm.box = "dummy"
config.vm.provider :aws do |aws, override|
aws.keypair_name = "your-keypair-name"
aws.ami = "ami-9eaa1cf6"
override.ssh.username = "ubuntu"
override.ssh.private_key_path = "/path/to/your-keypair-name.pem"
aws.tags = {
'Name' => 'fisma-ready/ubuntu-lts'
}
aws.block_device_mapping = [{ 'DeviceName' => '/dev/xvdk', 'Ebs.VolumeSize' => 40 }]
aws.security_groups = ['your-security-group-which-allows-ssh']
end
end
There are several placeholder parameters that will need updating (the AWS CLI sketch after this list shows one way to look them up):
- your-keypair-name - The name of the AWS keypair to use with the instance.
- /path/to/your-keypair-name.pem - The path and filename of your AWS private key.
- your-security-group-which-allows-ssh - The name of an EC2 security group which allows SSH.
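If you have the AWS CLI installed and configured, one way to look up the keypair and security group names referenced above (purely a convenience, not required):
aws ec2 describe-key-pairs --query 'KeyPairs[].KeyName'                  # available keypair names
aws ec2 describe-security-groups --query 'SecurityGroups[].GroupName'    # security group names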
Let's go ahead and start our machine with vagrant up and connect to it via SSH with vagrant ssh.
vagrant up
vagrant ssh
If all goes well you'll find yourself at the prompt of a fresh Ubuntu 14.04.1 LTS (Trusty) VM.
vagrant@vagrant-ubuntu-trusty-64:~$
Since we created and attached a second disk as part of the Vagrantfile above, there's very little to do here. Just confirm the disk is present.
sudo sgdisk -p /dev/sdb
Creating new GPT entries.
...
Total free space is 83886013 sectors (40.0 GiB)
Number Start (sector) End (sector) Size Code Name
vagrant@vagrant-ubuntu-trusty-64:~$
In the AWS device namespace, occupying the sdb - sde device names can cause conflicts; within your instance, you may see these mapped to xvdb - xvde, respectively. In the AWS provider Vagrantfile above, we mapped our second disk to /dev/xvdk to avoid any potential conflicts.
The rest of the partition guidance in this section is written from the perspective of the virtualbox provider using device /dev/sdb. If you're running in AWS, simply substitute /dev/xvdk for /dev/sdb.
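A quick way to confirm which of the two device names is present on your provider (the missing one simply won't be listed):
ls -l /dev/sdb /dev/xvdk 2>/dev/null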
Before we do anything, let's make sure we're all patched up.
sudo apt-get update && sudo apt-get upgrade -y
Grab a snack, this will take a bit.
Pay special attention when partitioning. Depending on your provider, /dev/xvda or /dev/sda is the home of your system disk. Re-partitioning these devices will most assuredly break your box!
Users of the AWS provider might notice one or more additional devices at /dev/xvdb. This is your instance store, which is beyond the scope of this doc and can be safely ignored.
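If you're ever unsure which device is which, lsblk gives a quick overview of every block device, its size, and where it's mounted (device names will differ between VirtualBox and AWS):
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT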
You can now add partitions to the second disk. Check to see if it's there.
sudo sgdisk -p /dev/sdb
In this listing, you should now see:
Disk /dev/sdb: 83886080 sectors, 40.0 GiB
...
Total free space is 83886013 sectors (40.0 GiB)
Number Start (sector) End (sector) Size Code Name
Since we have 40GB to work with, we'll make each of the first three partitions 10GB. Pay special attention to the final partition: we're not specifying a set size, so it will use all remaining space on the disk.
sudo sgdisk -n 1:0:+10G /dev/sdb
sudo sgdisk -n 2:0:+10G /dev/sdb
sudo sgdisk -n 3:0:+10G /dev/sdb
sudo sgdisk -n 4:0:0 /dev/sdb
Let's have a look at our newly created partitions by running sudo sgdisk -p /dev/sdb again.
Disk /dev/sdb: 83886080 sectors, 40.0 GiB
...
Total free space is 2014 sectors (1007.0 KiB)
Number Start (sector) End (sector) Size Code Name
1 2048 20973567 10.0 GiB 8300
2 20973568 41945087 10.0 GiB 8300
3 41945088 62916607 10.0 GiB 8300
4 62916608 83886046 10.0 GiB 8300
Looks great! But all that's happened is that you've created device listings within the OS. Ubuntu still doesn't treat these as physical volumes ready for use.
Reference: A Beginner's Guide To LVM
The first thing we'll need to get started with LVM is the lvm2 package.
sudo apt-get install -y lvm2
The Logical Volume Manager (LVM) can take these devices and turn them into flexible volumes. The first step is to create physical volumes (PV).
sudo pvcreate /dev/sdb1 /dev/sdb2 /dev/sdb3 /dev/sdb4
Check your work.
sudo pvdisplay
Now we can create a volume group (VG) to contain our logical volumes (LV). We need to give our VG a name - securefolders.
sudo vgcreate securefolders /dev/sdb1 /dev/sdb2 /dev/sdb3 /dev/sdb4
If this is confusing, check out this diagram from Wikipedia and this StackExchange article.
The order of abstraction is PV > VG > LV.
Make a logical volume - this is the last abstraction, and where we will actually mount our folders. I'll begin with an LV for /tmp, which I'll call temp.
sudo lvcreate --name temp --size 10G securefolders
If you go back and run sudo vgdisplay at this point, you should see that your Free PE count has dropped.
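For example, a quick way to check the remaining free extents (exact formatting varies a bit by LVM version):
sudo vgdisplay securefolders | grep -i free    # shows the Free PE / Size line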
Abstractions are over, so let's get an actual filesystem going! At the moment, ext4 is the latest and greatest, so we'll use that.
sudo mkfs.ext4 /dev/securefolders/temp
Now for some safety checks before we start mounting. A key configuration file, /etc/fstab, will be altered, so let's make a backup.
sudo cp /etc/fstab /etc/fstab.$(date +%Y-%m-%d)
To be extra careful, let's also make a backup of all the files we're going to re-mount; rsync preserves permissions better than plain cp. Go to the top level of the filesystem and make some backup folders first.
sudo mkdir /homeBackup
sudo mkdir /varBackup
sudo rsync -aXS /home/* /homeBackup
sudo rsync -aXS /var/* /varBackup
Ok, we finally have a thing we can mount to! Let's tackle /tmp first.
sudo mount /dev/securefolders/temp /tmp
df -H
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 43G 1.4G 40G 4% /
none 4.1k 0 4.1k 0% /sys/fs/cgroup
udev 252M 13k 252M 1% /dev
tmpfs 52M 373k 52M 1% /run
none 5.3M 0 5.3M 0% /run/lock
none 257M 0 257M 0% /run/shm
none 105M 0 105M 0% /run/user
vagrant 500G 45G 455G 9% /vagrant
/dev/mapper/securefolders-temp 11G 24M 9.9G 1% /tmp
Let's put some security options on that folder.
sudo mount -o remount,nodev,nosuid,noexec /tmp
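To confirm the new options are active, findmnt (or plain mount) will show them for the mount point:
findmnt -o TARGET,SOURCE,OPTIONS /tmp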
But what happens if we reboot? We don't want to deal with creating a shell script for this - it should be part of the baseline. The /etc/fstab file will cover it for us.
sudo vi /etc/fstab
Go to the end and add the following lines.
/dev/securefolders/temp /tmp ext4 defaults,rw,nodev,nosuid,noexec 0 2
/dev/securefolders/variables /var ext4 defaults,rw,nodev,nosuid,nobootwait 0 0
/dev/securefolders/audits /var/log/audit ext4 defaults,rw,nodev,nosuid,noexec,nobootwait 0 0
/dev/securefolders/house /home ext4 defaults,rw,nodev,nosuid,noexec,nobootwait 0 0
Nice - after a reboot, everything will mount correctly with your new security options.
With all the concepts clear, let's bundle things up for the other volumes that will live in the securefolders VG.
sudo lvcreate --name variables --size 10G securefolders
sudo lvcreate --name audits --size 10G securefolders
sudo lvcreate --name house -l 100%FREE securefolders
sudo mkfs.ext4 /dev/securefolders/variables
sudo mkfs.ext4 /dev/securefolders/audits
sudo mkfs.ext4 /dev/securefolders/house
Let's break things up here. First, get /var mounted.
sudo mount /dev/securefolders/variables /var
Pausing here, we want to bind mount /var/tmp to /tmp. Beyond inheriting previous security modifications of /tmp, this keeps things tidy.
Since we just mounted a brand-new, empty filesystem on /var, /var/tmp doesn't exist there yet. Same with /var/log/audit. We'll make them both and do the necessary binding.
sudo mkdir /var/tmp
sudo mount --bind /tmp /var/tmp
sudo mkdir -p /var/log/audit
Check your binding.
mount | grep -e "^/tmp" | grep /var/tmp
Just like everything else, we need to modify our /etc/fstab to have this persist beyond a reboot.
We'll add the following:
/tmp /var/tmp none bind 0 0
Finish up the mounting.
sudo mount /dev/securefolders/audits /var/log/audit
sudo mount /dev/securefolders/house /home
Look under the hood.
df -H
...
/dev/mapper/securefolders-temp 11G 24M 9.9G 1% /tmp
/dev/mapper/securefolders-variables 11G 24M 9.9G 1% /var
/dev/mapper/securefolders-audits 11G 24M 9.9G 1% /var/log/audit
/dev/mapper/securefolders-house 11G 24M 9.9G 1% /home
One more security modification in this area - see /run/shm above? This is shared memory, a likely vector for certain attacks. Let's lock it down.
sudo mount -o remount,noexec,nosuid,nodev /run/shm
One more edit to /etc/fstab to bake it in. Add:
none /run/shm tmpfs defaults,nodev,noexec,nosuid 0 0
Now we'll restore the previously backed up contents of /var and /home and clean up afterwards.
sudo rsync -aXS /homeBackup/* /home
sudo rsync -aXS /varBackup/* /var
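Before deleting the backups, an optional sanity check: a dry-run rsync back toward the live directories should report little or nothing left to transfer if the restore succeeded.
sudo rsync -aXSn --itemize-changes /homeBackup/ /home/
sudo rsync -aXSn --itemize-changes /varBackup/ /var/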
Clean up.
sudo rm -rf /homeBackup
sudo rm -rf /varBackup
References: Need to find some
Every installed application creates a potential attack surface. A hardened OS reduces this surface to the absolute minimum needed for system functionality.
One place we can reduce the attack surface is a configuration file stored in /etc/modprobe.d, which lets us disable kernel modules we don't need.
cd /etc/modprobe.d
sudo touch 18Fhardened.conf
sudo vi 18Fhardened.conf
Add:
# Filesystems
install cramfs /bin/true
install freevxfs /bin/true
install jffs2 /bin/true
install hfs /bin/true
install hfsplus /bin/true
install squashfs /bin/true
install udf /bin/true
# Protocols
install dccp /bin/true
install sctp /bin/true
install rds /bin/true
install tipc /bin/true
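Once the file is saved, you can spot-check that a module is effectively disabled; with the install directives above, a modprobe dry run should report /bin/true rather than loading anything (cramfs shown as an example):
modprobe -n -v cramfs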
References: Ubuntu and GRUB
Boot loaders need to be protected. Anyone who is not root doesn't need to write to this file. Likely everything is already in order, but just to be sure:
sudo chmod og-wx /boot/grub/grub.cfg
We don't want anyone messing with booting, so let's create a password.
grub-mkpasswd-pbkdf2
Be sure to keep this password somewhere safe. You'll also get a hashed version of the password. Hang on to it, as we need to jump into the configuration settings again.
sudo vim /etc/grub.d/40_custom
Add:
set superusers="INSERT USER HERE"
password_pbkdf2 INSERT USER HERE <encrypted-password>
Sub out <encrypted-password> with the actual value you just got, otherwise the next command will fail. This is the value that starts with grub.pbkdf2.sha512.
Save the file and then use:
sudo update-grub
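As a quick check, the regenerated grub.cfg should now contain the password entry:
sudo grep password_pbkdf2 /boot/grub/grub.cfg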
Unless this OS is powering a router, we can harden how it handles ICMP redirects.
/etc/sysctl.conf controls these settings.
sudo vim /etc/sysctl.conf
You'll find all the controls already here, but commented out. You'll want to remove the # for the following, or add these lines if they're not there:
# Spoof protection
net.ipv4.conf.default.rp_filter=1
net.ipv4.conf.all.rp_filter=1
# Do not accept ICMP redirects (prevent MITM attacks)
net.ipv4.conf.all.accept_redirects=0
net.ipv6.conf.all.accept_redirects=0
net.ipv4.conf.default.accept_redirects=0
net.ipv6.conf.default.accept_redirects=0
net.ipv4.conf.all.secure_redirects=0
net.ipv4.conf.default.secure_redirects=0
# Do not send ICMP redirects (we are not a router)
net.ipv4.conf.all.send_redirects=0
net.ipv4.conf.default.send_redirects=0
# Do not accept IP source route packets (we are not a router)
net.ipv4.conf.all.accept_source_route=0
net.ipv6.conf.all.accept_source_route=0
net.ipv4.conf.default.accept_source_route=0
net.ipv6.conf.default.accept_source_route=0
# Log packets from Mars
net.ipv4.conf.all.log_martians=1
net.ipv4.conf.default.log_martians=1
Update your running kernel parameters to match - for each of these lines, enter:
sudo /sbin/sysctl -w [INSERT LINE HERE W/ VALUE]
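For example, for the first send_redirects setting; alternatively, sysctl -p re-reads /etc/sysctl.conf and applies every uncommented setting in one pass:
sudo /sbin/sysctl -w net.ipv4.conf.all.send_redirects=0
sudo /sbin/sysctl -p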
Then, flush!
sudo /sbin/sysctl -w net.ipv4.route.flush=1
sudo /sbin/sysctl -w net.ipv6.route.flush=1
Reference: Notes about auditing configuration
Audit strategy is highly environment and application specific. In the near future, we will post some overall best practices here, likely including a more standardized configuration for /etc/audit/auditd.conf.
auditd is great, but it can't audit processes that run before it starts - or can it?
sudo vi /etc/default/grub
Then modify the following line to read:
GRUB_CMDLINE_LINUX="audit=1"
Then run:
sudo update-grub
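After the next reboot, you can confirm the kernel actually booted with the flag:
grep -o 'audit=1' /proc/cmdline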
By default, you won't capture audit events when a user changes the system date and time, or when users change user accounts or passwords. This ain't no TARDIS and you're no Doctor. Let's capture those events by modifying /etc/audit/audit.rules.
sudo vi /etc/audit/audit.rules
Add:
Date/time:
-a always,exit -F arch=b64 -S adjtimex -S settimeofday -k time-change
-a always,exit -F arch=b32 -S adjtimex -S settimeofday -S stime -k time-change
-a always,exit -F arch=b64 -S clock_settime -k time-change
-a always,exit -F arch=b32 -S clock_settime -k time-change
-w /etc/localtime -p wa -k time-change
User/passwords:
-w /etc/group -p wa -k identity
-w /etc/passwd -p wa -k identity
-w /etc/gshadow -p wa -k identity
-w /etc/shadow -p wa -k identity
-w /etc/security/opasswd -p wa -k identity
Network stuff:
-a exit,always -F arch=b64 -S sethostname -S setdomainname -k system-locale
-a exit,always -F arch=b32 -S sethostname -S setdomainname -k system-locale
-w /etc/issue -p wa -k system-locale
-w /etc/issue.net -p wa -k system-locale
-w /etc/hosts -p wa -k system-locale
-w /etc/network -p wa -k system-locale
SELinux (likely you're using AppArmor instead, but just in case you pull SELinux packages from Debian, it's best to have this already listed):
-w /etc/selinux/ -p wa -k MAC-policy
Login and logout:
-w /var/log/faillog -p wa -k logins
-w /var/log/lastlog -p wa -k logins
-w /var/log/tallylog -p wa -k logins
Permission modifications:
-a always,exit -F arch=b64 -S chmod -S fchmod -S fchmodat -F auid>=500 -F auid!=4294967295 -k perm_mod
-a always,exit -F arch=b32 -S chmod -S fchmod -S fchmodat -F auid>=500 -F auid!=4294967295 -k perm_mod
-a always,exit -F arch=b64 -S chown -S fchown -S fchownat -S lchown -F auid>=500 -F auid!=4294967295 -k perm_mod
-a always,exit -F arch=b32 -S chown -S fchown -S fchownat -S lchown -F auid>=500 -F auid!=4294967295 -k perm_mod
-a always,exit -F arch=b64 -S setxattr -S lsetxattr -S fsetxattr -S removexattr -S lremovexattr -S fremovexattr -F auid>=500 -F auid!=4294967295 -k perm_mod
-a always,exit -F arch=b32 -S setxattr -S lsetxattr -S fsetxattr -S removexattr -S lremovexattr -S fremovexattr -F auid>=500 -F auid!=4294967295 -k perm_mod
Unauthorized access:
-a always,exit -F arch=b64 -S creat -S open -S openat -S truncate -S ftruncate -F exit=-EACCES -F auid>=500 -F auid!=4294967295 -k access
-a always,exit -F arch=b32 -S creat -S open -S openat -S truncate -S ftruncate -F exit=-EACCES -F auid>=500 -F auid!=4294967295 -k access
-a always,exit -F arch=b64 -S creat -S open -S openat -S truncate -S ftruncate -F exit=-EPERM -F auid>=500 -F auid!=4294967295 -k access
-a always,exit -F arch=b32 -S creat -S open -S openat -S truncate -S ftruncate -F exit=-EPERM -F auid>=500 -F auid!=4294967295 -k access
Collect filesystem mounts:
-a always,exit -F arch=b32 -S mount -F auid>=500 -F auid!=4294967295 -k mounts
-a always,exit -F arch=b64 -S mount -F auid>=500 -F auid!=4294967295 -k mounts
File deletion:
-a always,exit -F arch=b64 -S unlink -S unlinkat -S rename -S renameat -F auid>=500 -F auid!=4294967295 -k delete
-a always,exit -F arch=b32 -S unlink -S unlinkat -S rename -S renameat -F auid>=500 -F auid!=4294967295 -k delete
Change to sysadmin scope:
-w /etc/sudoers -p wa -k scope
Kernel loading:
-w /sbin/insmod -p x -k modules
-w /sbin/rmmod -p x -k modules
-w /sbin/modprobe -p x -k modules
-a always,exit -F arch=b64 -S init_module -S delete_module -k modules
Make the audit config immutable:
-e 2
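To load the new rules, restart auditd and list what's active. Note that with -e 2 the configuration becomes immutable until the next reboot, so make sure the rules are right first (service name as on Ubuntu 14.04):
sudo service auditd restart
sudo auditctl -l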
Only root should modify what system jobs cron runs.
sudo chown root:root /etc/crontab
sudo chmod og-rwx /etc/crontab
sudo chown root:root /etc/cron.hourly
sudo chmod og-rwx /etc/cron.hourly
sudo chown root:root /etc/cron.daily
sudo chmod og-rwx /etc/cron.daily
sudo chown root:root /etc/cron.weekly
sudo chmod og-rwx /etc/cron.weekly
sudo chown root:root /etc/cron.monthly
sudo chmod og-rwx /etc/cron.monthly
sudo chown root:root /etc/cron.d
sudo chmod og-rwx /etc/cron.d
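To verify the ownership and modes you just set, stat can print owner:group and the octal mode for each path:
stat -c '%U:%G %a' /etc/crontab /etc/cron.hourly /etc/cron.daily /etc/cron.weekly /etc/cron.monthly /etc/cron.d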
It's easier to manage a whitelist than a blacklist. Make the following additional modifications.
[As an aside, this guidance is a little off IMHO. If neither *.deny nor *.allow files exist, access is automatically restricted to root. In the case of cron, this is how Ubuntu 14.04 behaves right out of the box.]
sudo /bin/rm /etc/at.deny
sudo touch /etc/cron.allow
sudo touch /etc/at.allow
Modify both files so they have "root" listed. Then:
sudo chmod og-rwx /etc/cron.allow
sudo chmod og-rwx /etc/at.allow
sudo chown root:root /etc/cron.allow
sudo chown root:root /etc/at.allow
Script kiddies and crackers should find no safe harbor here. Let's prevent common cracking by improving our password policies.
Install a helpful module:
sudo apt-get install -y libpam-cracklib
This module will automatically add a line to /etc/pam.d/common-password with some default settings. It will read:
password requisite pam_cracklib.so retry=3 minlen=8 difok=3
Our current recommended settings are more stringent. Let's also remove difok - it's OK if characters from old passwords repeat, as long as there is significant entropy.
password requisite pam_cracklib.so retry=3 minlen=24 dcredit=-2 ucredit=-2 ocredit=-2 lcredit=-2
We can prevent password reuse by adding remember=5 to the next line:
password [success=1 default=ignore] pam_unix.so obscure use_authtok try_first_pass sha512 remember=5
To enable lockouts after repeated failures, add the following to /etc/pam.d/login:
auth required pam_tally2.so onerr=fail audit silent deny=5 unlock_time=900
Set password expiration in /etc/login.defs:
PASS_MAX_DAYS 90
PASS_MIN_DAYS 7
PASS_WARN_AGE 10
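These defaults only apply to accounts created after the change. For accounts that already exist, chage applies the same policy; substitute a real username for the placeholder below:
sudo chage --maxdays 90 --mindays 7 --warndays 10 your-username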
Modify the SSH daemon configuration at /etc/ssh/sshd_config to include the following settings:
X11Forwarding no
MaxAuthTries 4
PermitRootLogin no
PermitEmptyPasswords no
PermitUserEnvironment no
Ciphers aes128-ctr,aes192-ctr,aes256-ctr
ClientAliveInterval 600
ClientAliveCountMax 0
Lock the file permissions down:
sudo chown root:root /etc/ssh/sshd_config
sudo chmod 600 /etc/ssh/sshd_config
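Before restarting the SSH daemon, validate the configuration so a typo doesn't lock you out of the box, then restart to apply (Ubuntu 14.04 uses the ssh service name):
sudo /usr/sbin/sshd -t && sudo service ssh restart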
Add a login banner, then correct the permissions on the banner files:
sudo chmod 644 /etc/motd
sudo chmod 644 /etc/issue
sudo chmod 644 /etc/issue.net
Modified config files
/etc/default/grub
/etc/audit/audit.rules
/etc/sysctl.conf
/etc/pam.d/common-password
/etc/pam.d/login
/etc/ssh/sshd_config
/etc/login.defs
Questions? Email [email protected].