This is a step-by-step guide for installing CERN Analysis Preservation on your machine. There are two possibilities for setting up your own development version: a Bare Installation with python virtualenvwrapper, and a Docker Installation.
CERN Analysis Preservation is based on Invenio v3.0 alpha, which requires some additional software packages:
For example, on Debian GNU/Linux, you can install them as follows:
sudo apt-get install elasticsearch postgresql rabbitmq-server redis-server
Now, add the following lines to your elasticsearch.yml (on Debian GNU/Linux the full path is /etc/elasticsearch/elasticsearch.yml):
# CAP CONFIGURATION
cluster.name: cap
discovery.zen.ping.multicast.enabled: false
http.port: 9200
http.publish_port: 9200
In order to use PostgreSQL you need to start the database server. This is very operating-system specific, so you should check how it works on yours. When the server is running, switch to the default PostgreSQL user and create a user who is allowed to create databases:
createuser -d $Username
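On Debian GNU/Linux, for example, this could look as follows (the service invocation is an assumption for a typical setup; adapt it to your operating system):
# start the PostgreSQL server, then create the user as the default "postgres" user
sudo service postgresql start
sudo -u postgres createuser -d $Username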
Finally, do a system-wide install (see below for how to do a local install enclosed inside your virtual environment instead) of the Sass preprocessor by following the Sass web guide and running:
sudo npm install -g [email protected] [email protected] uglify-js requirejs
Let's start by cloning the repository:
git clone https://github.com/cernanalysispreservation/analysispreservation.cern.ch.git cap
Everything else will be installed inside a Python virtualenv for easy maintenance and encapsulation of the required libraries. From inside your cap folder you can at any time choose whichever virtual environment you want to work on (just type workon virtualenv_installed), or you can create a new one.
To do the latter, create a new virtual environment to hold our CAP instance from inside the repository folder:
cd cap
mkvirtualenv cap
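In later sessions you can re-enter this environment at any time:
# re-activate the "cap" virtualenv created above
workon cap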
Install the CAP package from inside your cap repository folder and run npm to install the necessary JavaScript assets the Invenio modules depend on:
pip install -r requirements.txt
cap npm
cdvirtualenv var/cap-instance/static
npm install bower
npm install
Build the assets from your repository folder:
cd -
cap collect -v
cap assets build
python ./scripts/schemas.py
Start Elasticsearch in the background:
elasticsearch &
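To verify that Elasticsearch is running with the configuration above, you can query it from the command line; the response should report the cluster name cap:
# sanity check: Elasticsearch answers on the configured port
curl http://localhost:9200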
Note: Instead of the following steps you may want to run ./scripts/init.sh.
Create a database to hold persistent data:
cap db init
cap db create
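If you want to double-check that the database was created, you can list all databases the server knows about:
# the newly created database should appear in this listing
psql -l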
Create test user accounts and roles with which you can log in later:
cap users create [email protected] -a --password infoinfo
cap users create [email protected] -a --password alicealice
cap users create [email protected] -a --password atlasatlas
cap users create [email protected] -a --password cmscms
cap users create [email protected] -a --password lhcblhcb
cap roles create [email protected]
cap roles create [email protected]
cap roles create [email protected]
cap roles create [email protected]
cap roles create [email protected]
cap roles add [email protected] [email protected]
cap roles add [email protected] [email protected]
cap roles add [email protected] [email protected]
cap roles add [email protected] [email protected]
cap roles add [email protected] [email protected]
info is a superuser, alice is an ALICE user, atlas is an ATLAS user, cms is a CMS user and lhcb is an LHCb user.
Create some basic collections for Elasticsearch:
cap collections create CERNAnalysisPreservation
cap collections create CMS -p CERNAnalysisPreservation
cap collections create CMSQuestionnaire -p CMS -q '_type:cmsquestionnaire'
cap collections create CMSAnalysis -p CMS -q '_type:cmsanalysis'
cap collections create LHCb -p CERNAnalysisPreservation
cap collections create LHCbAnalysis -p LHCb -q '_type:lhcbanalysis'
cap collections create ATLAS -p CERNAnalysisPreservation
cap collections create ATLASWorkflows -p ATLAS -q '_type:atlasworkflows'
cap collections create ATLASAnalysis -p ATLAS -q '_type:atlasanalysis'
cap collections create ALICE -p CERNAnalysisPreservation
Create the index in Elasticsearch using the mappings:
cap index init
Create a location for files:
cap files location local var/data --default
Now you are ready to run the server.
If you want to populate the database with example records simply run:
# For creating demo records with schema validation
cap fixtures records
# For creating demo records without validation ( --force )
cap fixtures records -f
To run an HTTPS server you will have to create a certificate. This needs to be done only once, from inside your repository folder:
openssl genrsa 4096 > ssl.key
openssl req -key ssl.key -new -x509 -days 365 -sha256 -batch > ssl.crt
The certificate will be valid for 365 days.
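If you want to inspect the generated certificate, e.g. its subject and validity dates, you can run:
# print the certificate's subject and its notBefore/notAfter dates
openssl x509 -in ssl.crt -noout -subject -dates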
Start a redis server in the background:
redis-server &
Start the web application locally in debug mode:
gunicorn -b 127.0.0.1:5000 --certfile=ssl.crt --keyfile=ssl.key cap.wsgi:application --workers 9 --log-level debug
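As a quick smoke test, you can request the login page from the command line; the -k flag makes curl accept the self-signed certificate:
# fetch only the response headers from the running server
curl -k -I https://localhost:5000/app/login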
Now you can log in locally in your browser by going to https://localhost:5000/app/login and entering one of the user credentials created above, e.g. user [email protected] with password infoinfo.
You can specify the Python version for the virtual environment on creation as follows (e.g. to use Python 2.7):
mkvirtualenv -p /usr/bin/python2.7 cap
You do not need to install Sass and all the npm dependencies globally on your system. You can install them inside your virtual environment so they will only be accessible from within it. Simply add:
export GEM_HOME="$VIRTUAL_ENV/gems"
export GEM_PATH=""
export PATH="$GEM_HOME/bin:$PATH"
export npm_config_prefix=$VIRTUAL_ENV
to the postactivate script in your .virtualenvs folder and, after creating your virtual environment, run:
cdvirtualenv
gem install sass
npm -g install [email protected] [email protected] uglify-js requirejs
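To confirm that npm and gem installs now land inside the virtual environment rather than system-wide, you can check where they point:
echo $npm_config_prefix   # should print the path of your $VIRTUAL_ENV
gem environment gemdir    # should point inside $VIRTUAL_ENV/gems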
If you have trouble with the setup, check if you are missing one of the following requirements, e.g. on Debian GNU/Linux:
sudo apt-get install npm ruby gcc python-virtualenvwrapper
The version of Python 2 given by python --version or python2 --version should be greater than 2.7.10.
If you encounter a problem with requirements that do not match, it may be because the Python eggs are not included in your virtualenv; update them by running:
pip install -r requirements.txt
If you have trouble indexing the database try:
cap db destroy
cap db init
and if that does not work try:
curl -XDELETE 'http://localhost:9200/_all'
cap db init
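After wiping the Elasticsearch indices this way, you will most likely have to recreate them, as in the setup above:
cap index init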
For the Docker Installation: first, install docker-engine and docker-compose on your machine.
Second, build the CERN Analysis Preservation images, using the development configuration:
docker-compose -f docker-compose-dev.yml build
Third, start the CERN Analysis Preservation application:
docker-compose -f docker-compose-dev.yml up -d
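You can check that all containers came up correctly before continuing:
# list the state of the services defined in the development compose file
docker-compose -f docker-compose-dev.yml ps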
Fourth, create database and initialise default collections and users:
docker exec -i -t analysispreservationcernch_web_1 /code/scripts/init.sh
Fifth, populate the database with some example records (optional):
docker exec -i -t analysispreservationcernch_web_1 cap fixtures records -f
Finally, see the site in action:
firefox http://localhost/
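If the site does not respond, the web container's logs are a good starting point; assuming the service is named web in docker-compose-dev.yml, as the container name above suggests:
docker-compose -f docker-compose-dev.yml logs web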