Leveraging reproducible research
- AngularJS
- Bootstrap
Install the client-side dependencies with bower:

bower install
Create a copy of the file `client/app/config/configSample.js` and name it `client/app/config/config.js`. You must configure the required application settings in this file, which is not part of version control:
window.__env.server = /*String containing server address*/;
window.__env.api = /*String containing base api*/;
window.__env.sizeRestriction = /*integer*/;
window.__env.disableTracking = /*true/false, default is false*/;
window.__env.enableDebug = /*true/false, default is false*/;
window.__env.piwik = /*String containing Piwik server address*/;
window.__env.userLevels = {};
window.__env.userLevels.admin = /*Integer containing the required user level for admin status*/;
window.__env.userLevels.regular = /*Integer containing the required user level for regular status*/;
window.__env.userLevels.restricted = /*Integer containing the required user level for restricted status*/;
During development it is reasonable to disable user tracking in the config file:
window.__env.disableTracking = true;
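For example, a minimal development setup might start from the sample file (the commands assume the repository root as working directory):

```bash
# create the local configuration from the sample file
cp client/app/config/configSample.js client/app/config/config.js
# then edit client/app/config/config.js and set at least window.__env.server,
# window.__env.api and the window.__env.userLevels values; for development,
# window.__env.disableTracking = true is recommended as noted above
```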
You can start all required o2r microservices (using latest images from Docker Hub) with just two commands using `docker-compose` (version >= `1.6.0`).
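You can check the installed version with:

```bash
docker-compose --version
```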
There are several `docker-compose` configurations in the directory `test` of this repository, starting a number of containers.
- `docker-compose-remote.yml` starts all microservices as well as the client as containers from Docker Hub. This is probably what you want if you simply want to run the platform.
- `docker-compose-remote-toolbox.yml` starts all microservices as well as the client as containers from Docker Hub and mounts a client configuration file suitable for typical settings when using Docker Toolbox.
- `docker-compose-db.yml` starts the required databases and configures them. While this could be integrated into the other configurations, it is a lot easier to make sure the DBs are up and running before starting the microservices. It starts the following containers:
    - `mongodb`: MongoDB
    - `elasticsearch`: Elasticsearch
    - `mongoadmin`: an instance of adminMongo at port `1234`
- `docker-compose.yml` starts all microservices as containers downloaded from o2rproject on Docker Hub and mounts the client (the repository of this file) from the host into an nginx container. The client must be built on the host!
- `docker-compose-host-nginx.yml` is a variant of the above, but the nginx must run on the host and is not run in a container.
- `docker-compose-local.yml` starts all microservices as containers that were built locally. Only useful for testing container-packaging of apps. The microservice image names are simply the name without the leading `o2r-`, so `muncher`, `bouncer`, etc. The client is mounted from the host, see above.
- `docker-compose-local-platformcontainer.yml` is the same as the previous configuration, but the client is also started in a container based on the local image named `platform`.
The configurations all use a common volume `o2r_test_storage` (with the global name `test_o2r_test_storage`, because Docker Compose prepends the name of the directory containing the compose files) and a common network `o2rnet` (with the global name `test_o2rnet`).
The volume and network can be inspected for development purposes:
docker volume ls
docker volume inspect test_o2r_test_storage
docker network ls
docker network inspect test_o2rnet
You can remove the storage volumes by running `docker-compose down -v`.
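For example, to stop a deployment started from the remote configuration and remove its volumes (adjust the file name to the configuration you actually used):

```bash
docker-compose --file test/docker-compose-remote.yml down -v
```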
Elasticsearch requires the ability to create many memory-mapped areas (mmaps) for fast access. The max map count setting checked by Elasticsearch is set too low on many computers. You must configure `vm.max_map_count` on the host to be at least `262144`, e.g. on Linux via `sysctl`. You can find instructions for all hosts (including Docker Toolbox) in the Elasticsearch docs.
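For example, on a Linux host:

```bash
# apply the setting immediately (lost after a reboot)
sudo sysctl -w vm.max_map_count=262144
# make the setting persistent across reboots
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
```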
Some of the settings to run the platform cannot be published. These must be provided at runtime using environment variables as described in the OS-specific instructions below. Not providing one of these parameters results in untested behaviour.
The parameters are as follows:
- `OAUTH_CLIENT_ID`: identifier for the platform with the auth provider
- `OAUTH_CLIENT_SECRET`: password for identification with the auth provider
- `OAUTH_URL_CALLBACK`: the URL that the authentication service redirects the user to, important to complete the authentication (starts with the machine IP when using Docker Toolbox)
- `ZENODO_TOKEN`: authentication token for Zenodo, required for shipping to Zenodo (sandbox)
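On Linux you can, for example, export the variables once in your shell instead of prefixing every command as shown below; the values here are placeholders:

```bash
# placeholders; use the credentials from your OAuth provider and Zenodo
export OAUTH_CLIENT_ID=<...>
export OAUTH_CLIENT_SECRET=<...>
export OAUTH_URL_CALLBACK=<...>
export ZENODO_TOKEN=<...>
```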
An adminMongo instance is running at http://localhost:1234. In adminMongo, please manually create a connection to the host `db`, i.e. `mongodb://db:27017`, to edit the database (click "Update" first if you edit the existing connection, then "Connect").
docker-compose --file test/docker-compose-db.yml up -d
# wait at least 8 seconds for configuration container to run.
OAUTH_CLIENT_ID=<...> OAUTH_CLIENT_SECRET=<...> OAUTH_URL_CALLBACK=<...> ZENODO_TOKEN=<...> docker-compose --file test/docker-compose.yml up
The environment variables must be set separately on Windows (PowerShell), followed by the docker-compose commands:
$env:OAUTH_CLIENT_ID = <...>
$env:OAUTH_CLIENT_SECRET = <...>
$env:OAUTH_URL_CALLBACK = <...>
$env:ZENODO_TOKEN = <...>
docker-compose --file test/docker-compose-db.yml up -d
docker-compose --file test/docker-compose-remote.yml up
The services are available at http://localhost.
When using Compose with Docker Toolbox/Machine on Windows, volume paths are no longer converted from Windows-style to Unix-style paths by default, but we need this conversion to be able to mount the Docker volume into the o2r microservices. To re-enable this conversion for docker-compose >= `1.9.0`, set the environment variable `COMPOSE_CONVERT_WINDOWS_PATHS=1`.
Also, the client's defaults (i.e. using `localhost`) do not work. We must mount a config file to point the client to the correct API location, see `test/config-toolbox.js`, and use the prepared configuration file `docker-compose-remote-toolbox.yml`.
docker-compose --file test/docker-compose-db.yml up -d
COMPOSE_CONVERT_WINDOWS_PATHS=1 OAUTH_CLIENT_ID=<...> OAUTH_CLIENT_SECRET=<...> OAUTH_URL_CALLBACK=<...> ZENODO_TOKEN=<...> docker-compose --file test/docker-compose-remote-toolbox.yml up
The services are available at http://<machine-ip>.
You can remove all o2r containers and images with the following two commands on Linux:
docker ps -a | grep o2r | awk '{print $1}' | xargs docker rm -f
docker images | grep o2r | awk '{print $3}' | xargs docker rmi --force
If you run the o2r microservices locally as a developer, it is useful to run a local nginx to make all API endpoints available under one port (`80`), and to use the same nginx to serve the application in this repository. An nginx configuration file to achieve this is `test/nginx.conf`.
#sed -i -e 's|http://o2r.uni-muenster.de/api/v1|http://localhost/api/v1|g' js/app.js
docker run --rm --name o2r-platform -p 80:80 -v $(pwd)/test/nginx.conf:/etc/nginx/nginx.conf -v $(pwd)/client:/etc/nginx/html -v $(pwd)/test:/etc/nginx/html/test nginx
If you run this in a Makefile, `$(CURDIR)` will come in handy to create the mount paths instead of using `$(pwd)`.
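A minimal sketch of such a Makefile target (the target name `serve` is made up; recipe lines must be indented with a tab):

```makefile
# hypothetical target; make expands $(CURDIR) to the absolute path of the repository root
serve:
	docker run --rm --name o2r-platform -p 80:80 \
		-v $(CURDIR)/test/nginx.conf:/etc/nginx/nginx.conf \
		-v $(CURDIR)/client:/etc/nginx/html \
		-v $(CURDIR)/test:/etc/nginx/html/test nginx
```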
If you update the metadata structure of `compendium` or `jobs` and you have already indexed these in Elasticsearch, you have to drop the Elasticsearch `o2r` index via
curl -XDELETE 'http://172.17.0.3:9200/o2r'
Otherwise, new compendia will not be indexed anymore.
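The IP address in the example is the Elasticsearch container's address on the Docker network and will differ between setups. A sketch for looking it up, assuming the default Compose container name `test_elasticsearch_1`:

```bash
# print the container's IP address on its Docker network(s)
docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' test_elasticsearch_1
```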
o2r-platform is licensed under Apache License, Version 2.0, see file LICENSE. Copyright © 2017 - o2r project.