Thanks for your interest in Conjur. Before contributing, please take a moment to read and sign our Contributor Agreement. This provides patent protection for all Conjur users and allows CyberArk to enforce its license terms. Please email a signed copy to [email protected].
For general contribution and community guidelines, please see the community repo.
Before getting started, you should install some developer tools. These are not required to deploy Conjur but they will let you develop using a standardized, expertly configured environment.
- git to manage source code
- Docker to manage dependencies and runtime environments
- Docker Compose to orchestrate Docker environments
- Ruby version 3 or higher installed - native installation or using RVM.
Pushing to github is a form of publication, especially when using a public repo. It is a good idea to use a hook to check for secrets before pushing code. Follow this link to learn how to configure git checks for secrets before every push.
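The check itself can be as simple as a grep over tracked files. Below is a minimal, illustrative sketch of such a hook; the patterns are examples only, and dedicated tools such as git-secrets or gitleaks are far more thorough:

```shell
#!/bin/sh
# Minimal sketch of a pre-push secret scan (illustrative only). Save as
# .git/hooks/pre-push and make it executable. The patterns below are
# examples; dedicated tools like git-secrets or gitleaks do this better.

# Grep the given files for a few common credential patterns. Like grep,
# this returns 0 when a match (a possible secret) is found.
scan_for_secrets() {
  grep -nE 'AKIA[0-9A-Z]{16}|BEGIN (RSA|EC|OPENSSH) PRIVATE KEY|password[[:space:]]*=' "$@"
}

# Example hook body: scan all tracked files and block the push on a hit.
#   if scan_for_secrets $(git ls-files); then
#     echo 'Possible secret detected; aborting push.' >&2
#     exit 1
#   fi
```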
It's easy to get started with Conjur and Docker:
- Install dependencies (as above)
- Clone this repository
- Run the build script in your terminal:

$ ./build.sh
...
Successfully built 9a18a1396977
$ docker images | grep conjur
conjurinc/conjur  latest  a8229592474c  7 minutes ago  560.7 MB
conjur            latest  a8229592474c  7 minutes ago  560.7 MB
conjur-dev        latest  af98cb5b2a68  4 days ago     639.9 MB
Note: If you are going to debug Conjur using RubyMine IDE or Visual Studio Code IDE, see RubyMine IDE Debugging or Visual Studio Code IDE debugging respectively before setting up the development environment.
The dev directory contains a docker-compose file which creates a development environment with a database container (pg, short for postgres), and a conjur server container with source code mounted into the directory /src/conjur-server.
To use it:
- Install dependencies (as above)
- Start the container (and optional extensions):

$ cd dev
$ ./start
...
root@f39015718062:/src/conjur-server#
Once the start script finishes, you're in a Bash shell inside the Conjur server container. After starting Conjur, your instance will be configured with the following:

- Account: cucumber
- User: admin
- Password: Run conjurctl role retrieve-key cucumber:user:admin inside the container shell to retrieve the admin user API key (which is also the password)

- Run the server:

root@f39015718062:/src/conjur-server# conjurctl server
<various startup messages, then finally:>
* Listening on tcp://localhost:3000
Use Ctrl-C to stop
The conjurctl server script performs the following:
- wait for the database to be available
- create and/or upgrade the database schema according to the db/migrate directory
- find or create the token-signing key
- start the web server
You may choose to debug Conjur using pry.byebug, RubyMine, or the Visual Studio Code IDE. This will allow you to work in the debugger without the server timing out. To do so, run the following command instead of conjurctl server:

- pry.byebug: rails server -b 0.0.0.0 webrick
- RubyMine and VS Code IDE: make sure you are in /src/conjur-server and run the following command:

rdebug-ide --port 1234 --dispatcher-port 26162 --host 0.0.0.0 -- bin/rails s -b 0.0.0.0 -u webrick

- Now that the server is listening, debug the code via RubyMine's or VS Code's debugger.
- Cleanup:

$ ./stop

Running stop removes the running Docker Compose containers and the data key.
To enable a user to log into Conjur using LDAP credentials, run start
with the --authn-ldap
flag:
$ cd dev
$ ./start --authn-ldap
...
root@f39015718062:/src/conjur-server#
The --authn-ldap
flag will:
- Start an OpenLDAP container.
- Load a user alice with the password alice into the LDAP server.
- Load a policy authn-ldap/test that grants alice the ability to authenticate via http://localhost:3000/authn-ldap/test/cucumber/alice/authenticate with the password alice.
Validate authentication using the username alice
with the password alice
:
$ curl -v -k -X POST -d "alice" http://localhost:3000/authn-ldap/test/cucumber/alice/authenticate
To enable a host to log into Conjur using a GCP identity token, run start
with the --authn-gcp
flag.
For more information on how to set up the Conjur Google Cloud (GCP) authenticator, follow the official documentation.
If you are going to be debugging Conjur using RubyMine IDE, follow these steps:
-
Add a debug configuration
- Go to: Run -> Edit Configurations
- In the Run/Debug Configuration dialog, click + on the toolbar and choose “Ruby remote debug”
- Specify a name for this configuration (e.g. “debug Conjur server”)
- Specify these parameters:
- Remote host: the address of Conjur. If it's a local Docker environment, the address should be localhost; otherwise, enter the address of Conjur
- Remote port: the port RubyMine will try to connect to for its debugging protocol. The convention is 1234. If you change this, remember to also change the exposed port in docker-compose.yml and in the rdebug-ide command when running the server
- Remote root folder: /src/conjur-server
- Local port: 26162
- Local root folder: /local/path/to/conjur/repository
- Click "OK"
-
Create remote SDK
- Go to Preferences -> Ruby SDK and Gems
- In the Ruby SDK and Gems dialog, click + on the toolbar and choose “New remote...”
- Choose “Docker Compose” and specify these parameters:
- Server: Docker
- If Docker isn't configured, click "New..." and configure it.
- Configuration File(s):
./dev/docker-compose.yml
- Note: remove other docker-compose files if present.
- Service: conjur
- Environment variables: This can be left blank
- Ruby or version manager path: ruby
- Click "OK"
If you are going to be debugging Conjur using VS Code IDE, follow these steps:
- Go to: Debugger view
- Choose Ruby -> Listen for rdebug-ide from the prompt window, then you'll get the sample launch configuration in .vscode/launch.json.
- Edit the "Listen for rdebug-ide" configuration in the launch.json file:
  - remoteHost: the address of Conjur. If it's a local Docker environment, the address should be localhost; otherwise, enter the address of Conjur
  - remotePort: the port VS Code will try to connect to for its debugging protocol. The convention is 1234. If you change this, remember to also change the exposed port in docker-compose.yml and in the rdebug-ide command when running the server
  - remoteWorkspaceRoot: /src/conjur-server
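As a sketch, assuming the classic Ruby extension for VS Code (rebornix.Ruby), the edited launch.json entry might look like the following; the values simply mirror the defaults described above:

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Listen for rdebug-ide",
      "type": "Ruby",
      "request": "attach",
      "remoteHost": "localhost",
      "remotePort": "1234",
      "remoteWorkspaceRoot": "/src/conjur-server",
      "cwd": "${workspaceRoot}"
    }
  ]
}
```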
As a developer, there are a number of common scenarios when actively working on Conjur.
The ./cli script, located in the dev folder, is intended to streamline these tasks.
$ ./cli --help
NAME
cli - Development tool to simplify working with a Conjur container.
SYNOPSIS
cli [global options] command [command options] [arguments...]
GLOBAL OPTIONS
--help - Show this message
COMMANDS
exec - Steps into the running Conjur container, into a bash shell.
key - Displays the admin user API key
policy load <account> <policy/path.yml> - Loads a conjur policy into the provided account.
$ ./cli exec
root@88d43f7b3dfa:/src/conjur-server#
$ ./cli key
3xmx4tn353q4m02f8e0xc1spj8zt6qpmwv178f5z83g6b101eepwn1
$ ./cli policy load <account> <policy/path/from/project/root.yml>
For most development work, the account will be cucumber
, which is created when the development environment starts. The policy path must be inside the cyberark/conjur
project folder, and referenced from the project root.
Are you planning a change to the Conjur API? This could involve adding a new endpoint, extending an existing endpoint, or changing the response of an existing endpoint. When you make changes to the Conjur API, you must also update the Conjur OpenAPI Spec.
To prepare to make a change to the Conjur API, follow the process below:
- Clone the OpenAPI spec project and create a branch.
- Update the spec with your planned API changes and create a draft pull request; make sure it references the Conjur issue you are working on. Note: it is expected that the automated tests in your spec branch will fail, because they are running against the conjur:edge image which hasn't been updated with your API changes yet.
- Return to your clone of the Conjur project, and make your planned changes to the Conjur API following the standard branch / review / merge workflow.
- Once your Conjur changes have been merged and the new conjur:edge image has been published, rerun the automation in your OpenAPI pull request to ensure that the spec is consistent with your API changes. Have your spec PR reviewed and merged as usual.
Note: Conjur's current API version is in the API_VERSION
file and should correspond to the OpenAPI version.
The Conjur database schema is implemented as Sequel database migration files. To add a new database migration, run the following command inside the Conjur development container:
$ rails generate migration <migration_name>
...
create db/migrate/20210315172159_migration_name.rb
This creates a new file under db/migrate
with the migration name prefixed by a
timestamp.
The initial contents of the file are similar to:
Sequel.migration do
up do
...
end
down do
...
end
end
More documentation on how to write Sequel migrations is available here.
Database migrations are applied automatically when starting Conjur with the
conjurctl server
command.
Conjur has rspec
and cucumber
tests, and an automated CI Pipeline.
Note on performance testing: set WEB_CONCURRENCY: 0. This configuration is useful for recording accurate coverage data and is used in ci/docker-compose.yml and conjur/ci/test_suites/authenticators_k8s/dev/dev_conjur.template.yaml. It isn't a realistic configuration and should not be used for benchmarking.
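For instance, the setting appears in a compose file as a service environment variable. The fragment below is an assumption about the file's shape, not its actual contents:

```yaml
# Illustrative fragment of ci/docker-compose.yml (service name assumed)
services:
  conjur:
    environment:
      WEB_CONCURRENCY: 0
```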
The CI Pipeline is defined in the Jenkinsfile and documented in CI_README.md.
RSpec tests are easy to run from within the conjur
server container:
root@aa8bc35ba7f4:/src/conjur-server# rspec
Run options: exclude {:performance=>true}
Randomized with seed 62317
.............................................
Finished in 3.84 seconds (files took 3.33 seconds to load)
45 examples, 0 failures
Cucumber tests require the Conjur server to be running. It's easiest to achieve
this by starting Conjur in one container and running Cucumber from another. Run
the service in the conjur
server container:
root@aa8bc35ba7f4:/src/conjur-server# conjurctl server
...
* Listening on tcp://localhost:3000
Use Ctrl-C to stop
Then, using the dev/cli
script, step into the Conjur container to run the cukes:
$ ./cli exec
...
root@9feae5e5e001:/src/conjur-server#
When adding new test suites, please follow the guidelines in the top comments
of the file ci/test
.
To run the cukes with an OpenID Connect (OIDC) compatible environment, run cli
with the --authn-oidc
flag:
$ ./cli exec --authn-oidc
...
root@9feae5e5e001:/src/conjur-server#
Prerequisites
- A Google Cloud Platform account. To create an account see https://cloud.google.com/.
- Google Cloud SDK installed. For information on how to install see https://cloud.google.com/sdk/docs
- Access to a running Google Compute Engine instance.
- Access to a predefined Google Cloud function with the following code.
To run the cukes with a Google Cloud Platform (GCP) compatible environment, run cli
with the --authn-gcp
flag and pass the following:
- The name of a running Google Compute Engine (GCE) instance. (for example: my-gce-instance)
- The URL of the Google Cloud Function (GCF). (for example: https://us-central1-example.cloudfunctions.net/idtoken?audience=conjur/cucumber/host/demo-host)
$ ./cli exec --authn-gcp --gce [GCE_INSTANCE_NAME] --gcf [GCF_URL]
...
root@9feae5e5e001:/src/conjur-server#
When running with the --authn-gcp
flag, the cli script executes another script which does the heavy lifting of
provisioning the ID tokens (required by the tests) from Google Cloud Platform.
To run the GCP authenticator test suite:
root@9feae5e5e001:/src/conjur-server# cucumber -p authenticators_gcp cucumber/authenticators_gcp/features
Below is the list of the available Cucumber suites:
- api
- authenticators_azure
- authenticators_config
- authenticators_gcp
- authenticators_jwt
- authenticators_ldap
- authenticators_oidc
- authenticators_status
- manual-rotators
- policy
- rotators
Each of the above suites can be executed using a profile of the same name.
For example, to execute the api
suite, your command might look like the following:
root@9feae5e5e001:/src/conjur-server# cucumber --profile api # runs api cukes
root@9feae5e5e001:/src/conjur-server# cucumber --profile api cucumber/api/features/resource_list.feature
Rake tasks are easy to run from within the conjur
server container:
- Get the next available error code from errors
The output will be similar to
root@aa8bc35ba7f4:/src/conjur-server# rake error_code:next
The next available error number is 63 ( CONJ00063E )
Several cucumber tests are written to verify that Conjur works properly when authenticating to Kubernetes. These tests have hooks to run against both Openshift and Google GKE.
The cucumber tests are located under cucumber/authenticators_k8s/features
and can be run by going into the ci/test_suites/authenticators_k8s
directory and running:
$ summon -f [secrets.ocp.yml|secrets.yml] ./init_k8s.sh [openshift|gke]
$ summon -f [secrets.ocp.yml|secrets.yml] ./entrypoint.sh [openshift|gke]
- init_k8s.sh - executes a simple login to Openshift or GKE to verify credentials, as well as logging into the Docker Registry defined
- test.sh - executes the tests against the defined platform
The secrets file used for summon needs to contain the following environment variables:
- openshift
  - OPENSHIFT_USERNAME - username of an account that can create namespaces, adjust cluster properties, etc.
  - OPENSHIFT_PASSWORD - password of the account
  - OPENSHIFT_URL - the URL of the RedHat CRC cluster
    - If running this locally, use https://host.docker.internal:6443 so the docker container can talk to the CRC containers
  - OPENSHIFT_TOKEN - the login token of the above username/password
    - only needed for local execution because the docker container executing the commands can't redirect for login
    - obtained by running the following command locally after login: oc whoami -t
- gke
  - GCLOUD_CLUSTER_NAME - cluster name of the GKE environment in the cloud
  - GCLOUD_ZONE - zone of the GKE environment in the cloud
  - GCLOUD_PROJECT_NAME - project name of the GKE environment
  - GCLOUD_SERVICE_KEY - service key of the GKE environment
To execute the tests locally, a few things will have to be done:
- Openshift
  - Download and install the RedHat Code Ready Container
    - This contains all the necessary pieces to have a local version of Openshift
    - After install, copy down the kubeadmin username/password and update the secrets.ocp.yml file with the password
    - Execute oc whoami -t and update the token property
- GKE
  - Work with infrastructure to obtain a GKE environment
If the local revision of your files doesn't have a Docker image built yet, build the Docker images using the following command:
$ ./build_locally.sh <sni cert file>
- Fork the project
- Clone your fork
- Make local changes to your fork by editing files
- Commit your changes
- Push your local changes to the remote server
- Create new Pull Request
From here your pull request will be reviewed and once you've responded to all feedback it will be merged into the project. Congratulations, you're a contributor!
Use this guide to maintain consistent style across the Conjur project.
The changelog file is maintained based on Keep a Changelog guidelines.
Each accepted change to the Conjur code (documentation and website updates excepted) requires adding a changelog entry to the corresponding Added, Changed, Deprecated, Removed, Fixed and/or Security sub-section (add one as necessary) of the Unreleased section in the changelog.
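For example, an entry for a bug fix would land in the Fixed sub-section of Unreleased; the entry text below is invented purely for illustration:

```markdown
## [Unreleased]

### Fixed
- Corrected the error message shown when a policy file cannot be parsed
  (illustrative entry only)
```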
Bumping the version number after each and every change is neither required, advised, nor expected. Valid reasons to bump the version include:
- Enough changes have accumulated,
- An important feature has been implemented,
- An external project depends on one of the recent changes.
- Review the NOTICES.txt file and ensure it reflects the current set of dependencies in the Gemfile
- If a new dependency has been added, a dependency has been dropped, or a version has changed since the last tag - make sure the NOTICES file is up-to-date with the new versions
- Examine the changelog and decide on the version bump rank (major, minor, patch).
- Change the title of the Unreleased section of the changelog to the target version.
- Be sure to add the date (ISO 8601 format) to the section header.
- Add a new, empty Unreleased section to the changelog.
- Remember to update the references at the bottom of the document.
- Change VERSION file to reflect the change. This file is used by some scripts.
- Change the API_VERSION file to reflect the correct OpenAPI spec release if there has been an update to the API. If the OpenAPI spec is out of date with the current API, it will need to be updated and released before you can release this project.
- Create a branch and commit these changes (including the changes to NOTICES.txt, if there are any). "Bump version to x.y.z" is an acceptable commit message.
- Push your changes and get the PR reviewed and merged.
- Tag the version on the master branch using e.g. git tag -s v1.2.3. Note this requires you to be able to sign releases. Consult the github documentation on signing commits on how to set this up.
  - Git will ask you to enter the tag message, which should just be v1.2.3.
- Push the tag: git push origin v1.2.3.
Note: you may find it convenient to use the release
script to add the
tag. In general, deleting and changing tags should be avoided.
- Create a new release from the tag in the GitHub UI
- Add the CHANGELOG for the current version to the GitHub release description
Visit the Red Hat project page once the images have been pushed and manually choose to publish the latest release.