This boilerplate is meant to be used with Hapify. To learn more about Hapify setup, please refer to https://www.hapify.io/get-started.
This boilerplate provides an API built with Slim 4, MySQL and Docker.
- Option 1: Clone and configure this boilerplate using the command `hpf new --boilerplate slim_php_tractr`.
- Option 2: Clone this repository and change the project id in the file `hapify.json` by running the command `hpf use`.

Then generate the code from your Hapify project using `hpf generate`.
If you want to see an example of a generated boilerplate, you can download this one.
You should edit the `.gitignore` file and remove its last lines before committing.
This API should be used with Docker and Docker Compose.
Run the installation scripts to create the SQL structure and insert an admin user:

- Install dependencies: `docker-compose run --rm composer`
- Start the database: `docker-compose up -d mysql`
- Wait about 30 seconds to allow MySQL to start properly before running the next command.
- Set up the database: `docker-compose run --rm php php app/cmd/setup/index.php`
- Insert the admin user: `docker-compose run --rm php php app/cmd/insert-admin/index.php`

You may need to change the admin fields in the file `app/cmd/insert-admin/admin.php`, depending on your user structure.

Alternatively, run everything in one command:

`docker-compose run --rm composer && docker-compose run --rm php bash -c "sleep 30 && php app/cmd/setup/index.php && php app/cmd/insert-admin/index.php"`
The login and password of the admin user are defined in the file `app/cmd/insert-admin/admin.php` ([email protected] / admin).
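The exact contents of `app/cmd/insert-admin/admin.php` depend on your generated user model. As an illustration only, the file might return an array like the one below; the field names and values are assumptions, not the boilerplate's actual defaults:

```php
<?php
// Hypothetical sketch of app/cmd/insert-admin/admin.php.
// The real file is generated from your model; field names and values
// below are placeholders to show the general shape.
return [
    'email'    => 'admin@example.com', // placeholder login
    'password' => 'admin',             // placeholder password
    'role'     => 'admin',             // the boilerplate treats role='admin' as admin
];
```

Adjust the keys to match the fields of your own user model before running the insert-admin script.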
To start the API, run this command: `docker-compose up api php`

The API is now available on http://localhost:3000.
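Once the stack is up, you can smoke-test the API from PHP. The root path and response shape are assumptions here; adapt them to your generated endpoints:

```php
<?php
// Hypothetical smoke test: request the API root and print the HTTP status line.
// Assumes the stack started by `docker-compose up api php` is running locally.
$context = stream_context_create([
    'http' => ['ignore_errors' => true, 'timeout' => 5],
]);
$body = file_get_contents('http://localhost:3000/', false, $context);
// $http_response_header is populated by file_get_contents for HTTP URLs.
echo $http_response_header[0] ?? 'no response', "\n";
```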
To insert randomized data into the database, run this command: `docker-compose run --rm php php app/cmd/populate/index.php`
You can run phpMyAdmin to browse the database. Start the service by running `docker-compose up phpmyadmin` and go to http://localhost:8000.
This boilerplate can be used with the following front-end boilerplates:
If you need to update your data models and re-generate the code using Hapify, you should run `docker-compose run --rm php php app/cmd/setup/index.php` to update the SQL structure.
Please refer to Hapify Best Practices to learn more about Git patches within Hapify context.
Once the API has been generated, open the file `API.md`.
This boilerplate includes the following modules:

- user sessions (sessions are stored in Redis)
- user access management
This boilerplate interprets Hapify data-model field properties as described below:

- Primary: Represents the MySQL id.
- Unique: Creates a unique index and throws a 409 in case of conflict.
- Label: Allows partial-match search for this field.
- Nullable: Allows a null value to be sent to the POST and PATCH endpoints. Also defines the column as nullable in MySQL.
- Multiple: Only used for Many-to-Many entity relations. If also searchable, the search uses the `OR` operator.
- Embedded: Only used for entity relations. It joins related entities in search results. Related entities are always joined in a read response.
- Searchable: Allows this field in the query params of the search and count endpoints. If the field is also a `DateTime` or a `Number`, it adds `min` and `max` query params. It also creates an index in MySQL.
- Sortable: Allows this field's name as a value of the `_sort` query param of the search endpoint. Also creates an index in MySQL.
- Hidden: Hides this field from the API's responses.
- Internal: This field is not settable. For the POST endpoint, a suitable value is guessed for this field. You may need to edit this default value after code generation.
- Restricted: This field is allowed in the POST and PATCH endpoints only for admins. An admin is a user with the field `role='admin'`.
- Ownership: This field is used to allow the request when the action is performed by an owner. The value of the field is compared to the connected user's id. For the search and count endpoints, if also searchable, it restricts the lookup to the owner's documents.
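To make the properties above more concrete, here is a hedged sketch of how a Label field and a Searchable `DateTime` field might translate into a search query. The table name, field names, and `__min` parameter naming are assumptions for illustration, not the boilerplate's actual generated code:

```php
<?php
// Hypothetical illustration of how field properties map to a search query.
// Assumed model: `user` with a Label field `name` and a Searchable
// DateTime field `created_at`.
$params = ['name' => 'jo', 'created_at__min' => '2023-01-01 00:00:00'];

$conditions = [];
$values = [];

// Label: partial-match search, typically a LIKE with wildcards.
if (isset($params['name'])) {
    $conditions[] = '`name` LIKE ?';
    $values[] = '%' . $params['name'] . '%';
}

// Searchable DateTime: the generated API exposes min/max query params.
if (isset($params['created_at__min'])) {
    $conditions[] = '`created_at` >= ?';
    $values[] = $params['created_at__min'];
}

$sql = 'SELECT * FROM `user` WHERE ' . implode(' AND ', $conditions);
// $sql: SELECT * FROM `user` WHERE `name` LIKE ? AND `created_at` >= ?
```

Using prepared-statement placeholders (`?`) with bound values, as above, is what makes the searchable params safe to accept from the query string.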
- Add documentation of the generated API.
- Add a population script that inserts random data into the database.
- Ability to migrate the data structure when running the setup script.