
How to integrate CORTX with Google Compute Engine and Strapi

List of Contents

Prerequisites

What is Strapi

Strapi is an open-source headless CMS that lets you build a website without writing your own backend service: you can define a database, store images, and expose an API with very little effort. By default, however, Strapi keeps uploaded images in its own public folder, which is fragile; for example, when you migrate a Strapi instance you also have to migrate the public folder. Fortunately, Strapi can integrate with S3-compatible storage, and this guide shows how to point it at CORTX.

Video demo


Install cortx-s3server in Google Compute Engine

For cortx-s3server to work, we first need to create an image specifically for Google Compute Engine, because at the time of writing cortx-s3server can only be built on CentOS 7.8.2003 with kernel 3.10.0-1127. So first of all, open VirtualBox and create a virtual machine:

create virtual machine

You can name it whatever you want; the important part is to set the type to Linux and the version to Other Linux (64-bit), then click Next

ram

Set the memory size to 2048 MB, because the CentOS installer will not run with the default memory, then click Next

create virtual

Select Create a virtual hard disk now, then click Create

choose vmdk

Choose VMDK, then click Next

dynamically allocated

Select Dynamically allocated, then click Next

storage

Set the disk size to 30 GB: installing cortx-s3server later will use roughly 15 GB, so 30 GB gives us a safe margin (you can also resize it later). After that, click Create
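
If you prefer the command line, roughly the same VM can be created with VBoxManage; this is only a sketch of the GUI steps above (the name seagate matches the VirtualBox VMs folder and the seagate.vmdk file used later, and the disk size is given in MB):

VBoxManage createvm --name seagate --ostype Linux_64 --register
VBoxManage modifyvm seagate --memory 2048
VBoxManage createmedium disk --filename seagate.vmdk --format VMDK --variant Standard --size 30720
VBoxManage storagectl seagate --name SATA --add sata
VBoxManage storageattach seagate --storagectl SATA --port 0 --device 0 --type hdd --medium seagate.vmdk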

start

Now click Start to boot the VM

choose your iso

Choose the CentOS 7.8.2003 minimal ISO, then click Start

centos

Click install centos 7

continue

Click continue

installation destination

Click on installation destination

done

Click on ATA VBOX HARDDISK then click done

begin

Click on begin installation

set

Set the password and user as you wish and wait for the installation to complete

finish

Click on finish configuration

reboot

click on reboot

login

After you log in to CentOS, run this command:

vi /etc/default/grub

Remove the rhgb and quiet kernel command-line arguments and add console=ttyS0,38400n8d, just like in the image above, then type :wq to save and close the vi editor
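
For reference, here is a rough sketch of the result and the command that regenerates the GRUB config so the change takes effect on the next boot (the other arguments shown are just examples; keep whatever else is already on your line, and this assumes the standard BIOS boot layout on CentOS 7):

# example /etc/default/grub line after removing rhgb and quiet:
GRUB_CMDLINE_LINUX="crashkernel=auto console=ttyS0,38400n8d"

# regenerate the GRUB configuration so the new arguments are used:
grub2-mkconfig -o /boot/grub2/grub.cfg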

qemu

Now go to your command line (if you're on Windows, I recommend using WSL) and run this command in the terminal:

sudo apt-get install qemu -y

Now go to C:\Users\<your windows username>\VirtualBox VMs\seagate, open a terminal there, and run this command:

qemu-img convert -f vmdk seagate.vmdk  -O raw disk.raw
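
Optionally, you can sanity-check the conversion before compressing; qemu-img info should report the file format as raw:

qemu-img info disk.raw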

Next, compress the resulting disk.raw file using this command:

tar -cSzf centos782003.tar.gz disk.raw

Now go to your Google Cloud Storage dashboard and upload centos782003.tar.gz
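
If you'd rather upload from the command line, gsutil can do the same thing (the bucket name is a placeholder for your own Cloud Storage bucket):

gsutil cp centos782003.tar.gz gs://<your bucket name>/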

upload

Here is a screenshot after the upload to Google Cloud Storage succeeds

image

Now go to the Google Compute Engine Images dashboard and click Create Image

image

Fill in a name of your choice and select the Cloud Storage tar archive that you created earlier
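
Alternatively, the image can be created with gcloud; the image name below is just an example, and the source URI points at the archive uploaded in the previous step:

gcloud compute images create centos-7-8-2003-cortx --source-uri gs://<your bucket name>/centos782003.tar.gz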

Now click on VM instances and create a virtual machine by clicking Create Instance

8gb

Pick a machine type with 8 GB of memory

change

Change the boot disk to the image you created earlier

allow

Allow both HTTP and HTTPS traffic, then click Create
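
A rough gcloud equivalent of these instance settings (the instance name, zone, and machine type are assumptions; e2-standard-2 provides 8 GB of memory, and the tags correspond to the HTTP/HTTPS firewall checkboxes):

gcloud compute instances create cortx-s3server \
    --zone us-central1-a \
    --machine-type e2-standard-2 \
    --image centos-7-8-2003-cortx \
    --tags http-server,https-server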

external

Copy your external IP address, then open PuTTY

ssh

Paste the IP address into the Host Name field and click Open to log in over SSH, then follow the tutorial here
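
If you're not on Windows, plain OpenSSH works just as well as PuTTY (replace the placeholders with the user you created during installation and the instance's external IP):

ssh <your username>@<your external ip address>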

Install Strapi

Run this command:

yarn create strapi-app my-project --quickstart

Then stop the Strapi process in the terminal and run this command:

yarn add strapi-provider-upload-aws-s3

Create my-project/config/plugins.js and fill in this code:

module.exports = ({ env }) => ({
  upload: {
    provider: 'aws-s3',
    providerOptions: {
      // Point the provider at the CORTX S3 endpoint instead of AWS
      s3ForcePathStyle: true,
      endpoint: 'http://<your external ip address>',
      sslEnabled: false,
      accessKeyId: env('AWS_ACCESS_KEY_ID'),
      secretAccessKey: env('AWS_SECRET_ACCESS_KEY'),
      region: env('AWS_REGION'),
      params: {
        Bucket: env('AWS_BUCKET'),
        StorageClass: env('AWS_S3_STORAGE_CLASSES'),
      },
      logger: console,
    },
  },
});

Create a .env file in the root of the Strapi project folder that we created earlier and add the following:

HOST=0.0.0.0
PORT=1337
AWS_ACCESS_KEY_ID=<your cortx access key id>
AWS_SECRET_ACCESS_KEY=<your cortx secret access key>
AWS_REGION=US
AWS_BUCKET=<your cortx bucket name>
AWS_S3_STORAGE_CLASSES=S3 Standard
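
Make sure the bucket named in AWS_BUCKET actually exists on the CORTX S3 server before uploading anything. Assuming the aws CLI on the instance is already configured with your CORTX credentials, the bucket can be created against the CORTX endpoint like this:

aws s3 mb s3://<your cortx bucket name> --endpoint-url http://<your external ip address>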

Now you can go and upload some images in Strapi

strapi

Click on Media Library

assets

Click Upload assets and upload some images, then go back to your PuTTY session and run this command:

aws s3 ls s3://<your bucket name>

file

If you see a listing like the one in the image above, you have successfully uploaded from Strapi to cortx-s3server on your Google Compute Engine instance

Errors you may encounter

Wrong kernel

kernel

You might run into an error like this. To fix it, run these commands one by one:

curl https://linuxsoft.cern.ch/cern/centos/7/updates/x86_64/Packages/Packages/kernel-devel-3.10.0-1127.el7.x86_64.rpm -o kernel.rpm
rpm -i kernel.rpm
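
As a quick sanity check (not part of the original steps), confirm the running kernel and the newly installed headers match:

uname -r
rpm -q kernel-devel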

Wrong Kafka URL in the documentation

When you try to run this command:

curl "http://cortx-storage.colo.seagate.com/releases/cortx/third-party-deps/centos/centos-7.8.2003-2.0.0-latest/commons/kafka/kafka-2.13_2.7.0-el7.x86_64.rpm" -o kafka.rpm

you will get this error:

Could not resolve host: cortx-storage.colo.seagate.com; Unknown error

To fix that, download Kafka 2.7.0 from the Apache archive instead and run these commands one by one:

mkdir kafka
cd kafka
curl -L https://archive.apache.org/dist/kafka/2.7.0/kafka_2.13-2.7.0.tgz -o kafka.tgz
tar xf kafka.tgz --strip 1
yum -y install java-1.8.0-openjdk
cd ..
echo "export PATH=$PATH:/root/kafka/bin" >> ~/.bash_profile
source ~/.bash_profile
zookeeper-server-start.sh -daemon /root/kafka/config/zookeeper.properties
kafka-server-start.sh -daemon /root/kafka/config/server.properties
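
To confirm the broker actually started, here is an optional check using the scripts bundled with Kafka 2.7 (an empty list is fine; an error means the broker is not up):

kafka-topics.sh --bootstrap-server localhost:9092 --list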

References