
The AI NFT world premiere, combining fun and tech




Charles

Charles is an AI NFT, running on the Internet Computer.

Charles Image

Model

Download from https://huggingface.co/onicai/charles/tree/main

  • Files are model.bin & tok4096.bin
  • Store in the folder: ./models/out-09
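The two files can be fetched from the command line. The sketch below assumes the standard Hugging Face `resolve/main` URL layout for the `onicai/charles` repo; the `huggingface-cli download` tool is an alternative.

```shell
# Sketch: download Charles' weights and tokenizer into ./models/out-09.
# Assumes the standard Hugging Face resolve/main URL layout.
BASE_URL="https://huggingface.co/onicai/charles/resolve/main"
MODEL_DIR="./models/out-09"

mkdir -p "$MODEL_DIR"
for f in model.bin tok4096.bin; do
  # -f: fail on HTTP errors, -L: follow redirects; warn but continue if offline
  curl -fL -o "$MODEL_DIR/$f" "$BASE_URL/$f" || echo "could not fetch $f"
done
```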

Training

Training data of Charles can be found at:

https://huggingface.co/datasets/onicai/Charles-training-data

For details, see training/README.md

Run llama2.c from command line

For quick prompt testing, it is convenient to run llama2.c directly from the llama2.c GitHub repo.

git clone https://github.com/icppWorld/llama2.c
cd llama2.c

conda create --name llama2-c python=3.10
conda activate llama2-c
pip install -r requirements.txt

make run

# Example command
./run ../charles/models/out-09/model.bin -z ../charles/models/out-09/tok4096.bin -t 0.1 -p 0.9 -i "Charles loves ice cream"

Backend

Instructions in backend/README-bioniq-setup.md

charlesStorybook (Frontend)

Welcome to your new charlesStorybook project and to the Internet Computer development community. By default, creating a new project adds this README and some template files to your project directory. You can edit these template files to customize your project and to include your own code to speed up the development cycle.

To get started, you might want to explore the project directory structure and the default configuration file. Working with this project in your development environment will not affect any production deployment or identity tokens.

To learn more before you start working with charlesStorybook, see the Internet Computer documentation available online.

If you want to start working on your project right away, you might want to try the following commands:

cd charlesStorybook/
dfx help
dfx canister --help

Running the project locally

If you want to test your project locally, you can use the following commands:

# Starts the replica, running in the background
dfx start --background

# Deploys your canisters to the replica and generates your candid interface
dfx deploy

Once the job completes, your application will be available at http://localhost:4943?canisterId={asset_canister_id}.
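If you are unsure of the asset canister id, dfx can print it. A small sketch; the canister name `charlesStorybook_frontend` is an assumption here, so check the `canisters` section of dfx.json for the actual name.

```shell
# Sketch: build the local URL for the deployed asset canister.
# 'charlesStorybook_frontend' is an assumed name; check dfx.json.
CANISTER_ID=$(dfx canister id charlesStorybook_frontend 2>/dev/null || echo "<asset_canister_id>")
echo "http://localhost:4943/?canisterId=${CANISTER_ID}"
```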

If you have made changes to your backend canister, you can generate a new candid interface with

npm run generate

at any time. This is recommended before starting the frontend development server, and will be run automatically any time you run dfx deploy.

If you are making frontend changes, you can start a development server with

npm start

This will start a server at http://localhost:8080, proxying API requests to the replica at port 4943.

Note on frontend environment variables

If you are hosting frontend code somewhere without using DFX, you may need to make one of the following adjustments to ensure your project does not fetch the root key in production:

  • set DFX_NETWORK to ic if you are using Webpack
  • use your own preferred method to replace process.env.DFX_NETWORK in the autogenerated declarations
    • Setting canisters -> {asset_canister_id} -> declarations -> env_override to a string in dfx.json will replace process.env.DFX_NETWORK with the string in the autogenerated declarations
  • Write your own createActor constructor
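A minimal sketch of the env_override setting in dfx.json; the canister name charlesStorybook_frontend is an assumption, so substitute your actual asset canister name.

```json
{
  "canisters": {
    "charlesStorybook_frontend": {
      "declarations": {
        "env_override": "ic"
      }
    }
  }
}
```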
