
@ubiquityos gpt command #1

Merged: 58 commits, merged on Oct 18, 2024

Conversation

@Keyrxng (Contributor) commented on Jul 13, 2024

Resolves ubiquity-os/plugins-wishlist#29

I followed your prompt template and kept the system message short and sweet.

It seems the model can lose track of the question being asked, so I think it might be better to prioritize the question.

I think filling out the chat history slightly would do the trick:

  1. system
  2. user - long prompt
  3. assistant - manually inserted short acknowledgement of the context received
  4. user - directly ask the question
  5. assistant - the real API response
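The five-step sequence above could be sketched as follows. This is a hypothetical illustration, not the plugin's actual implementation; the acknowledgement text and function name are placeholders.

```typescript
// Minimal sketch of the proposed chat-history priming sequence.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

function buildChatHistory(systemMsg: string, contextPrompt: string, question: string): ChatMessage[] {
  return [
    { role: "system", content: systemMsg },                                  // 1. system
    { role: "user", content: contextPrompt },                                // 2. user: long context prompt
    { role: "assistant", content: "Understood, I have the issue context." }, // 3. manually inserted acknowledgement
    { role: "user", content: question },                                     // 4. user: directly ask the question
    // 5. the assistant's real API response is appended after the completion call
  ];
}
```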

  • Should this plugin be able to read its own comments or not?
  • Are we only going one level deep with the linked issue context?
  • Are there to be any safeguards, formatting, or anything else (not including the spec prompt template) included in the system message, or does it have free rein with little guidance like it does now?

Review threads (outdated, resolved):
  • .github/workflows/compute.yml
  • src/handlers/ask-gpt.ts
  • src/plugin.ts
  • src/types/context.ts
  • src/utils/format-chat-history.ts (2 threads)
  • src/utils/issue.ts (3 threads)
github-actions bot commented on Jul 15, 2024

Unused dependencies (1)

  • package.json: dotenv

Unused types (2)

  • src/types/github.ts: IssueComment, ReviewComment

@0x4007 (Member) commented on Sep 25, 2024

command-ask is fine for now. Your QA makes it look stable. Can we start using it? Also I want to mention that I have access to o1 from the API now.

https://platform.openai.com/docs/guides/reasoning

I'm not sure which model is best. I'm assuming o1-mini is pretty solid for our use case though.

The maximum output token limits are:
o1-preview: Up to 32,768 tokens
o1-mini: Up to 65,536 tokens
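The output-token ceilings above can be expressed as a lookup so a request never exceeds the model's limit. A minimal sketch; the helper name is hypothetical, and the numbers are the limits quoted above.

```typescript
// Maximum output tokens per model, as quoted for the o1 beta.
const MAX_OUTPUT_TOKENS: Record<string, number> = {
  "o1-preview": 32_768,
  "o1-mini": 65_536,
};

// Clamp a requested completion size to the model's ceiling (unknown models pass through).
function clampMaxTokens(model: string, requested: number): number {
  const ceiling = MAX_OUTPUT_TOKENS[model] ?? requested;
  return Math.min(requested, ceiling);
}
```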



Hi there,

I’m Nikunj, PM for the OpenAI API. We’ve been working on expanding access to the OpenAI o1 beta and we’re excited to provide API access to you today. We’ve developed these models to spend more time thinking before they respond. They can reason through complex tasks and solve harder problems than previous models in science, coding, and math.

As a trusted developer on usage tier 4, you’re invited to get started with the o1 beta today.
Read the docs
You have access to two models:

Our larger model, o1-preview, which has strong reasoning capabilities and broad world knowledge.
Our smaller model, o1-mini, which is 80% cheaper than o1-preview.

Try both models! You may find one better than the other for your specific use case. But keep in mind o1-mini is faster, cheaper, and competitive with o1-preview at coding tasks (you can see how it performs here). We’ve also written up more about these models in our blog post.

These models currently have a rate limit of 100 requests per minute for developers on usage tier 4, but we’ll be increasing rate limits soon. To get immediately notified of updates, follow @OpenAIDevs. I can’t wait to see what you build with o1—please don’t hesitate to reply with any questions.

@Keyrxng (Contributor, Author) commented on Sep 26, 2024

command-ask is fine for now. Your QA makes it look stable. Can we start using it? Also I want to mention that I have access to o1 from the API now.

https://platform.openai.com/docs/guides/reasoning

I'm not sure which model is best. I'm assuming o1-mini is pretty solid for our use case though.

o1, in my opinion, is too slow compared to 4o; I'd prefer to use 4o. Honestly, the reasoning models on the OpenAI website have not impressed me so far, idk about you guys.

But keep in mind o1-mini is faster, cheaper, and competitive with o1-preview at coding tasks

i.e. it's faster and cheaper than o1-preview, but it drags compared to 4o.

Your QA makes it look stable. Can we start using it?

I hope so, as soon as it gets merged. I will apply the finishing touches, and it should be mergeable once any other review comments are addressed.

@Keyrxng (Contributor, Author) commented on Sep 26, 2024

Typically, slash-command plugins have a commands entry in the manifest, but I'm unsure what to do here: if the command is configurable, then an entry does not make sense; if it's going to be a constant, then I could add one.

  • Currently we pass in the bot name via the config. Should this be an env var, or hardcoded so partners can't change it? Should we use the app_slug, or the bot.user.id and fetch its username?
  • Since the slash command in this case is now @UbiquityOS, which may be subject to change (if it's not subject to change, then it's easy), should I write a commands entry, or just have it forward the payload since the plugin does the processing anyway?

Review thread (outdated, resolved): .env.example
@gentlementlegen (Member) left a comment

Would be nice to be able to configure the ChatGPT endpoint and model through the configuration (can be done inside another issue).
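One way to support a configurable endpoint and model is to resolve them from plugin settings with sensible fallbacks. A minimal sketch; the setting key names and defaults here are assumptions, not the plugin's actual schema.

```typescript
// Hypothetical settings shape; the real plugin config keys may differ.
interface AskPluginSettings {
  openAiBaseUrl?: string; // custom endpoint, e.g. a proxy or Azure-style gateway
  model?: string;
}

// Assumed defaults: the public OpenAI endpoint and a widely available model.
const DEFAULTS = { baseUrl: "https://api.openai.com/v1", model: "gpt-4o" };

function resolveLlmConfig(settings: AskPluginSettings): { baseUrl: string; model: string } {
  return {
    baseUrl: settings.openAiBaseUrl ?? DEFAULTS.baseUrl,
    model: settings.model ?? DEFAULTS.model,
  };
}
```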

@0x4007 (Member) commented on Sep 26, 2024

o1 in my opinion is too slow compared to 4o

I think it's fine. A comment responding ten seconds later isn't a problem.

@Keyrxng (Contributor, Author) commented on Sep 26, 2024

I moved UBIQUITY_OS_APP_SLUG into .env so that we set it when we deploy the worker. I did this to make it impossible for a partner to white-label it and alter the command, as I got the feeling that's what's intended with this plugin.
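With the slug coming from the worker's environment rather than partner config, the mention check might look like the sketch below. This is a hypothetical illustration; the function name is a placeholder, and a real implementation would escape regex metacharacters in the slug.

```typescript
// The slug is read from the deployment env (e.g. process.env.UBIQUITY_OS_APP_SLUG),
// so partners cannot rename the command via their own config.
function isBotMention(commentBody: string, appSlug: string): boolean {
  // Match "@<slug>" case-insensitively as a whole word, anywhere in the comment.
  const mention = new RegExp(`@${appSlug}\\b`, "i");
  return mention.test(commentBody);
}
```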

Review threads (outdated, resolved): package.json, src/types/plugin-inputs.ts
@Keyrxng (Contributor, Author) commented on Oct 2, 2024

Some recent additional QA that was built on top of this plugin:

ubq-testing#2

I think it's fine. A comment responding ten seconds later isn't a problem

I noticed that I don't have o1 access, so I had to specify a model in the config or it would error for me. I know as an org we'll use o1, but should we use a stable GPT-4 model as the default to avoid this error for others?
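The fallback being asked about could be as simple as the sketch below: honor an explicit config value, otherwise default to a model every tier can use. The function name, flag, and model ids are illustrative assumptions, not the plugin's actual code.

```typescript
// Pick a model: explicit config wins; otherwise fall back to a broadly
// available default so users without o1 access don't hit an API error.
function selectModel(configModel: string | undefined, hasO1Access: boolean): string {
  if (configModel) return configModel;
  return hasO1Access ? "o1-mini" : "gpt-4o";
}
```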

@sshivaditya2019 mentioned this pull request on Oct 5, 2024

@sshivaditya2019 (Collaborator) commented
@Keyrxng I think it would be a good idea for me to continue with this PR, as my #2 PR builds on it.

@0x4007 rfc

@Keyrxng (Contributor, Author) commented on Oct 6, 2024

This PR should be merged separately from your feature. If required, branch off from this PR; do not add your logic to it.

This PR is held back by review only

@Keyrxng (Contributor, Author) commented on Oct 6, 2024

I realized I never pushed the branch to my repo which facilitated the onboarding bot built on top of this PR:

ubiquity-os-marketplace/text-vector-embeddings#18
https://github.com/ubq-testing/ask-plugin/tree/onboarding-bot

Is this getting merged or closed in favour of #2 @0x4007?

@Keyrxng changed the title from "/gpt slash command" to "@ubiquityos gpt command" on Oct 6, 2024
@0x4007 merged commit a1e47df into ubiquity-os-marketplace:development on Oct 18, 2024 (2 checks passed)
Successfully merging this pull request may close these issues.

/gpt ask a context aware question
4 participants