429 From OpenAI #8

Open
joshghent opened this issue Apr 12, 2023 · 2 comments

@joshghent

Hi there! Awesome project, really cool use of AI!
I am having some issues getting it running though. I always get this error when sending a message.

/telegram-chatgpt-concierge-bot/node_modules/openai/node_modules/axios/lib/core/createError.js:16
  var error = new Error(message);
              ^
Error: Request failed with status code 429

I have added my card to OpenAI and already pay for ChatGPT Plus, so I'm not sure why I'm getting this.
Interestingly, OpenAI doesn't show that the API key has ever been used.

Thanks again for the great project.
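
For reference, a minimal standalone call is an easy way to check whether the key is accepted at all outside the bot. This is only a sketch, assuming the openai v3 Node client the project already pulls in and an OPENAI_API_KEY environment variable:

```typescript
import { Configuration, OpenAIApi } from "openai";

// Assumes OPENAI_API_KEY is set in the environment.
const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
);

async function smokeTest() {
  const res = await openai.createChatCompletion({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: "ping" }],
  });
  console.log(res.data.choices[0].message?.content);
}

smokeTest().catch((err: any) => {
  // A 429 here with type "insufficient_quota" points at billing/quota, not rate limits.
  console.error(err.response?.status, err.response?.data);
});
```

If this fails with the same 429, the problem is on the account/key side rather than anything the bot is doing.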

@janek

janek commented Apr 13, 2023

I think I have the same issue. There's an article about it in the OpenAI help docs: https://help.openai.com/en/articles/5955604-how-can-i-solve-429-too-many-requests-errors

It seems like it could be an issue on OpenAI's side that's potentially workable around in the repo, or an issue with the project itself (perhaps it's making repeated requests?).
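
If it does turn out to be repeated requests, a small backoff wrapper around the failing call would be one repo-side option. This is only a sketch: `withBackoff` and `fn` are hypothetical names standing in for whatever the bot actually calls, and it deliberately refuses to retry insufficient_quota errors, since those never succeed on retry.

```typescript
// Hypothetical helper: retry a call on 429 rate limits with exponential backoff.
async function withBackoff<T>(fn: () => Promise<T>, maxRetries = 3): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err: any) {
      const status = err?.response?.status;
      const type = err?.response?.data?.error?.type;
      // Only retry genuine rate limits; an "insufficient_quota" 429 won't clear on its own.
      if (status !== 429 || type === "insufficient_quota" || attempt >= maxRetries) {
        throw err;
      }
      const delayMs = 1000 * 2 ** attempt; // 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// Usage (hypothetical): wrap whatever produces the failing request.
// const completion = await withBackoff(() => openai.createChatCompletion(params));
```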

Request summary:

```
Input: h
Entering new agent_executor chain...
AxiosError: Request failed with status code 429
    at createError (/Users/janek/Developer/telegram-chatgpt-concierge-bot/node_modules/langchain/dist/util/axios-fetch-adapter.cjs:313:16)
    at settle (/Users/janek/Developer/telegram-chatgpt-concierge-bot/node_modules/langchain/src/util/axios-fetch-adapter.js:47:3)
    at /Users/janek/Developer/telegram-chatgpt-concierge-bot/node_modules/langchain/dist/util/axios-fetch-adapter.cjs:181:19
    at new Promise ()
    at fetchAdapter (/Users/janek/Developer/telegram-chatgpt-concierge-bot/node_modules/langchain/dist/util/axios-fetch-adapter.cjs:173:12)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)

Request: POST https://api.openai.com/v1/chat/completions
  data: {"model":"gpt-3.5-turbo","temperature":1,"top_p":1,"frequency_penalty":0,"presence_penalty":0,"max_tokens":1000,"n":1,"stop":["Observation:"],"stream":false,"messages":[ ...LangChain conversational agent prompt... ]}

Response: 429 Too Many Requests
  error: {"message":"You exceeded your current quota, please check your plan and billing details.","type":"insufficient_quota","param":null,"code":null}
```
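
Most of that dump is axios config; the part that matters is the response body at the end, which reports type "insufficient_quota" rather than a rate limit. A small helper along these lines (hypothetical, not part of the repo) could log just that instead of the whole request object:

```typescript
// Hypothetical helper: pull the useful fields out of an Axios/OpenAI failure
// so logs show the actual API error instead of the full request config.
function describeOpenAIError(err: any): string {
  const status = err?.response?.status;
  const apiError = err?.response?.data?.error; // e.g. { message, type: "insufficient_quota" }
  return apiError
    ? `${status} ${apiError.type ?? "unknown"}: ${apiError.message}`
    : err?.message ?? String(err);
}

// For the failure above this would print something like:
// "429 insufficient_quota: You exceeded your current quota, please check your plan and billing details."
```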

@joshghent (Author)

What's odd, though, is that according to those docs the limit is 10K per minute. I can't understand why a single request would trigger a 429.
