
[Bug]: large dataset error - Maximum call stack size exceeded #10504

Open
chillpilllike opened this issue Dec 9, 2024 · 13 comments

Comments

@chillpilllike

Package.json file

{
  "name": "medusa-starter-default",
  "version": "0.0.1",
  "description": "A starter for Medusa projects.",
  "author": "Medusa (https://medusajs.com)",
  "license": "MIT",
  "keywords": [
    "sqlite",
    "postgres",
    "typescript",
    "ecommerce",
    "headless",
    "medusa"
  ],
  "scripts": {
    "build": "medusa build",
    "predeploy": "medusa db:migrate",
    "seed": "medusa exec ./src/scripts/seed.ts",
    "start": "medusa start",
    "dev": "medusa develop",
    "test:integration:http": "TEST_TYPE=integration:http NODE_OPTIONS=--experimental-vm-modules jest --silent=false --runInBand --forceExit",
    "test:integration:modules": "TEST_TYPE=integration:modules NODE_OPTIONS=--experimental-vm-modules jest --silent --runInBand --forceExit",
    "test:unit": "TEST_TYPE=unit NODE_OPTIONS=--experimental-vm-modules jest --silent --runInBand --forceExit"
  },
  "dependencies": {
    "@medusajs/admin-sdk": "latest",
    "@medusajs/cli": "latest",
    "@medusajs/framework": "latest",
    "@medusajs/core-flows": "latest",
    "@medusajs/medusa": "latest",
    "@mikro-orm/core": "5.9.7",
    "@mikro-orm/knex": "5.9.7",
    "@mikro-orm/migrations": "5.9.7",
    "@mikro-orm/postgresql": "5.9.7",
    "awilix": "^8.0.1",
    "pg": "^8.13.0"
  },
  "devDependencies": {
    "@medusajs/test-utils": "latest",
    "@mikro-orm/cli": "5.9.7",
    "@swc/core": "1.5.7",
    "@swc/jest": "^0.2.36",
    "@types/jest": "^29.5.13",
    "@types/node": "^20.0.0",
    "@types/react": "^18.3.2",
    "@types/react-dom": "^18.2.25",
    "jest": "^29.7.0",
    "prop-types": "^15.8.1",
    "react": "^18.2.0",
    "react-dom": "^18.2.0",
    "ts-node": "^10.9.2",
    "typescript": "^5.6.2",
    "vite": "^5.2.11"
  },
  "engines": {
    "node": ">=20"
  }
}

Node.js version

20

Database and its version

16

Operating system name and version

22.04

Browser name

safari

What happened?

When I try to open the storefront, it returns a 500 error and the backend logs the error below:

[14:58:13.000] ERROR:
message: "Maximum call stack size exceeded"
stack:
    at PostgreSqlExceptionConverter.convertException (/app/.medusa/server/node_modules/@mikro-orm/core/platforms/ExceptionConverter.js:8:16)
    at PostgreSqlExceptionConverter.convertException (/app/.medusa/server/node_modules/@mikro-orm/postgresql/PostgreSqlExceptionConverter.js:42:22)
    at PostgreSqlDriver.convertException (/app/.medusa/server/node_modules/@mikro-orm/core/drivers/DatabaseDriver.js:201:54)
    at <anonymous> (/app/.medusa/server/node_modules/@mikro-orm/core/drivers/DatabaseDriver.js:205:24)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at unwrapRaw (/app/.medusa/server/node_modules/knex/lib/formatter/wrappingFormatter.js:104:31)
    at wrap (/app/.medusa/server/node_modules/knex/lib/formatter/wrappingFormatter.js:80:15)
    at QueryCompiler_PG.get tableName [as tableName] (/app/.medusa/server/node_modules/knex/lib/query/querycompiler.js:1472:11)
    at QueryCompiler_PG.onlyJson (/app/.medusa/server/node_modules/knex/lib/query/querycompiler.js:800:13)
    at QueryCompiler_PG.columns (/app/.medusa/server/node_modules/knex/lib/query/querycompiler.js:313:25)
    at <anonymous> (/app/.medusa/server/node_modules/knex/lib/query/querycompiler.js:135:40)
    at Array.forEach (<anonymous>)
    at QueryCompiler_PG.select (/app/.medusa/server/node_modules/knex/lib/query/querycompiler.js:134:16)
    at QueryCompiler_PG.toSQL (/app/.medusa/server/node_modules/knex/lib/query/querycompiler.js:75:29)
    at unwrapRaw (/app/.medusa/server/node_modules/knex/lib/formatter/wrappingFormatter.js:102:41)
[14:58:13.000] USERLVL:
message: "172.18.0.1 - - [08/Dec/2024:14:58:13 +0000] "GET /store/products?limit=1&offset=999999&region_id=reg_01JEE2P8Z6ESDR4XZPEHWNTDEB HTTP/1.1" 500 86 "-" "undici""

Expected behavior

Products should be fetched without errors via the API.

Actual behavior

{"code":"unknown_error","type":"unknown_error","message":"An unknown error occurred."}%

Link to reproduction repo

https://github.com/medusajs/medusa.git

@chillpilllike
Author

This is my backend url

https://secretgreen-au-backend-server.g5edov.easypanel.host/

I cannot get the product list at all now; even fetching 10 products with an offset gives the error:

curl -X GET "https://secretgreen-au-backend-server.g5edov.easypanel.host/store/products?limit=1&offset=10" \
  -H "x-publishable-api-key: pk_1f84a883e7f454772abcc1dd2ae1e3d562d64a5cf6b58f4c5e5e75bc2b1078a8" \
  -H "Content-Type: application/json"

I have 130,000 products and growing.
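
A quick way to narrow down where the listing breaks is to probe /store/products with growing offsets and note the first one that returns a 500. This is only a rough diagnostic sketch (Node 20+ global fetch; the base URL and publishable key below are placeholders, not the real values):

// probe-offsets.ts: walk /store/products with growing offsets and report the
// first offset that fails. BASE_URL and PUBLISHABLE_KEY are placeholders.
const BASE_URL = "https://your-backend.example.com";
const PUBLISHABLE_KEY = "pk_...";

async function probe(offset: number): Promise<number> {
  const res = await fetch(
    `${BASE_URL}/store/products?limit=1&offset=${offset}`,
    {
      headers: {
        "x-publishable-api-key": PUBLISHABLE_KEY,
        "Content-Type": "application/json",
      },
    }
  );
  return res.status;
}

async function main() {
  // Double the offset each time so the failing range is found quickly.
  for (let offset = 1; offset <= 1_000_000; offset *= 2) {
    const status = await probe(offset);
    console.log(`offset=${offset} -> HTTP ${status}`);
    if (status >= 500) {
      console.log(`First failing offset is at or below ${offset}`);
      return;
    }
  }
  console.log("No failing offset found in the probed range");
}

main().catch(console.error);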

@riqwan
Contributor

riqwan commented Dec 9, 2024

@chillpilllike do you have a dump or seed you can share that can help me reproduce this? riqwan[at]medusajs.com

@chillpilllike
Author

I have shared the database backup file. Thank you!

@algorithm-diva

Hi, I’m also getting the same error and feeling stuck. I’ve tried a couple of things, but nothing has worked so far. Were you able to resolve this?

@olivermrbl
Contributor

@algorithm-diva, can you share more about your machine(s) and store volume, e.g. the number of products?

@algorithm-diva

Our setup runs on a company-provided AWS instance with 36 vCPUs and 96 GB of RAM. Here’s what we observed regarding store volume:

1.	Initial Test:
•	I imported 80k products from their old platform as a test.
•	Both the backend and storefront performed without any issues.

2.	Scaling to 160k Products:
•	I ran the script overnight on a new database, increasing the number of products to 160k (a sketch of such a bulk-creation script follows this list).
•	The backend continued to work seamlessly, with no performance lags or errors.
•	However, the storefront started returning 500 errors.

3.	Reducing to 80k Products:
•	When the number of products was reduced back to around 80k, the storefront stopped returning 500 errors and resumed normal operation.

4.	Further Investigation:
•	Suspecting the issue might be in our storefront code, I tested with Medusa’s starter storefronts.
•	All storefronts worked fine against the 80k-product database, but the same error described here appeared as soon as they were connected to the 160k-product database.
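
A script along these lines can generate a catalog of that size for reproduction. This is a simplified sketch only, not the exact script we ran: it assumes a Medusa v2 exec script run via `medusa exec`, uses createProductsWorkflow from @medusajs/core-flows, and omits details such as sales-channel assignment (needed for products to show up in the storefront), so field requirements may differ per setup:

// src/scripts/seed-bulk.ts: simplified sketch of a bulk product generator.
// Run with: npx medusa exec ./src/scripts/seed-bulk.ts
import { ExecArgs } from "@medusajs/framework/types";
import { ProductStatus } from "@medusajs/framework/utils";
import { createProductsWorkflow } from "@medusajs/core-flows";

export default async function seedBulk({ container }: ExecArgs) {
  const TOTAL = 160_000;
  const BATCH = 500;

  for (let start = 0; start < TOTAL; start += BATCH) {
    const products = Array.from({ length: BATCH }, (_, i) => ({
      title: `Test product ${start + i}`,
      status: ProductStatus.PUBLISHED,
      options: [{ title: "Default", values: ["Default"] }],
      variants: [
        {
          title: "Default",
          options: { Default: "Default" },
          prices: [{ amount: 10, currency_code: "usd" }],
        },
      ],
      // Note: sales channel assignment is omitted here; it is required for the
      // products to be visible through the store API.
    }));

    // Insert the batch through the core product-creation workflow.
    await createProductsWorkflow(container).run({ input: { products } });
    console.log(`Created ${start + BATCH}/${TOTAL} products`);
  }
}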

@riqwan
Contributor

riqwan commented Dec 11, 2024

@algorithm-diva if you do a curl on the store/products endpoint, does that return any results?

@algorithm-diva

@riqwan The curl command works perfectly fine with the database containing 80k products, returning the product list as expected. However, when I run the same curl command on a database with 160k products, it fails with the following response:

{"code":"unknown_error","type":"unknown_error","message":"An unknown error occurred."}

The backend logs show the same recurring error: “Maximum call stack size exceeded.”

This indicates that the issue occurs specifically when dealing with the larger dataset of 160k products.

@chillpilllike
Author

@riqwan, were you able to reproduce the error using the dump file I provided?

@chillpilllike
Author

Hello team, is there any solution to this problem yet?

@riqwan
Contributor

riqwan commented Dec 16, 2024

Hey, unfortunately there isn't a solution for this at the moment.

Medusa v2 introduces a new modular architecture with a strict domain separation at the database level. This brings significant improvements and future enablements, but also introduces (temporary) limitations.

One known limitation is the size of product catalogs. Our cross-module filtering tooling isn’t yet optimized for the fully modular architecture, and stores with very large catalog sizes may encounter issues with database queries becoming too large, as highlighted in this GitHub issue.

We are committed to removing this limitation within the next 3-4 months. In the meantime, we recommend integrating a search engine like Meilisearch as a store-facing query layer for large catalogs.
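
As a rough illustration of that workaround, product documents can be indexed into Meilisearch and the storefront's listing pages can query the index instead of /store/products. This is a minimal sketch using the official meilisearch JS client; the host, API key, document shape, and the sync mechanism (e.g. a subscriber on product events or a scheduled job) are placeholders and assumptions, not a prescribed setup:

// search-layer.ts: minimal sketch of Meilisearch as the store-facing query
// layer for product listings. Host and API key are placeholders.
import { MeiliSearch } from "meilisearch";

const client = new MeiliSearch({
  host: "http://localhost:7700",
  apiKey: "a_master_key_placeholder",
});

const index = client.index("products");

// Index (or re-index) product documents, e.g. from a subscriber or a cron job.
// Only the fields the storefront needs are stored.
export async function indexProducts(
  products: { id: string; title: string; handle: string; thumbnail?: string }[]
) {
  await index.addDocuments(products, { primaryKey: "id" });
}

// Paginated listing query served from Meilisearch instead of /store/products.
export async function listProducts(page: number, pageSize = 24) {
  const result = await index.search("", {
    limit: pageSize,
    offset: page * pageSize,
  });
  return { products: result.hits, estimatedTotal: result.estimatedTotalHits };
}

The point is that listing and search traffic hits the search index, so catalog size no longer drives the size of the cross-module database queries.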

We will provide updates as we make progress. Thank you for your patience and for raising these issues.

@chillpilllike
Author

Oh, I’m feeling a bit disappointed. Medusa was truly my top choice among other platforms. :(

I’ll check back in the future when Medusa has matured further. Thank you!

@olivermrbl
Contributor

@chillpilllike, I'd love to learn more about your use case, if you don't mind sharing. As @riqwan alluded to, there are several workarounds now that might apply to you.

Feel free to send me an email at oli[at]medusajs.com, if this is of interest.
