feat: Personalized search blog #168
Conversation
The architecture we built today hits a "Sweet Spot" for 95% of use cases. It gives you **hybrid search** and **personalization** with **zero added infrastructure**. You aren't managing separate inference clusters (like Ray), paying for re-ranker API calls, or maintaining ETL pipelines. You just have Postgres.
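A hybrid query along these lines — a sketch only, assuming pg_search's `@@@` operator and `paradedb.score` function plus pgvector's `<=>` distance; the table, columns, and 0.5/0.5 weights are illustrative — could look like:

```sql
-- Illustrative hybrid search: blend a BM25 text match with vector
-- similarity in a single Postgres query. No external re-ranker needed.
SELECT id, title
FROM documents
WHERE description @@@ 'wireless keyboard'                      -- full-text match
ORDER BY 0.5 * paradedb.score(id)                              -- lexical relevance
       + 0.5 * (1 - (embedding <=> '[0.1,0.2,0.3]'::vector))   -- vector similarity
LIMIT 10;
```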
The main consideration? **Resource Management**. Because search runs on your database, it shares CPU and RAM with your transactional workload. For most applications, this is a non-issue given Postgres's efficiency. For high-scale deployments, you can simply run this on a **dedicated read replica** to isolate inference load from your primary writes.
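One low-friction way to make that split — a sketch; hostnames and database names are placeholders — is libpq's multi-host connection strings, which route each session by `target_session_attrs` (the `prefer-standby` value needs a libpq from PostgreSQL 14 or later):

```
# OLTP traffic: must land on a writable node (the primary)
postgresql://db-primary,db-replica/app?target_session_attrs=read-write

# Search traffic: prefer a standby so inference load stays off the primary
postgresql://db-primary,db-replica/app?target_session_attrs=prefer-standby
```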
I think this is worth expanding upon. Dedicated replicas are an interesting concept and we want to educate the market. We can mention that the dedicated replica can be set up via either:
- physical replication — the index also lives on the primary, so you pay a small write price, but you can then serve read queries from a subset of your read replicas. The benefit is full transactionality and schema replication.
- logical replication — properly isolated: the index lives only on the replica (or a subset of replicas), but you lose transactionality.

The best approach depends on your infra and requirements.
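For the logical-replication variant, a minimal sketch (publication, subscription, and index names are illustrative, as is the `bm25` index signature) might be:

```sql
-- On the primary: publish only the table you want searchable.
CREATE PUBLICATION search_pub FOR TABLE documents;

-- On the dedicated search replica: subscribe to that publication...
CREATE SUBSCRIPTION search_sub
  CONNECTION 'host=db-primary dbname=app user=replicator'
  PUBLICATION search_pub;

-- ...and build the search index here only, so the primary pays no
-- index-maintenance cost on writes.
CREATE INDEX documents_search_idx ON documents
  USING bm25 (id, title, body)
  WITH (key_field = 'id');
```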
Co-authored-by: Philippe Noël <[email protected]> Signed-off-by: Ankit <[email protected]>
src={pgconf}
alt="Presenting at PGConf NYC 2023"
/>
<Image src={pgconf} alt="Presenting at PGConf NYC 2023" />
- no?
ParadeDB applies this same principle to search: **Move the compute to the data, not the data to the compute.** By treating personalization as a database query rather than an application workflow, we simplify the stack, reduce latency, and give you a unified engine for both search and recommendations. If there is any more recommender engine workload you would like to see in Postgres, do write to us link to paradedb community.
ParadeDB applies this same principle to search: **Move the compute to the data, not the data to the compute.** Treat personalization as a database query rather than an application workflow. You simplify the stack, reduce latency, and get a unified engine for both search and recommendations.
Have a recommender workload you'd like to see in Postgres? Write to us in the ParadeDB community.
Would link here