
Conversation

@jamessewell
Collaborator

No description provided.



The architecture we built today hits a sweet spot for 95% of use cases. It gives you **hybrid search** and **personalization** with **zero added infrastructure**. You aren't managing separate inference clusters (like Ray), paying for re-ranker API calls, or maintaining ETL pipelines. You just have Postgres.

The main consideration? **Resource management**. Because search runs on your database, it shares CPU and RAM with your transactional workload. For most applications, this is a non-issue given Postgres's efficiency. For high-scale deployments, you can simply run this on a **dedicated read replica** to isolate inference load from your primary writes.
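
For illustration, the kind of hybrid query this refers to can stay in a single Postgres statement. A minimal sketch, assuming a hypothetical `products` table with an `embedding` column, ParadeDB's `pg_search` (the `@@@` operator and `paradedb.score`) and `pgvector` (the `<=>` cosine-distance operator); the table, columns, and query terms are illustrative:

```sql
-- Hypothetical hybrid query: BM25 full-text relevance from pg_search,
-- plus vector similarity from pgvector, combined in one statement.
SELECT id,
       description,
       paradedb.score(id)                        AS bm25_score,
       embedding <=> '[0.1, 0.2, 0.3]'::vector   AS vector_distance
FROM products
WHERE description @@@ 'wireless headphones'
ORDER BY bm25_score DESC, vector_distance ASC
LIMIT 10;
```
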
Member


I think this is worth expanding on. It's an interesting concept to have dedicated replicas, and we want to educate the market. We can mention that the dedicated replica can be set up via either:

  • physical replication: the index also exists on the primary, so you pay a small write cost, but you can route search queries to only a designated subset of your read replicas. The benefit is that you keep full transactionality and schema replication.
  • logical replication: properly isolated, since the index exists only on the replica (or a subset of replicas), but you lose transactionality and schema changes are not replicated.

The best approach depends on your infrastructure and requirements.
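
For the logical-replication option, a rough sketch of the setup (table, publication, and connection details are illustrative; the index DDL assumes ParadeDB's bm25 index type):

```sql
-- On the primary: publish the table(s) that need search.
CREATE PUBLICATION search_pub FOR TABLE products;

-- On the dedicated search replica: the table must already exist here
-- (logical replication does not copy schema), then subscribe to the primary.
CREATE SUBSCRIPTION search_sub
    CONNECTION 'host=primary-host dbname=app user=replicator'
    PUBLICATION search_pub;

-- Build the search index only on this replica, so the primary
-- pays no indexing cost on writes.
CREATE INDEX products_search_idx ON products
    USING bm25 (id, description)
    WITH (key_field = 'id');
```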

ankitml and others added 3 commits January 12, 2026 17:33
Co-authored-by: Philippe Noël <[email protected]>
Signed-off-by: Ankit  <[email protected]>
Co-authored-by: Philippe Noël <[email protected]>
Signed-off-by: Ankit  <[email protected]>
ankitml and others added 4 commits January 12, 2026 17:37
Co-authored-by: Philippe Noël <[email protected]>
Signed-off-by: Ankit  <[email protected]>
Co-authored-by: Philippe Noël <[email protected]>
Signed-off-by: Ankit  <[email protected]>
Co-authored-by: Philippe Noël <[email protected]>
Signed-off-by: Ankit  <[email protected]>
src={pgconf}
alt="Presenting at PGConf NYC 2023"
/>
<Image src={pgconf} alt="Presenting at PGConf NYC 2023" />
Member


  1. no?



ParadeDB applies this same principle to search: **Move the compute to the data, not the data to the compute.** By treating personalization as a database query rather than an application workflow, we simplify the stack, reduce latency, and give you a unified engine for both search and recommendations. If there is any more recommender engine workload you would like to see in Postgres, do write to us link to paradedb community.
ParadeDB applies this same principle to search: **Move the compute to the data, not the data to the compute.** Treat personalization as a database query rather than an application workflow. You simplify the stack, reduce latency, and get a unified engine for both search and recommendations.

Have a recommender workload you'd like to see in Postgres? Write to us in the ParadeDB community.
Member


Would link here
