Trying out Axiom-Core for a restricted LLM setup — a couple of questions #37

Description

@mike-labX

Hey,

I’ve been poking around Axiom-Core a bit and I really like the direction, so I figured I’d ask a couple of questions while it’s fresh.

We’ve been trying to use cloud LLMs on internal data, but every time we get close, compliance shuts it down. Redaction helps a bit but usually kills the reasoning quality, and fully on-prem setups feel like a step backwards.

I tried running a few examples locally with Axiom-Core, and it’s interesting that the output still feels “reason-able” for an LLM while the raw input never leaves the boundary. That’s not something we’ve had much luck with using typical PII masking tools.

One thing I wanted to double-check: is the idea that Axiom makes it impossible by design for raw identifiers to leak downstream, rather than relying on best-effort filtering? That mental model would make a lot of sense for how we’re thinking about boundaries internally.

Also curious how you think about audits / reproducibility. Should the same input always map to the same transformed output? That’s usually something compliance folks care about.
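To be concrete about what I mean by a deterministic, one-way mapping, here’s the kind of property we’d want to be able to demonstrate to auditors. This is just a generic sketch using a keyed HMAC, not Axiom-Core’s actual API; the function and key names are made up:

```python
import hmac
import hashlib

def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Keyed one-way mapping: the same input and key always yield the
    same token, but the raw identifier cannot be recovered downstream
    without the key (hypothetical example, not Axiom-Core's API)."""
    digest = hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256)
    return "tok_" + digest.hexdigest()[:16]

# Key stays inside the trust boundary; only tokens leave it.
key = b"held-inside-the-boundary"
token = pseudonymize("alice@example.com", key)
```

If Axiom guarantees something along these lines (stable tokens per input, no recoverable raw value downstream), that would answer the reproducibility question for us.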

Anyway, this feels more like a real infra primitive than another “privacy layer,” which is refreshing. Just trying to understand if this is the right building block for what we’re doing.

Nice work on this

Metadata

    Labels

    question (Further information is requested)
