Type-safe data decoding and encoding for the minimalist.
npm install tiny-decoders
tiny-decoders requires TypeScript 5+ (because it uses const type parameters).
It is recommended to enable the exactOptionalPropertyTypes option in tsconfig.json – see the note at the field function.
Note that it is possible to use tiny-decoders in plain JavaScript without type checking as well.
import {
array,
boolean,
type DecoderResult,
field,
fields,
format,
type Infer,
number,
string,
} from "tiny-decoders";
// You can also import into a namespace if you want (conventionally called `Codec`):
import * as Codec from "tiny-decoders";
const userCodec = fields({
name: string,
active: field(boolean, { renameFrom: "is_active" }),
age: field(number, { optional: true }),
interests: array(string),
});
type User = Infer<typeof userCodec>;
// equivalent to:
type User = {
name: string;
active: boolean;
age?: number;
interests: Array<string>;
};
const payload: unknown = getSomeJSON();
const userResult: DecoderResult<User> = userCodec.decoder(payload);
switch (userResult.tag) {
case "DecoderError":
console.error(format(userResult.error));
break;
case "Valid":
console.log(userResult.value);
break;
}
Here’s an example error message:
At root["age"]:
Expected a number
Got: "30"
type Codec<Decoded, Encoded = unknown> = {
decoder: (value: unknown) => DecoderResult<Decoded>;
encoder: (value: Decoded) => Encoded;
};
type DecoderResult<Decoded> =
| {
tag: "DecoderError";
error: DecoderError;
}
| {
tag: "Valid";
value: Decoded;
};
A codec is an object with a decoder and an encoder.
A decoder is a function that:
- Takes an unknown value and refines it to any type you want (Decoded).
- Returns a DecoderResult: Either that refined Decoded or a DecoderError.
An encoder is a function that turns Decoded back into what the input looked like. You can think of it as “turning Decoded back into unknown”, but usually the Encoded type variable is inferred to something more precise.
That’s it!
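Since a codec is just an object with a decoder and an encoder, you can also write one by hand when none of the built-in ones fit. Here is a minimal sketch (dateCodec is a made-up name, not part of the library) that decodes an ISO 8601 string into a Date and encodes it back:
import { type Codec } from "tiny-decoders";

// Decodes an ISO 8601 string into a Date, and encodes a Date back to a string.
const dateCodec: Codec<Date, string> = {
  decoder: (value) =>
    typeof value === "string" && !Number.isNaN(Date.parse(value))
      ? { tag: "Valid", value: new Date(value) }
      : {
          tag: "DecoderError",
          error: {
            tag: "custom",
            message: "Expected an ISO 8601 date string",
            got: value,
            path: [],
          },
        },
  encoder: (date) => date.toISOString(),
};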
tiny-decoders ships with a bunch of codecs, and a few functions to combine codecs. This way you can describe the shape of any data!
tiny-decoders used to only have decoders, and not encoders. That’s why it’s called tiny-decoders and not tiny-codecs. Decoders are still the most interesting part.
Here’s a summary of all codecs (with slightly simplified type annotations) and related functions.
- Codec type: Codec and DecoderResult
- Primitives: unknown, boolean, number, bigint, string
- Collections: array, record, tuple
- Object literals: fields with field
- Unions:
- Of primitive literals: primitiveUnion
- Of different types: multi
- Of tagged objects: taggedUnion with tag
- With undefined: undefinedOr
- With null: nullOr
- Other unions: untagged union example
- Intersections: intersection example
- Transformation: map, flatMap
- Recursion: recursive
- Errors: DecoderError, format, repr
- JSON: Replacement for JSON.parse and JSON.stringify
- Tips: Type inference, things left out
Codec | Type | JSON | TypeScript |
---|---|---|---|
unknown | Codec<unknown> | any | unknown |
boolean | Codec<boolean> | boolean | boolean |
number | Codec<number> | number | number |
bigint | Codec<bigint> | n/a | bigint |
string | Codec<string> | string | string |
primitiveUnion | (variants: ["string1", "string2", "stringN", 1, 2, true]) => Codec<"string1" \| "string2" \| "stringN" \| 1 \| 2 \| true> | string, number, boolean, null | "string1" \| "string2" \| "stringN" \| 1 \| 2 \| true |
array | (decoder: Codec<T>) => Codec<Array<T>> | array | Array<T> |
record | (decoder: Codec<T>) => Codec<Record<string, T>> | object | Record<string, T> |
fields | (mapping: { field1: Codec<T1>, field2: Field<T2, {optional: true}>, field3: Field<T3, {renameFrom: "field_3"}>, fieldN: Codec<TN> }) => Codec<{ field1: T1, field2?: T2, field3: T3, fieldN: TN }> | { "field1": ..., "field2": ..., "field_3": ..., "fieldN": ... } or: { "field1": ..., "field_3": ..., "fieldN": ... } | { field1: T1, field2?: T2, field3: T3, fieldN: TN } |
field | (codec: Codec<Decoded>, meta: Meta): Field<Decoded, Meta> | n/a | n/a, only used with fields |
taggedUnion | (decodedCommonField: string, variants: Array<Parameters<typeof fields>[0]>) => Codec<T1 \| T2 \| TN> | object | T1 \| T2 \| TN |
tag | (decoded: "string literal", options?: Options): Field<"string literal", Meta> | string | "string literal" |
tuple | (codecs: [Codec<T1>, Codec<T2>, Codec<TN>]) => Codec<[T1, T2, TN]> | array | [T1, T2, TN] |
multi | (types: ["type1", "type2", "type10"]) => Codec<{ type: "type1", value: type1 } \| { type: "type2", value: type2 } \| { type: "type10", value: type10 }> | you decide | A subset of: { type: "undefined"; value: undefined } \| { type: "null"; value: null } \| { type: "boolean"; value: boolean } \| { type: "number"; value: number } \| { type: "bigint"; value: bigint } \| { type: "string"; value: string } \| { type: "symbol"; value: symbol } \| { type: "function"; value: Function } \| { type: "array"; value: Array } \| { type: "object"; value: Record } |
recursive | (callback: () => Codec<T>) => Codec<T> | n/a | T |
undefinedOr | (codec: Codec<T>) => Codec<T \| undefined> | undefined or … | T \| undefined |
nullOr | (codec: Codec<T>) => Codec<T \| null> | null or … | T \| null |
map | (codec: Codec<T>, transform: { decoder: (value: T) => U; encoder: (value: U) => T; }) => Codec<U> | n/a | U |
flatMap | (decoder: Codec<T>, transform: { decoder: (value: T) => DecoderResult<U>; encoder: (value: U) => T; }) => Codec<U> | n/a | U |
const unknown: Codec<unknown>;
Codec for any JSON value, and a TypeScript unknown. Basically, both the decoder and encoder are identity functions.
const boolean: Codec<boolean, boolean>;
Codec for a JSON boolean, and a TypeScript boolean.
const number: Codec<number, number>;
Codec for a JSON number, and a TypeScript number.
const bigint: Codec<bigint, bigint>;
Codec for a JavaScript bigint, and a TypeScript bigint.
Note: JSON does not have bigint. You need to serialize them to strings, and then parse them to bigint. This function does not do that for you. It is only useful when you are decoding values that already are JavaScript bigint, but are unknown to TypeScript.
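For example, if your JSON carries bigints as strings, a sketch of bridging that yourself with map could look like this (bigintAsStringCodec is a made-up name):
import { type Codec, map, string } from "tiny-decoders";

// Decodes a JSON string into a bigint, and encodes the bigint back to a string.
// Note: BigInt(str) throws on invalid input; use flatMap instead if you want a
// DecoderError rather than an exception.
const bigintAsStringCodec: Codec<bigint, string> = map(string, {
  decoder: (str) => BigInt(str),
  encoder: (value) => value.toString(),
});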
const string: Codec<string, string>;
Codec for a JSON string, and a TypeScript string.
function primitiveUnion<
const Variants extends readonly [primitive, ...Array<primitive>],
>(variants: Variants): Codec<Variants[number], Variants[number]>;
type primitive = bigint | boolean | number | string | symbol | null | undefined;
Codec for a set of specific primitive values, and a TypeScript union of those values.
The variants parameter is an array of the values you want. You must provide at least one variant. If you provide exactly one variant, you get a codec for a single, constant, exact value (a union with just one variant).
If you have an object and want to use its keys for a string union, there’s an example of that in the type inference example.
Example:
type Color = "green" | "red";
const colorCodec: Codec<Color> = primitiveUnion(["green", "red"]);
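For example, providing a single variant gives a codec for one exact value (versionCodec here is a made-up example):
import { type Infer, primitiveUnion } from "tiny-decoders";

// A codec for the exact value 1 – useful for things like a version field.
const versionCodec = primitiveUnion([1]);
type Version = Infer<typeof versionCodec>; // 1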
function array<DecodedItem, EncodedItem>(
codec: Codec<DecodedItem, EncodedItem>,
): Codec<Array<DecodedItem>, Array<EncodedItem>>;
Codec for a JSON array, and a TypeScript Array
.
The passed codec is for each item of the array.
For example, array(string) is a codec for an array of strings (Array<string>).
function record<DecodedValue, EncodedValue>(
codec: Codec<DecodedValue, EncodedValue>,
): Codec<Record<string, DecodedValue>, Record<string, EncodedValue>>;
Codec for a JSON object, and a TypeScript Record
. (Yes, this function is named after TypeScript’s type. Other languages call this a “dict”.)
The passed codec is for each value of the object.
For example, record(number) is a codec for an object where the keys can be anything and the values are numbers (Record<string, number>).
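A small usage sketch (the data and the scoresCodec name are made up):
import { type Codec, number, record } from "tiny-decoders";

// Keys can be any string; every value must be a number.
const scoresCodec: Codec<Record<string, number>> = record(number);

const result = scoresCodec.decoder({ alice: 1, bob: 2 });
// result is { tag: "Valid", value: { alice: 1, bob: 2 } }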
function fields<Mapping extends FieldsMapping>(
mapping: Mapping,
{ allowExtraFields = true }: { allowExtraFields?: boolean } = {},
): Codec<InferFields<Mapping>, InferEncodedFields<Mapping>>;
type FieldsMapping = Record<string, Codec<any> | Field<any, any, FieldMeta>>;
type Field<Decoded, Encoded, Meta extends FieldMeta> = Meta & {
codec: Codec<Decoded, Encoded>;
};
type FieldMeta = {
renameFrom?: string | undefined;
optional?: boolean | undefined;
tag?: { decoded: primitive; encoded: primitive } | undefined;
};
type primitive = bigint | boolean | number | string | symbol | null | undefined;
type InferFields<Mapping extends FieldsMapping> = magic;
type InferEncodedFields<Mapping extends FieldsMapping> = magic;
Codec for a JSON object with certain fields, and a TypeScript object type/interface with known fields.
The mapping parameter is an object with the keys you want in your TypeScript object. The values are either Codecs or Fields. A Field is just a Codec with some metadata: Whether the field is optional, and whether the field has a different name in the JSON object. Passing a plain Codec instead of a Field is just a convenience shortcut for passing a Field with the default metadata (the field is required, and has the same name both in TypeScript and in JSON).
Use the field function to create a Field – use it when you need to mark a field as optional, or when it has a different name in JSON than in TypeScript.
Example:
type User = {
name: string;
age?: number;
active: boolean;
};
const userCodec: Codec<User> = fields({
name: string,
age: field(number, { optional: true }),
active: field(boolean, { renameFrom: "is_active" }),
});
The allowExtraFields option lets you choose between ignoring extraneous fields and making it an error.
- true (default) allows extra fields on the object.
- false returns a DecoderError for extra fields.
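For example, a sketch of a codec that rejects unknown fields instead of ignoring them (strictUserCodec and its fields are made up):
import { fields, number, string } from "tiny-decoders";

const strictUserCodec = fields(
  {
    name: string,
    age: number,
  },
  { allowExtraFields: false },
);

// strictUserCodec.decoder({ name: "Ada", age: 36 }) succeeds, while
// strictUserCodec.decoder({ name: "Ada", age: 36, extra: true })
// results in a DecoderError (tag: "exact fields").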
See also the Extra fields example.
function field<Decoded, Encoded, const Meta extends Omit<FieldMeta, "tag">>(
codec: Codec<Decoded, Encoded>,
meta: Meta,
): Field<Decoded, Encoded, Meta>;
type Field<Decoded, Encoded, Meta extends FieldMeta> = Meta & {
codec: Codec<Decoded, Encoded>;
};
type FieldMeta = {
renameFrom?: string | undefined;
optional?: boolean | undefined;
tag?: { decoded: primitive; encoded: primitive } | undefined;
};
type primitive = bigint | boolean | number | string | symbol | null | undefined;
This function takes a codec and lets you:
- Mark a field as optional: field(string, { optional: true })
- Rename a field: field(string, { renameFrom: "some_name" })
- Both: field(string, { optional: true, renameFrom: "some_name" })
Use it with fields.
The tag thing is handled by the tag function. It’s not something you’ll set manually using field. (That’s why the type annotation says Omit<FieldMeta, "tag">.)
Here’s an example illustrating the difference between field(string, { optional: true }) and undefinedOr(string):
const exampleCodec = fields({
// Required field.
a: string,
// Optional field.
b: field(string, { optional: true }),
// Required field that can be set to `undefined`:
c: undefinedOr(string),
// Optional field that can be set to `undefined`:
d: field(undefinedOr(string), { optional: true }),
});
The inferred type from exampleCodec is:
type Example = {
a: string;
b?: string;
c: string | undefined;
d?: string | undefined;
};
Warning
It is recommended to enable the exactOptionalPropertyTypes option in tsconfig.json.
Why? Let’s take this codec as an example:
const exampleCodec = fields({
name: field(string, { optional: true }),
});
With exactOptionalPropertyTypes enabled, the inferred type for exampleCodec is:
type Example = { name?: string };
That type allows constructing {} or { name: "some string" }. If you pass either of those to exampleCodec.decoder (such as exampleCodec.decoder({ name: "some string" })), the decoder will succeed. It makes sense that a decoder accepts things that it has produced itself (when no transformation is involved).
With exactOptionalPropertyTypes turned off (which is the default), the inferred type for exampleCodec is:
type Example = { name?: string | undefined };
Notice the added | undefined. That also allows constructing { name: undefined }. But if you run exampleCodec.decoder({ name: undefined }), the decoder will fail. The decoder only supports name existing and being set to a string, or name being missing. It does not support it being set to undefined explicitly. If you want to support that, use undefinedOr:
const exampleCodec = fields({
name: field(undefinedOr(string), { optional: true }),
});
That gives the same inferred type, but also supports decoding the name field being set to undefined explicitly.
All in all, you avoid a slight gotcha with optional fields and inferred types if you enable exactOptionalPropertyTypes.
function taggedUnion<
const DecodedCommonField extends keyof Variants[number],
Variants extends readonly [
Variant<DecodedCommonField>,
...Array<Variant<DecodedCommonField>>,
],
>(
decodedCommonField: DecodedCommonField,
variants: Variants,
{ allowExtraFields = true }: { allowExtraFields?: boolean } = {},
): Codec<
InferFieldsUnion<Variants[number]>,
InferEncodedFieldsUnion<Variants[number]>
>;
type Variant<DecodedCommonField extends number | string | symbol> = Record<
DecodedCommonField,
Field<any, any, { tag: { decoded: primitive; encoded: primitive } }>
> &
Record<string, Codec<any> | Field<any, any, FieldMeta>>;
type primitive = bigint | boolean | number | string | symbol | null | undefined;
type InferFieldsUnion<MappingsUnion extends FieldsMapping> = magic;
type InferEncodedFieldsUnion<MappingsUnion extends FieldsMapping> = magic;
// See `fields` for the definitions of `Field`, `FieldMeta` and `FieldsMapping`.
Codec for JSON objects with a common field (that tells them apart), and a TypeScript tagged union type.
The decodedCommonField is the name of the common field.
variants is an array of objects. Those objects are “fields objects” – they fit when passed to fields as well. All of those objects must have decodedCommonField as a key, and use the tag function on that key.
type Shape =
| { tag: "Circle"; radius: number }
| { tag: "Rectangle"; width: number; height: number };
const shapeCodec: Codec<Shape> = taggedUnion("tag", [
{
tag: tag("Circle"),
radius: number,
},
{
tag: tag("Rectangle"),
width: field(number, { renameFrom: "width_px" }),
height: field(number, { renameFrom: "height_px" }),
},
]);
The allowExtraFields option works just like for fields.
See also these examples:
Note: If you use the same tag value twice, the last one wins. TypeScript infers a type with two variants with the same tag (which is a valid type), but tiny-decoders can’t tell them apart. Nothing will ever decode to the first one, only the last one will succeed. Trying to encode the first one might result in bad data.
function tag<
const Decoded extends primitive,
const Encoded extends primitive,
const EncodedFieldName extends string,
>(
decoded: Decoded,
options: {
renameTagFrom?: Encoded;
renameFieldFrom?: EncodedFieldName;
} = {},
): Field<
Decoded,
Encoded,
{
renameFrom: EncodedFieldName | undefined;
tag: { decoded: primitive; encoded: primitive };
}
>;
type primitive = bigint | boolean | number | string | symbol | null | undefined;
Used with taggedUnion, once for each variant of the union.
tag("MyTag")
returns a Field
with a codec that requires the input "MyTag"
and returns "MyTag"
. The metadata of the Field
also advertises that the tag value is "MyTag"
, which taggedUnion
uses to know what to do.
tag("MyTag", { renameTagFrom: "my_tag" })
returns a Field
with a codec that requires the input "my_tag"
but returns "MyTag"
.
For renameFieldFrom
, see the Renaming union field example.
You will typically use string tags for your tagged unions, but other primitive types such as boolean
and number
are supported too.
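For example, a sketch where the JSON uses lowercase tag values while the TypeScript type uses capitalized ones (the Status type is made up):
import { type Codec, tag, taggedUnion } from "tiny-decoders";

type Status = { tag: "Active" } | { tag: "Inactive" };

const statusCodec: Codec<Status> = taggedUnion("tag", [
  { tag: tag("Active", { renameTagFrom: "active" }) },
  { tag: tag("Inactive", { renameTagFrom: "inactive" }) },
]);

// statusCodec.decoder({ tag: "active" }) gives { tag: "Active" }.
// statusCodec.encoder({ tag: "Active" }) gives { tag: "active" }.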
function tuple<const Codecs extends ReadonlyArray<Codec<any>>>(
codecs: Codecs,
): Codec<InferTuple<Codecs>, InferEncodedTuple<Codecs>>;
type InferTuple<Codecs extends ReadonlyArray<Codec<any>>> = magic;
type InferEncodedTuple<Codecs extends ReadonlyArray<Codec<any>>> = magic;
Codec for a JSON array, and a TypeScript tuple. They both must have the exact same length, otherwise the decoder fails.
Example:
type Point = [number, number];
const pointCodec: Codec<Point> = tuple([number, number]);
See the tuples example for more details.
function multi<Types extends readonly [MultiTypeName, ...Array<MultiTypeName>]>(
types: Types,
): Codec<Multi<Types[number]>, Multi<Types[number]>["value"]>;
type MultiTypeName =
| "array"
| "bigint"
| "boolean"
| "function"
| "null"
| "number"
| "object"
| "string"
| "symbol"
| "undefined";
type Multi<Types> = Types extends any
? Types extends "undefined"
? { type: "undefined"; value: undefined }
: Types extends "null"
? { type: "null"; value: null }
: Types extends "boolean"
? { type: "boolean"; value: boolean }
: Types extends "number"
? { type: "number"; value: number }
: Types extends "bigint"
? { type: "bigint"; value: bigint }
: Types extends "string"
? { type: "string"; value: string }
: Types extends "symbol"
? { type: "symbol"; value: symbol }
: Types extends "function"
? { type: "function"; value: Function }
: Types extends "array"
? { type: "array"; value: Array<unknown> }
: Types extends "object"
? { type: "object"; value: Record<string, unknown> }
: never
: never;
Codec for multiple types, and a TypeScript tagged union for those types.
This is useful for supporting stuff that can be either a string or a number, for example. It lets you do a JavaScript typeof, basically.
The type annotation for multi is a bit wacky, but it’s not that complicated to use. The types parameter is an array of strings – the wanted types. For example, you can say ["string", "number"]. Then the decoder will give you back either { type: "string", value: string } or { type: "number", value: number }. You can use map to map that to some type of choice, or flatMap to decode further.
The types strings are the same as the JavaScript typeof returns, with two exceptions:
- null is "null" instead of "object" (because typeof null === "object" is a famous mistake).
- array is "array" instead of "object" (because arrays are very common).
If you need to tell other objects apart, write a custom codec.
Example:
type Id = { tag: "Id"; id: string } | { tag: "LegacyId"; id: number };
const idCodec: Codec<Id> = map(multi(["string", "number"]), {
decoder: (value) => {
switch (value.type) {
case "string":
return { tag: "Id" as const, id: value.value };
case "number":
return { tag: "LegacyId" as const, id: value.value };
}
},
encoder: (id) => {
switch (id.tag) {
case "Id":
return { type: "string", value: id.id };
case "LegacyId":
return { type: "number", value: id.id };
}
},
});
function recursive<Decoded, Encoded>(
callback: () => Codec<Decoded, Encoded>,
): Codec<Decoded, Encoded>;
When you make a codec for a recursive data structure, you might end up with errors like:
ReferenceError: Cannot access 'myCodec' before initialization
The solution is to wrap myCodec in recursive: recursive(() => myCodec). The unnecessary-looking arrow function delays the reference to myCodec so we’re able to define it.
See the recursive example for more information.
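For example, a sketch of a codec for a tree structure where every node contains children of the same shape (the Tree type is made up):
import { type Codec, array, fields, number, recursive } from "tiny-decoders";

type Tree = {
  value: number;
  children: Array<Tree>;
};

// Without recursive, referencing treeCodec inside its own definition would
// throw "Cannot access 'treeCodec' before initialization" at runtime.
const treeCodec: Codec<Tree> = fields({
  value: number,
  children: array(recursive(() => treeCodec)),
});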
function undefinedOr<Decoded, Encoded>(
codec: Codec<Decoded, Encoded>,
): Codec<Decoded | undefined, Encoded | undefined>;
Returns a new codec that also accepts undefined.
Notes:
- Using undefinedOr does not make a field in an object optional. It only allows the field to be undefined. Similarly, using the field function to mark a field as optional does not allow setting the field to undefined, only omitting it.
- JSON does not have undefined (only null). So undefinedOr is more useful when you are decoding something that does not come from JSON. However, even when working with JSON, undefinedOr still has a use: If you infer types from codecs, using undefinedOr on object fields results in | undefined for the type of the field, which allows you to assign undefined to it, which is occasionally useful.
function nullOr<Decoded, Encoded>(
codec: Codec<Decoded, Encoded>,
): Codec<Decoded | null, Encoded | null>;
Returns a new codec that also accepts null.
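For example, a sketch of a JSON field that is either a string or null (profileCodec and its fields are made up):
import { fields, nullOr, string } from "tiny-decoders";

const profileCodec = fields({
  name: string,
  // Accepts "nickname": "Ada" as well as "nickname": null.
  nickname: nullOr(string),
});
// Inferred type: { name: string; nickname: string | null }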
function map<const Decoded, Encoded, NewDecoded>(
codec: Codec<Decoded, Encoded>,
transform: {
decoder: (value: Decoded) => NewDecoded;
encoder: (value: NewDecoded) => Readonly<Decoded>;
},
): Codec<NewDecoded, Encoded>;
Run a function (transform.decoder) after a decoder (if it succeeds). The function transforms the decoded data. transform.encoder turns the transformed data back again.
Example:
const numberSetCodec: Codec<Set<number>> = map(array(number), {
decoder: (arr) => new Set(arr),
encoder: Array.from,
});
function flatMap<const Decoded, Encoded, NewDecoded>(
codec: Codec<Decoded, Encoded>,
transform: {
decoder: (value: Decoded) => DecoderResult<NewDecoded>;
encoder: (value: NewDecoded) => Readonly<Decoded>;
},
): Codec<NewDecoded, Encoded>;
Run a function (transform.decoder) after a decoder (if it succeeds). The function decodes the decoded data further, returning another DecoderResult which is then “flattened” (so you don’t end up with a DecoderResult inside a DecoderResult). transform.encoder turns the transformed data back again.
Example:
const regexCodec: Codec<RegExp> = flatMap(string, {
decoder: (str) => {
try {
return { tag: "Valid", value: RegExp(str, "u") };
} catch (error) {
return {
tag: "DecoderError",
error: {
tag: "custom",
message: error instanceof Error ? error.message : String(error),
got: str,
path: [],
},
};
}
},
encoder: (regex) => regex.source,
});
Note: Sometimes TypeScript has trouble inferring the return type of the transform.decoder function. No matter what you do, it keeps complaining. In such cases it helps to add a return type annotation on the transform.decoder function.
type DecoderError = {
path: Array<number | string>;
orExpected?: "null or undefined" | "null" | "undefined";
} & (
| {
tag: "custom";
message: string;
got: unknown;
}
| {
tag: "exact fields";
knownFields: Array<string>;
got: Array<string>;
}
| {
tag: "missing field";
field: string;
got: Record<string, unknown>;
}
| {
tag: "tuple size";
expected: number;
got: number;
}
| {
tag: "unknown taggedUnion tag";
knownTags: Array<primitive>;
got: unknown;
}
| {
tag: "unknown multi type";
knownTypes: Array<
| "array"
| "boolean"
| "null"
| "number"
| "object"
| "string"
| "undefined"
>;
got: unknown;
}
| {
tag: "unknown primitiveUnion variant";
knownVariants: Array<primitive>;
got: unknown;
}
| {
tag: "wrong tag";
expected: primitive;
got: unknown;
}
| { tag: "array"; got: unknown }
| { tag: "bigint"; got: unknown }
| { tag: "boolean"; got: unknown }
| { tag: "number"; got: unknown }
| { tag: "object"; got: unknown }
| { tag: "string"; got: unknown }
);
type primitive = bigint | boolean | number | string | symbol | null | undefined;
The error returned by all decoders. It keeps track of where in the JSON the error occurred.
Use the format function to get a nice string explaining what went wrong.
const myCodec = array(string);
const decoderResult = myCodec.decoder(someUnknownValue);
switch (decoderResult.tag) {
case "DecoderError":
console.error(format(decoderResult.error));
break;
case "Valid":
console.log(decoderResult.value);
break;
}
When creating your own DecoderError, you probably want to do something like this:
const myError: DecoderError = {
tag: "custom", // You probably want "custom".
message: "my message", // What you expected, or what went wrong.
got: theValueYouTriedToDecode,
// Usually the empty array; put the object key or array index you’re at if
// that makes sense. This will show up as for example `At root["myKey"]`.
path: [],
};
orExpected exists so that undefinedOr and nullOr can say that undefined and/or null also are expected values.
function format(error: DecoderError, options?: ReprOptions): string;
Turn the DecoderError into a nicely formatted string. It uses repr under the hood and takes the same options.
type ReprOptions = {
depth?: number | undefined;
indent?: string | undefined;
maxArrayChildren?: number | undefined;
maxObjectChildren?: number | undefined;
maxLength?: number | undefined;
sensitive?: boolean | undefined;
};
function repr(
value: unknown,
{
depth = 0,
indent = "  ",
maxArrayChildren = 5,
maxObjectChildren = 5,
maxLength = 100,
sensitive = false,
}: ReprOptions = {},
): string;
Takes any value, and returns a string representation of it for use in error messages. format uses it behind the scenes. If you want to do your own formatting, repr can be useful.
Options:
name | type | default | description |
---|---|---|---|
depth | number | 0 | How deep to recursively call repr on array items and object values. |
indent | string | "  " (two spaces) | The indentation to use for nested values when depth is larger than 0. |
maxArrayChildren | number | 5 | The number of array items to print. |
maxObjectChildren | number | 5 | The number of object key-values to print. |
maxLength | number | 100 | The maximum length of literals, such as strings, before truncating them. |
sensitive | boolean | false | Set it to true if you deal with sensitive data to avoid leaks. See below. |
format(someDecoderError) example:
At root["details"]["ssn"]:
Expected a string
Got: 123456789
format(someDecoderError, { sensitive: true }) example:
At root["details"]["ssn"]:
Expected a string
Got: number
(Actual values are hidden in sensitive mode.)
It’s helpful when errors show you the actual values that failed decoding to make it easier to understand what happened. However, if you’re dealing with sensitive data, such as email addresses, passwords or social security numbers, you might not want that data to potentially appear in error logs.
const JSON: {
parse<Decoded>(
codec: Codec<Decoded>,
jsonString: string,
): DecoderResult<Decoded>;
stringify<Decoded, Encoded>(
codec: Codec<Decoded, Encoded>,
value: Decoded,
space?: number | string,
): string;
};
tiny-decoders exports a JSON object with parse and stringify methods, similar to the standard global JSON object. The difference is that tiny-decoders’ versions also take a Codec, which makes them safer.
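A usage sketch (personCodec and the data are made up):
import { fields, JSON, number, string } from "tiny-decoders";

const personCodec = fields({ name: string, age: number });

// Parsing validates with the codec and returns a DecoderResult.
const personResult = JSON.parse(personCodec, '{"name": "Ada", "age": 36}');

// Stringifying goes through the codec’s encoder. The optional third argument
// is the same space parameter as in the standard JSON.stringify.
const jsonString = JSON.stringify(personCodec, { name: "Ada", age: 36 }, 2);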
You can use ESLint’s no-restricted-globals rule to forbid the global JSON object, for maximum safety:
{
"rules": {
"no-restricted-globals": [
"error",
{
"name": "JSON",
"message": "Import JSON from tiny-decoders and use its JSON.parse and JSON.stringify with a codec instead."
}
]
}
}
Note
The standard JSON.stringify can return undefined (if you try to stringify undefined itself, or a function or a symbol). tiny-decoders’ JSON.stringify always returns a string – it returns "null" for undefined, functions and symbols.
Rather than first defining the type and then defining the codec (which often feels like writing the type twice), you can define only the codec and then infer the type.
const personCodec = fields({
name: string,
age: number,
});
type Person = Infer<typeof personCodec>;
// equivalent to:
type Person = {
name: string;
age: number;
};
This is a nice pattern (naming the type and the codec the same):
type Person = Infer<typeof Person>;
const Person = fields({
name: string,
age: number,
});
Note that if you don’t annotate a codec, TypeScript infers both type parameters of Codec<Decoded, Encoded>. But if you annotate it with Codec<MyType>, TypeScript does not infer Encoded – it will become unknown. If you specify one type parameter, TypeScript stops inferring them altogether and requires you to specify all of them – except the ones with defaults. Encoded defaults to unknown, which is usually fine, but occasionally you need to work with a more precise type for Encoded. Then it might even be easier to leave out the type annotation!
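A sketch illustrating the difference (versionObjectCodec is a made-up example):
import { type Codec, map, number } from "tiny-decoders";

// No annotation: both Decoded and Encoded are inferred.
const versionObjectCodec = map(number, {
  decoder: (n) => ({ version: n }),
  encoder: (obj) => obj.version,
});
// typeof versionObjectCodec is Codec<{ version: number }, number>

// Annotated with one type parameter: Encoded falls back to unknown.
const annotatedCodec: Codec<{ version: number }> = versionObjectCodec;
// typeof annotatedCodec is Codec<{ version: number }, unknown>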
See the type inference example for more details.
// 🚨 Does not exist!
function either<T, U>(codec1: Codec<T>, codec2: Codec<U>): Codec<T | U>;
The decoder of this codec would try codec1.decoder first. If it fails, go on and try codec2.decoder. If that fails, present both errors. I consider this a blunt tool.
- If you want either a string or a number, use multi. This lets you switch between any JSON types.
- For objects that can be decoded in different ways, use taggedUnion. If that’s not possible, see the untagged union example for how you can approach the problem.
The above approaches result in a much simpler DecoderError type, and also result in much better error messages, since there’s never a need to present something like “decoding failed in the following 2 ways: …”.