Using the Pglite service we can define two other services to handle read and write operations:

- `WriteApi`: used to manage state changes with schema validation and encoding
- `ReadApi`: used to read data inside loaders (read requests in the UI are handled using reactive queries)
Both services include a dependency on Pglite:
```ts
import { Data, Effect } from "effect";
import { Pglite } from "./pglite";

export class ReadApi extends Effect.Service<ReadApi>()("ReadApi", {
  effect: Effect.gen(function* () {
    const { query } = yield* Pglite;
    return {
      // API definition here 👈
    };
  }),
}) {}
```

The services use the `query` function exported from `Pglite` to execute queries:
Using effect, all errors are collected inside the `Effect` type. In this case, executing `query` adds a possible `PgliteError` to the error channel.
```ts
export class ReadApi extends Effect.Service<ReadApi>()("ReadApi", {
  effect: Effect.gen(function* () {
    const { query } = yield* Pglite;
    return {
      // 👇 `_` is the client instance using `drizzle-orm`
      getCurrentPlan: query((_) =>
        _.select().from(planTable).where(eq(planTable.isCurrent, true)).limit(1)
      ),
    };
  }),
}) {}
```

In the example, `getCurrentPlan` will return a list of `planTable` rows, or a `PgliteError` if the query fails: `Effect<(typeof planTable.$inferSelect)[], PgliteError>`.
Providing dependencies to services
Since the Pglite service has no other dependencies, we can provide it directly using `dependencies`:
An effect service defined with `Effect.Service` exports a `Default` property containing a `Layer` with the service's default instance.
```ts
export class ReadApi extends Effect.Service<ReadApi>()("ReadApi", {
  dependencies: [Pglite.Default],
  effect: Effect.gen(function* () {
    const { query } = yield* Pglite;
    return {
      // 👇 `_` is the client instance using `drizzle-orm`
      getCurrentPlan: query((_) =>
        _.select().from(planTable).where(eq(planTable.isCurrent, true)).limit(1)
      ),
    };
  }),
}) {}
```

Query validation with effect schema
We want to make sure that the data we receive from the UI is always valid before inserting it into the database. `Schema` from effect allows defining schemas with both encoding and decoding:
- Encoding contains the original type of the data (from the UI). This is the value that we collect from the user.
- Decoding is used to validate the data before inserting it into the database (adding filters, brands, etc.)
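To make the two directions concrete, here is a hand-rolled codec in plain TypeScript. It is a simplified stand-in for what `Schema` derives automatically, and every name in it is illustrative:

```ts
// Encoded side: what the UI produces (e.g. form inputs are strings).
interface FoodFormInput {
  readonly name: string;
  readonly calories: string;
}

// Decoded side: validated data ready for the database.
interface FoodValidated {
  readonly name: string;     // non-empty, trimmed
  readonly calories: number; // positive float
}

// Decode: validate the raw UI value, or fail.
const decodeFood = (input: FoodFormInput): FoodValidated => {
  const name = input.name.trim();
  if (name.length === 0) throw new Error("name must be non-empty");
  const calories = Number(input.calories);
  if (!Number.isFinite(calories) || calories <= 0)
    throw new Error("calories must be a positive number");
  return { name, calories };
};

// Encode: go back to the original representation.
const encodeFood = (food: FoodValidated): FoodFormInput => ({
  name: food.name,
  calories: String(food.calories),
});
```

With `Schema` both directions come from a single declaration, and failures are reported as typed errors instead of thrown exceptions.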
For each table of the database we define a separate file containing all the schemas (insert, update, remove, etc.):
The project also contains a `shared.ts` file with common schemas (like `PrimaryKeyIndex` and `EmptyStringAsUndefined`).
```ts
import { Schema } from "effect";
import {
  EmptyStringAsUndefined,
  FloatQuantityInsert,
  FloatQuantityInsertPositive,
  FloatQuantityOrUndefined,
  PrimaryKeyIndex,
} from "./shared";

export class FoodInsert extends Schema.Class<FoodInsert>("FoodInsert")({
  name: Schema.NonEmptyString,
  brand: EmptyStringAsUndefined,
  calories: FloatQuantityInsertPositive,
  carbohydrates: FloatQuantityInsert,
  proteins: FloatQuantityInsert,
  fats: FloatQuantityInsert,
  fatsSaturated: FloatQuantityOrUndefined,
  salt: FloatQuantityOrUndefined,
  fibers: FloatQuantityOrUndefined,
  sugars: FloatQuantityOrUndefined,
}) {}

export class FoodUpdate extends Schema.Class<FoodUpdate>("FoodUpdate")({
  id: PrimaryKeyIndex,
  name: Schema.NonEmptyString,
  brand: EmptyStringAsUndefined,
  calories: FloatQuantityInsert,
  carbohydrates: FloatQuantityInsert,
  proteins: FloatQuantityInsert,
  fats: FloatQuantityInsert,
  fatsSaturated: FloatQuantityOrUndefined,
  salt: FloatQuantityOrUndefined,
  fibers: FloatQuantityOrUndefined,
  sugars: FloatQuantityOrUndefined,
}) {}

export class FoodSelect extends Schema.Class<FoodSelect>("FoodSelect")({
  id: PrimaryKeyIndex,
  name: Schema.NonEmptyString,
  brand: Schema.NullOr(Schema.NonEmptyString),
  calories: FloatQuantityInsert,
  carbohydrates: FloatQuantityInsert,
  proteins: FloatQuantityInsert,
  fats: FloatQuantityInsert,
  fatsSaturated: FloatQuantityOrUndefined,
  salt: FloatQuantityOrUndefined,
  fibers: FloatQuantityOrUndefined,
  sugars: FloatQuantityOrUndefined,
}) {}
```

Inside `WriteApi` we implement a common middleware function that decodes the data (validation step) and then encodes it before executing the query:
By using `flow` we don't even need to manually define the parameters for each function. Those are automatically extracted from the schema 🪄
```ts
export class WriteApi extends Effect.Service<WriteApi>()("WriteApi", {
  dependencies: [Pglite.Default],
  effect: Effect.gen(function* () {
    const { query } = yield* Pglite;

    // 👇 Common middleware function to decode and encode data
    const execute = <A, I, T, E>(
      schema: Schema.Schema<A, I>,
      exec: (values: I) => Effect.Effect<T, E>
    ) =>
      flow(
        // 👇 Decode the data
        Schema.decode(schema),
        // 👇 Encode the data (if decode succeeds)
        Effect.flatMap(Schema.encode(schema)),
        Effect.tap((encoded) => Effect.log("Insert", encoded)),
        Effect.mapError((error) => new WriteApiError({ cause: error })),
        // 👇 Execute the query
        Effect.flatMap(exec)
      );

    return {
      // 👇 Each function uses `execute` before executing the query
      createFood: execute(FoodInsert, (values) =>
        query((_) => _.insert(foodTable).values(values))
      ),
      // ...
    };
  }),
}) {}
```

This encapsulates all the complexity inside `WriteApi`, so that calling the API becomes as simple as calling a function, and all the validation and encoding is handled automatically.
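The shape of that middleware can be sketched without effect as a higher-order function that chains decode, encode, and the final query. Everything below (`execute`, the hypothetical `FoodInsert` codec, the fake query) is an illustrative, synchronous stand-in:

```ts
// A simplified stand-in for the `execute` middleware: it turns a codec
// plus a query into a single function from raw input to result.
const execute = <A, I, T>(
  schema: { decode: (input: I) => A; encode: (value: A) => I },
  exec: (values: I) => T
) =>
  (input: I): T => {
    const decoded = schema.decode(input);   // validation step
    const encoded = schema.encode(decoded); // back to the wire format
    return exec(encoded);                   // execute the query
  };

// Hypothetical codec: decode trims and validates, encode is the identity.
const FoodInsert = {
  decode: (i: { name: string }) => {
    if (i.name.trim() === "") throw new Error("name must be non-empty");
    return { name: i.name.trim() };
  },
  encode: (v: { name: string }) => v,
};

// The resulting API is "just a function" taking the raw UI values.
const createFood = execute(FoodInsert, (values) => `INSERT ${values.name}`);
```

This mirrors how `flow` lets the parameter type of `createFood` fall out of the schema: the caller never interacts with the decode/encode steps directly.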
It's also possible to collect both `ReadApi` and `WriteApi` inside the `Pglite` service. Having separate services (`ReadApi`/`WriteApi`) makes it easier to test each service independently, but it requires managing and providing two more services.
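That testing benefit can be sketched in plain TypeScript: a stub satisfying the read service's interface replaces the Pglite-backed implementation without touching the write side. All names below are illustrative, standing in for effect's `Layer`-based substitution:

```ts
interface Plan {
  readonly id: number;
  readonly isCurrent: boolean;
}

// The interface a loader depends on, not a concrete database.
interface ReadApi {
  getCurrentPlan: () => Plan[];
}

// Code under test only sees the interface.
const loadDashboard = (api: ReadApi): string => {
  const [plan] = api.getCurrentPlan();
  return plan ? `plan #${plan.id}` : "no plan";
};

// In a test, provide a stub instead of the real Pglite-backed service.
const stubReadApi: ReadApi = {
  getCurrentPlan: () => [{ id: 7, isCurrent: true }],
};
```

With effect the same substitution happens by providing a test `Layer` for just the service under test, which is exactly what keeping `ReadApi` and `WriteApi` separate buys.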
