Introduction
What is Nova?
Nova is a web framework for Erlang/OTP. It handles routing, request processing, template rendering, sessions, and WebSockets — the core pieces you need to build web applications and APIs. Nova sits on top of Cowboy, the battle-tested Erlang HTTP server, and adds a structured layer for organizing your application.
Who this book is for
This book is for anyone who wants to build web applications with Erlang — whether you are an experienced developer exploring a new stack or a newcomer picking up Erlang for the first time. If you have built anything with another web framework (Express, Rails, Django, Phoenix, etc.) you will feel right at home, but it is not a requirement. Basic familiarity with HTTP and databases is enough to get started.
No prior Erlang experience is needed. The Erlang Essentials appendix covers the language fundamentals you will use throughout the book, and Learn You Some Erlang is an excellent free companion if you want a deeper introduction. You can start the book right away and refer back to these resources as you go.
What you'll build
Throughout this book you will build a blog platform step by step:
- A Nova application from scratch — project structure, routing, and your first controller
- An HTML frontend — login page, views with ErlyDTL templates, authentication and sessions
- A database layer with Kura — schemas, migrations, changesets, and a repository for PostgreSQL
- A JSON API — RESTful endpoints with code generators, associations, preloading, and embedded schemas
- Real-time features — WebSockets and pub/sub for a live comment feed
- Production concerns — transactions, bulk operations, error handling, and deployment
- Developer tooling — OpenAPI documentation, security audits, custom plugins, and OpenTelemetry
The blog has users who write posts, readers who leave comments, and tags for organizing content. This naturally exercises Kura's key features: schemas with associations, enum types (post status), embedded schemas (post metadata as JSONB), changesets with validation, many-to-many relationships (posts and tags), transactions, and bulk operations.
Before starting, make sure you have:
- Erlang/OTP 27+ — install via mise (recommended), asdf, or your system package manager
- Rebar3 — the Erlang build tool, also installable via mise/asdf
- Docker — for running PostgreSQL (we use Docker Compose throughout)
- A text editor and a terminal
See the Erlang Essentials appendix for detailed setup instructions.
How to read this book
The chapters are designed to be read in order. Each one builds on the previous — the application grows progressively from a bare project to a full-featured, deployed service. Code examples accumulate, so what you build in Chapter 2 is extended in Chapter 6 and deployed in Chapter 17.
If you are already familiar with Nova, you can jump to specific chapters. The Cheat Sheet appendix is a useful standalone reference.
Let's get started by creating your first Nova application.
Create a New Application
The fastest way to get started with Nova is the rebar3_nova plugin. It provides project templates that scaffold a complete, runnable Nova application.
Installing the rebar3 plugin
Run the installer script to set up rebar3_nova:
sh -c "$(curl -fsSL https://raw.githubusercontent.com/novaframework/rebar3_nova/master/install.sh)"
This checks for rebar3 (installing it if needed) and adds the rebar3_nova plugin to your global rebar3 config.
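Concretely, the installer leaves an entry along these lines in your global rebar3 config (typically ~/.config/rebar3/rebar.config on Linux and macOS; the exact entry may pin a version, so treat this as illustrative):

```erlang
%% ~/.config/rebar3/rebar.config
{plugins, [rebar3_nova]}.
```

With the plugin in the global config, every project on your machine gains the nova template and commands without per-project setup.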
Creating a new project
Rebar3's new command generates project scaffolding. With the Nova plugin installed, you have a nova template:
rebar3 new nova blog
This creates a directory with everything needed for a running Nova application:
===> Writing blog/config/dev_sys.config.src
===> Writing blog/config/prod_sys.config.src
===> Writing blog/src/blog.app.src
===> Writing blog/src/blog_app.erl
===> Writing blog/src/blog_sup.erl
===> Writing blog/src/blog_router.erl
===> Writing blog/src/controllers/blog_main_controller.erl
===> Writing blog/rebar.config
===> Writing blog/config/vm.args.src
===> Writing blog/priv/assets/favicon.ico
===> Writing blog/src/views/blog_main.dtl
===> Writing blog/.tool-versions
===> Writing blog/.gitignore
The generated .tool-versions file works with mise and asdf. Run mise install or asdf install to get the exact Erlang and rebar3 versions for this project.
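The file itself is just one version pin per line (the versions shown here are illustrative — yours will match the template release):

```
erlang 27.2
rebar 3.24.0
```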
Project structure
Here is what was generated:
- src/ — Your source code
  - src/controllers/ — Controller modules that handle request logic
  - src/views/ — ErlyDTL (Django-style) templates for HTML rendering
  - blog_router.erl — Route definitions
  - blog_app.erl — OTP application callback
  - blog_sup.erl — Supervisor
- config/ — Configuration files
  - dev_sys.config.src — Development config (used by rebar3 shell)
  - prod_sys.config.src — Production config (used in releases)
  - vm.args.src — Erlang VM arguments
- rebar.config — Build configuration, dependencies, and release settings
Running the application
Start the development server:
cd blog
rebar3 nova serve
This compiles your code, starts an Erlang shell, and watches for file changes — when you save a file, it is automatically recompiled and reloaded. No restart needed.
rebar3 nova serve requires enotify. On Linux, install inotify-tools from your package manager. On macOS, fsevent is used automatically.
If enotify is not available, use rebar3 shell instead. It works the same but without automatic recompilation.
Once the node is up, open your browser to http://localhost:8080. You should see the Nova welcome page.
You can also verify the application is running with curl:
curl -v localhost:8080/heartbeat
A 200 OK response means everything is working.
Listing routes
To see all registered routes:
rebar3 nova routes
Host: '_'
├─ /assets
└─ _ /[...] (blog, cowboy_static:init/1)
└─ GET / (blog, blog_main_controller:index/1)
This shows the static asset handler and the index route that renders the welcome page.
Now that you have a running application, let's look at how routing works in Nova.
Routing
In the previous chapter we created a Nova application and saw it running. Now let's understand how requests are matched to controller functions.
The router module
When Nova generated our project, it created blog_router.erl:
-module(blog_router).
-behaviour(nova_router).

-export([
    routes/1
]).

routes(_Environment) ->
    [#{prefix => "",
       security => false,
       routes => [
           {"/", fun blog_main_controller:index/1, #{methods => [get]}},
           {"/heartbeat", fun(_) -> {status, 200} end, #{methods => [get]}}
       ]
     }].
The routes/1 function returns a list of route groups. Each group is a map with these keys:
| Key | Description |
|---|---|
| prefix | Path prefix prepended to all routes in this group |
| security | false or a fun reference to a security module |
| routes | List of route tuples |
Each route tuple has the form {Path, Handler, Options}:
- Path — the URL pattern (e.g. "/users/:id")
- Handler — a fun reference like fun Module:Function/1
- Options — a map, typically #{methods => [get, post, ...]}
Adding a route
Let's add a login page route:
routes(_Environment) ->
    [#{prefix => "",
       security => false,
       routes => [
           {"/", fun blog_main_controller:index/1, #{methods => [get]}},
           {"/heartbeat", fun(_) -> {status, 200} end, #{methods => [get]}},
           {"/login", fun blog_main_controller:login/1, #{methods => [get]}}
       ]
     }].
We will implement the login/1 function in the Views, Auth & Sessions chapter.
Prefixes for grouping
The prefix key groups related routes under a common path. For example, to build an API:
#{prefix => "/api/v1",
  security => false,
  routes => [
      {"/users", fun blog_api_controller:list_users/1, #{methods => [get]}},
      {"/users/:id", fun blog_api_controller:get_user/1, #{methods => [get]}}
  ]
}
These routes become /api/v1/users and /api/v1/users/:id.
Environment-based routing
The routes/1 function receives the environment atom configured in sys.config (dev or prod). You can use pattern matching to add development-only routes:
routes(prod) ->
    prod_routes();
routes(dev) ->
    prod_routes() ++ dev_routes().

prod_routes() ->
    [#{prefix => "",
       security => false,
       routes => [
           {"/", fun blog_main_controller:index/1, #{methods => [get]}},
           {"/heartbeat", fun(_) -> {status, 200} end, #{methods => [get]}}
       ]
     }].

dev_routes() ->
    [#{prefix => "",
       security => false,
       routes => [
           {"/dev-tools", fun blog_dev_controller:index/1, #{methods => [get]}}
       ]
     }].
rebar3 nova routes shows production routes only. Development-only routes won't appear in the output.
Route parameters
Path segments starting with : are captured as bindings:
{"/users/:id", fun my_controller:show/1, #{methods => [get]}}
In the controller, access bindings from the request map:
show(#{bindings := #{<<"id">> := Id}}) ->
    {json, #{id => binary_to_integer(Id)}}.
Inline handlers
For simple responses you can use an anonymous function directly in the route:
{"/heartbeat", fun(_) -> {status, 200} end, #{methods => [get]}}
This is useful for health checks and other trivial endpoints.
Next, let's look at plugins — the middleware layer that processes requests before and after your controllers.
Plugins
Plugins are Nova's middleware system. They run code before and after your controller handles a request — useful for decoding request bodies, adding headers, logging, rate limiting, and more.
How the plugin pipeline works
Every HTTP request flows through a pipeline:
- Pre-request plugins run in order (lowest priority number first)
- The controller handles the request
- Post-request plugins run in order
A plugin module implements the nova_plugin behaviour and exports pre_request/4, post_request/4, and plugin_info/0.
Here is an example — the nova_correlation_plugin that ships with Nova:
-module(nova_correlation_plugin).
-behaviour(nova_plugin).

-export([pre_request/4,
         post_request/4,
         plugin_info/0]).

pre_request(Req0, _Env, Opts, State) ->
    CorrId = get_correlation_id(Req0, Opts),
    ok = update_logger_metadata(CorrId, Opts),
    Req1 = cowboy_req:set_resp_header(<<"X-Correlation-ID">>, CorrId, Req0),
    Req = Req1#{correlation_id => CorrId},
    {ok, Req, State}.

post_request(Req, _Env, _Opts, State) ->
    {ok, Req, State}.

plugin_info() ->
    #{title => <<"nova_correlation_plugin">>,
      version => <<"0.2.0">>,
      url => <<"https://github.com/novaframework/nova">>,
      authors => [<<"Nova team">>],
      description => <<"Add X-Correlation-ID headers to response">>}.
The pre_request callback picks up or generates a correlation ID and adds it to both the response headers and the request map. post_request is a no-op here. The State argument is global plugin state — see Custom Plugins for details on managing it with init/0 and stop/1.
Configuring plugins
Plugins are configured in sys.config under the nova application key:
{nova, [
    {plugins, [
        {pre_request, nova_request_plugin, #{decode_json_body => true}}
    ]}
]}
Each plugin entry is a tuple: {Phase, Module, Options} where Phase is pre_request or post_request.
nova_request_plugin is a built-in plugin that handles request body decoding. The options map controls what it decodes.
Setting up for our login form
In the next chapter we will build a login form that sends URL-encoded data. To have Nova decode this automatically, update the plugin config in dev_sys.config.src:
{plugins, [
    {pre_request, nova_request_plugin, #{read_urlencoded_body => true}}
]}
With this setting, form POST data is decoded and placed in the params key of the request map, ready for your controller to use.
You can enable multiple decoders at once. We will add decode_json_body => true later when we build our JSON API.
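When we get there, the entry will simply carry both flags:

```erlang
{plugins, [
    {pre_request, nova_request_plugin, #{
        read_urlencoded_body => true,
        decode_json_body => true
    }}
]}
```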
Built-in plugins
Nova ships with several plugins. See the Nova documentation for the full list.
For now, the key one is nova_request_plugin — it handles JSON body decoding, URL-encoded body decoding, and multipart uploads.
With plugins configured to decode form data, we can now build our first view and login page.
Views, Auth & Sessions
In this chapter we will build a login page with ErlyDTL templates, add authentication to protect routes, and wire up sessions so users stay logged in across requests.
Views with ErlyDTL
Nova uses ErlyDTL for HTML templating — an Erlang implementation of Django's template language. Templates live in src/views/ and are compiled to Erlang modules at build time.
Creating a login template
Create src/views/login.dtl:
<html>
  <body>
    <div>
      {% if error %}<p style="color:red">{{ error }}</p>{% endif %}
      <form action="/login" method="post">
        <label for="username">Username:</label>
        <input type="text" id="username" name="username"><br>
        <label for="password">Password:</label>
        <input type="password" id="password" name="password"><br>
        <input type="submit" value="Submit">
      </form>
    </div>
  </body>
</html>
This form POSTs to /login with username and password fields. The URL-encoded body will be decoded by nova_request_plugin (which we configured in the Plugins chapter).
Adding a controller function
Our generated controller is in src/controllers/blog_main_controller.erl:
-module(blog_main_controller).

-export([
    index/1,
    login/1
]).

index(_Req) ->
    {ok, [{message, "Hello world!"}]}.

login(_Req) ->
    {ok, [], #{view => login}}.
The return tuple {ok, [], #{view => login}} tells Nova:
- ok — render a template
- [] — no template variables
- #{view => login} — use the login template (matches login.dtl)
How template resolution works
When a controller returns {ok, Variables} (without a view option), Nova looks for a template named after the controller module. For blog_main_controller:index/1, it looks for blog_main.dtl.
When you specify #{view => login}, Nova uses login.dtl instead.
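Side by side, the two resolution modes from our generated controller:

```erlang
%% No view option: blog_main_controller renders blog_main.dtl
index(_Req) ->
    {ok, [{message, "Hello world!"}]}.

%% Explicit view option: renders login.dtl regardless of the module name
login(_Req) ->
    {ok, [], #{view => login}}.
```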
Authentication
Now let's handle the login form submission with a security module.
Security in route groups
Authentication in Nova is configured per route group using the security key. It points to a function that receives the request and returns either {true, AuthData} (allow) or false (deny).
Creating a security module
Create src/blog_auth.erl:
-module(blog_auth).

-export([
    username_password/1,
    session_auth/1
]).

%% Used for the login POST
username_password(#{params := Params}) ->
    case Params of
        #{<<"username">> := Username,
          <<"password">> := <<"password">>} ->
            {true, #{authed => true, username => Username}};
        _ ->
            false
    end.

%% Used for pages that need an active session
session_auth(Req) ->
    case nova_session:get(Req, <<"username">>) of
        {ok, Username} ->
            {true, #{authed => true, username => Username}};
        {error, _} ->
            false
    end.
username_password/1 checks the decoded form parameters. If the password matches, it returns {true, AuthData} — the auth data map is attached to the request and accessible in your controller as auth_data.
session_auth/1 checks for an existing session (we will set this up next).
This is a hardcoded password for demonstration only. In a real application you would validate credentials against a database with properly hashed passwords.
How security works
The security flow for each request is:
- Nova matches the request to a route group
- If security is false, skip to the controller
- If security is a function, call it with the request map
- If it returns {true, AuthData}, merge auth_data => AuthData into the request and continue to the controller
- If it returns false, trigger the 401 error handler
You can have different security functions for different route groups — one for API token auth, another for session auth, and so on.
Sessions
Nova has a built-in session system backed by ETS (Erlang Term Storage). Session IDs are stored in a session_id cookie.
The session API
nova_session:get(Req, <<"key">>) -> {ok, Value} | {error, not_found}.
nova_session:set(Req, <<"key">>, Value) -> ok.
nova_session:delete(Req) -> {ok, Req1}.
nova_session:delete(Req, <<"key">>) -> {ok, Req1}.
nova_session:generate_session_id() -> {ok, SessionId}.
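As a sketch of this API in use, a handler could keep a per-session counter (the visits key and the handler itself are invented for illustration):

```erlang
%% Increment a counter stored in the caller's session.
visits(Req) ->
    Count = case nova_session:get(Req, <<"visits">>) of
                {ok, N} -> N + 1;
                {error, not_found} -> 1
            end,
    ok = nova_session:set(Req, <<"visits">>, Count),
    {json, #{visits => Count}}.
```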
The session manager is configured in sys.config:
{nova, [
    {session_manager, nova_session_ets}
]}
nova_session_ets is the default. It stores session data in an ETS table and replicates changes across clustered nodes using nova_pubsub.
Wiring up the login flow
Update the controller to create a session on successful login:
-module(blog_main_controller).

-export([
    index/1,
    login/1,
    login_post/1,
    logout/1
]).

index(#{auth_data := #{authed := true, username := Username}}) ->
    {ok, [{message, <<"Hello ", Username/binary>>}]};
index(_Req) ->
    {redirect, "/login"}.

login(_Req) ->
    {ok, [], #{view => login}}.

login_post(#{auth_data := #{authed := true, username := Username}} = Req) ->
    {ok, SessionId} = nova_session:generate_session_id(),
    Req1 = cowboy_req:set_resp_cookie(<<"session_id">>, SessionId, Req,
                                      #{path => <<"/">>, http_only => true}),
    nova_session_ets:set_value(SessionId, <<"username">>, Username),
    {redirect, "/"};
login_post(_Req) ->
    {ok, [{error, <<"Invalid username or password">>}], #{view => login}}.

logout(Req) ->
    {ok, _Req1} = nova_session:delete(Req),
    {redirect, "/login"}.
The login flow:
- Generate a session ID
- Set the session_id cookie on the response
- Store the username in the session
- Redirect to the home page
Updating the routes
routes(_Environment) ->
    [
     %% Public routes
     #{prefix => "",
       security => false,
       routes => [
           {"/login", fun blog_main_controller:login/1, #{methods => [get]}},
           {"/heartbeat", fun(_) -> {status, 200} end, #{methods => [get]}}
       ]
     },
     %% Login POST (uses username/password auth)
     #{prefix => "",
       security => fun blog_auth:username_password/1,
       routes => [
           {"/login", fun blog_main_controller:login_post/1, #{methods => [post]}}
       ]
     },
     %% Protected pages (uses session auth)
     #{prefix => "",
       security => fun blog_auth:session_auth/1,
       routes => [
           {"/", fun blog_main_controller:index/1, #{methods => [get]}},
           {"/logout", fun blog_main_controller:logout/1, #{methods => [get]}}
       ]
     }
    ].
Now the flow is:
- User visits /login — sees the login form
- Form POSTs to /login — username_password/1 checks credentials
- On success, a session is created and the user is redirected to /
- On /, session_auth/1 checks the session cookie
- /logout deletes the session and redirects to /login
Cookie options
When setting the session cookie, control its behaviour with options:
cowboy_req:set_resp_cookie(<<"session_id">>, SessionId, Req, #{
    path => <<"/">>,    %% Cookie is valid for all paths
    http_only => true,  %% Not accessible from JavaScript
    secure => true,     %% Only sent over HTTPS
    max_age => 86400    %% Expires after 24 hours (in seconds)
}).
Custom session backends
If you want to store sessions in a database or Redis instead of ETS, implement the nova_session behaviour:
-module(my_redis_session).
-behaviour(nova_session).

-export([start_link/0,
         get_value/2,
         set_value/3,
         delete_value/1,
         delete_value/2]).

start_link() ->
    %% Start your Redis client or connection pool here;
    %% return ignore if no extra process is needed.
    ignore.

get_value(_SessionId, _Key) ->
    %% Look the key up in Redis and return {ok, Value};
    %% placeholder until the Redis calls are filled in.
    {error, not_found}.

set_value(_SessionId, _Key, _Value) ->
    %% Write the value to Redis.
    ok.

delete_value(_SessionId) ->
    %% Remove the whole session.
    ok.

delete_value(_SessionId, _Key) ->
    %% Remove a single key from the session.
    ok.
Then configure it:
{nova, [
{session_manager, my_redis_session}
]}
We now have a complete authentication and session system. Next, let's set up a database layer with Kura.
Database Setup
Nova does not include a built-in database layer — by design, you choose what fits your project. We will use Kura, an Ecto-inspired database abstraction for Erlang that targets PostgreSQL. Kura gives you schemas, changesets, a query builder, and migrations — no raw SQL required.
Adding dependencies
Add kura and the rebar3_kura plugin to rebar.config:
{deps, [
    nova,
    {flatlog, "0.1.2"},
    {kura, "~> 1.0"}
]}.

{plugins, [
    rebar3_nova,
    {rebar3_kura, "~> 1.0"}
]}.
Also add kura to your application dependencies in src/blog.app.src:
{applications,
 [kernel,
  stdlib,
  nova,
  kura
 ]},
Setting up the repository
The rebar3_kura plugin provides a setup command that generates a repository module:
rebar3 kura setup --name blog_repo
This creates src/blog_repo.erl — a module that wraps all database operations:
-module(blog_repo).
-behaviour(kura_repo).

-export([config/0, start/0, all/1, get/2, get_by/2, one/1,
         insert/1, insert/2, update/1, delete/1,
         update_all/2, delete_all/1, insert_all/2,
         preload/3, transaction/1, multi/1, query/2]).

config() ->
    #{pool => blog_repo,
      database => <<"blog_dev">>,
      hostname => <<"localhost">>,
      port => 5432,
      username => <<"postgres">>,
      password => <<>>,
      pool_size => 10}.

start() -> kura_repo_worker:start(?MODULE).

all(Q) -> kura_repo_worker:all(?MODULE, Q).
get(Schema, Id) -> kura_repo_worker:get(?MODULE, Schema, Id).
get_by(Schema, Clauses) -> kura_repo_worker:get_by(?MODULE, Schema, Clauses).
one(Q) -> kura_repo_worker:one(?MODULE, Q).
insert(CS) -> kura_repo_worker:insert(?MODULE, CS).
insert(CS, Opts) -> kura_repo_worker:insert(?MODULE, CS, Opts).
update(CS) -> kura_repo_worker:update(?MODULE, CS).
delete(CS) -> kura_repo_worker:delete(?MODULE, CS).
update_all(Q, Updates) -> kura_repo_worker:update_all(?MODULE, Q, Updates).
delete_all(Q) -> kura_repo_worker:delete_all(?MODULE, Q).
insert_all(Schema, Entries) -> kura_repo_worker:insert_all(?MODULE, Schema, Entries).
preload(Schema, Records, Assocs) -> kura_repo_worker:preload(?MODULE, Schema, Records, Assocs).
transaction(Fun) -> kura_repo_worker:transaction(?MODULE, Fun).
multi(Multi) -> kura_repo_worker:multi(?MODULE, Multi).
query(SQL, Params) -> kura_repo_worker:query(?MODULE, SQL, Params).
Every function delegates to kura_repo_worker with the repo module as the first argument. The config/0 callback tells Kura how to connect to PostgreSQL. Note that the generated password is empty — change it to <<"postgres">> so it matches the POSTGRES_PASSWORD we will set in Docker Compose below.
The setup command also creates src/migrations/ for migration files.
PostgreSQL with Docker Compose
Create docker-compose.yml in your project root:
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: blog_dev
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
Start it:
docker compose up -d
Configuring the repo
The repo module reads its connection settings from config/0, so sys.config needs no database entries (though you can move them there if you prefer environment variable substitution). For reference, config/dev_sys.config.src now looks like this:
[
 {kernel, [
     {logger_level, debug},
     {logger, [
         {handler, default, logger_std_h,
          #{formatter => {flatlog, #{
                map_depth => 3,
                term_depth => 50,
                colored => true,
                template => [colored_start, "[\033[1m", level, "\033[0m",
                             colored_start, "] ", msg, "\n", colored_end]
            }}}}
     ]}
 ]},
 {nova, [
     {use_stacktrace, true},
     {environment, dev},
     {cowboy_configuration, #{port => 8080}},
     {dev_mode, true},
     {bootstrap_application, blog},
     {plugins, [
         {pre_request, nova_request_plugin, #{
             read_urlencoded_body => true,
             decode_json_body => true
         }}
     ]}
 ]}
].
Starting the repo in the supervisor
The repo needs to be started when your application boots. Add it to your supervisor in src/blog_sup.erl:
-module(blog_sup).
-behaviour(supervisor).

-export([start_link/0]).
-export([init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
    blog_repo:start(),
    {ok, {#{strategy => one_for_one, intensity => 5, period => 10}, []}}.
blog_repo:start() creates the pgo connection pool using the config from config/0.
Adding the rebar3_kura compile hook
To get automatic migration generation (covered in the next chapter), add a provider hook to rebar.config:
{provider_hooks, [
    {post, [{compile, {kura, compile}}]}
]}.
This runs rebar3 kura compile after every rebar3 compile, scanning your schemas and generating migrations for any changes.
Verifying the connection
Start the development server:
rebar3 nova serve
You should see the application start without errors. If the database is unreachable, you will see a connection error in the logs. Verify from the shell:
1> blog_repo:query("SELECT 1", []).
{ok, #{command => select, num_rows => 1, rows => [{1}]}}
Two commands and you have a database layer.
Now let's define our first schemas and watch Kura generate migrations automatically in Schemas and Migrations.
Schemas and Migrations
In the previous chapter we set up the database connection and repo. Now let's define schemas — Erlang modules that describe your data — and watch Kura generate migrations automatically.
Defining the user schema
Create src/schemas/user.erl:
-module(user).
-behaviour(kura_schema).

-include_lib("kura/include/kura.hrl").

-export([table/0, fields/0, primary_key/0]).

table() -> <<"users">>.

primary_key() -> id.

fields() ->
    [
     #kura_field{name = id, type = id, primary_key = true, nullable = false},
     #kura_field{name = username, type = string, nullable = false},
     #kura_field{name = email, type = string, nullable = false},
     #kura_field{name = password_hash, type = string, nullable = false},
     #kura_field{name = inserted_at, type = utc_datetime},
     #kura_field{name = updated_at, type = utc_datetime}
    ].
A schema module implements the kura_schema behaviour and exports three required callbacks:
- table/0 — the PostgreSQL table name
- primary_key/0 — the primary key field name
- fields/0 — a list of #kura_field{} records describing each column
Each field has a name (atom), type (one of Kura's types), and optional properties like nullable and default.
Kura field types
| Type | PostgreSQL | Erlang |
|---|---|---|
| id | BIGSERIAL | integer |
| integer | INTEGER | integer |
| float | DOUBLE PRECISION | float |
| string | VARCHAR(255) | binary |
| text | TEXT | binary |
| boolean | BOOLEAN | boolean |
| date | DATE | {Y, M, D} |
| utc_datetime | TIMESTAMP | {{Y,M,D},{H,Mi,S}} |
| uuid | UUID | binary |
| jsonb | JSONB | map/list |
| {enum, [atoms]} | VARCHAR(255) | atom |
| {array, Type} | Type[] | list |
Auto-generating migrations
With the rebar3_kura compile hook we added in the previous chapter, compile the project:
rebar3 compile
===> [kura] Schema diff detected changes
===> [kura] Generated src/migrations/m20260223120000_create_users.erl
===> Compiling blog
Kura compared your schema definitions against the current database state (no migrations yet = empty database) and generated a migration file.
Walking through the migration
Open the generated file in src/migrations/:
-module(m20260223120000_create_users).
-behaviour(kura_migration).

-include_lib("kura/include/kura.hrl").

-export([up/0, down/0]).

up() ->
    [{create_table, <<"users">>, [
         #kura_column{name = id, type = id, primary_key = true, nullable = false},
         #kura_column{name = username, type = string, nullable = false},
         #kura_column{name = email, type = string, nullable = false},
         #kura_column{name = password_hash, type = string, nullable = false},
         #kura_column{name = inserted_at, type = utc_datetime},
         #kura_column{name = updated_at, type = utc_datetime}
     ]}].

down() ->
    [{drop_table, <<"users">>}].
The migration has two functions:
- up/0 — returns operations to apply (create the table)
- down/0 — returns operations to reverse (drop the table)
Migration files are named with a timestamp prefix so they run in order.
Defining the post schema
Now let's add a post schema with an enum type for status. Create src/schemas/post.erl:
-module(post).
-behaviour(kura_schema).

-include_lib("kura/include/kura.hrl").

-export([table/0, fields/0, primary_key/0]).

table() -> <<"posts">>.

primary_key() -> id.

fields() ->
    [
     #kura_field{name = id, type = id, primary_key = true, nullable = false},
     #kura_field{name = title, type = string, nullable = false},
     #kura_field{name = body, type = text},
     #kura_field{name = status, type = {enum, [draft, published, archived]}, default = <<"draft">>},
     #kura_field{name = user_id, type = integer},
     #kura_field{name = inserted_at, type = utc_datetime},
     #kura_field{name = updated_at, type = utc_datetime}
    ].
The status field uses an enum type — Kura stores it as VARCHAR(255) in PostgreSQL but casts between atoms and binaries automatically. When you query a post, status comes back as an atom (draft, published, or archived).
Compile again:
rebar3 compile
===> [kura] Schema diff detected changes
===> [kura] Generated src/migrations/m20260223120100_create_posts.erl
===> Compiling blog
A second migration appears for the posts table.
Running migrations
Kura runs migrations when the repo starts. On application boot, blog_repo:start() checks the schema_migrations table and runs any pending migrations in order.
Start the application:
rebar3 nova serve
Check the logs — you should see the migrations being applied:
[info] [kura] Running migration: m20260223120000_create_users
[info] [kura] Running migration: m20260223120100_create_posts
The schema_migrations table
Kura creates a schema_migrations table to track which migrations have been applied:
blog_dev=# SELECT * FROM schema_migrations;
version | inserted_at
--------------------+-------------------
20260223120000 | 2026-02-23 12:00:00
20260223120100 | 2026-02-23 12:01:00
Each row records a migration version (the timestamp from the filename). Kura only runs migrations that are not in this table.
Modifying schemas
When you change a schema — add a field, remove one, or change a type — Kura detects the difference on the next compile and generates an alter_table migration.
For example, add a bio field to the user schema:
fields() ->
    [
     #kura_field{name = id, type = id, primary_key = true, nullable = false},
     #kura_field{name = username, type = string, nullable = false},
     #kura_field{name = email, type = string, nullable = false},
     #kura_field{name = password_hash, type = string, nullable = false},
     #kura_field{name = bio, type = text},
     #kura_field{name = inserted_at, type = utc_datetime},
     #kura_field{name = updated_at, type = utc_datetime}
    ].
Compile:
rebar3 compile
===> [kura] Schema diff detected changes
===> [kura] Generated src/migrations/m20260223120200_alter_users.erl
The generated migration adds the column:
up() ->
    [{alter_table, <<"users">>, [
         {add_column, #kura_column{name = bio, type = text}}
     ]}].

down() ->
    [{alter_table, <<"users">>, [
         {drop_column, bio}
     ]}].
Define your schema, compile, migration appears. No SQL files to maintain.
Now that we have tables, let's learn about changesets and validation — how Kura validates and tracks data changes before they hit the database.
Changesets and Validation
In the previous chapter we defined schemas and generated migrations. Before we can insert or update data, we need to validate it. Kura uses changesets — a data structure that tracks what fields changed, validates them, and accumulates errors. No exceptions, no side effects — just data in, data out.
The changeset concept
A changeset takes three inputs:
- Data — the existing record (or #{} for a new one)
- Params — the incoming data (typically from a request body)
- Allowed fields — which params are permitted (everything else is ignored)
It produces a #kura_changeset{} record with:
- changes — a map of field → new value
- errors — a list of {field, message} tuples
- valid — true or false
Adding changeset functions to schemas
Let's add a changeset/2 function to the post schema. Update src/schemas/post.erl:
-module(post).
-behaviour(kura_schema).

-include_lib("kura/include/kura.hrl").

-export([table/0, fields/0, primary_key/0, changeset/2]).

table() -> <<"posts">>.

primary_key() -> id.

fields() ->
    [
     #kura_field{name = id, type = id, primary_key = true, nullable = false},
     #kura_field{name = title, type = string, nullable = false},
     #kura_field{name = body, type = text},
     #kura_field{name = status, type = {enum, [draft, published, archived]}, default = <<"draft">>},
     #kura_field{name = user_id, type = integer},
     #kura_field{name = inserted_at, type = utc_datetime},
     #kura_field{name = updated_at, type = utc_datetime}
    ].

changeset(Data, Params) ->
    CS = kura_changeset:cast(post, Data, Params, [title, body, status, user_id]),
    CS1 = kura_changeset:validate_required(CS, [title, body]),
    CS2 = kura_changeset:validate_length(CS1, title, [{min, 3}, {max, 200}]),
    kura_changeset:validate_inclusion(CS2, status, [draft, published, archived]).
Here is what each step does:
- cast/4 — takes the schema module, existing data, incoming params, and a list of allowed fields. It converts param values to the correct Erlang types (binaries to atoms for enums, binaries to integers for IDs, etc.) and puts them in changes.
- validate_required/2 — ensures the listed fields are present and non-empty.
- validate_length/3 — checks string length constraints.
- validate_inclusion/3 — ensures the value is one of the allowed options.
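Tying the pipeline to the repo, here is a sketch of a create action that runs the changeset and branches on the result. The handler name is invented, the {ok, Record} | {error, Changeset} return shape is an assumption based on Kura's Ecto lineage, and the four-element json tuple is Nova's status-plus-headers response form:

```erlang
%% Requires -include_lib("kura/include/kura.hrl") for the record access below.
create(#{params := Params}) ->
    CS = post:changeset(#{}, Params),
    case blog_repo:insert(CS) of
        {ok, Post} ->
            %% Valid changeset: row inserted, echo it back
            {json, 201, #{}, Post};
        {error, CS1} ->
            %% Invalid changeset: surface the {Field, Message} errors
            {json, 400, #{}, #{errors => maps:from_list(CS1#kura_changeset.errors)}}
    end.
```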
User changeset with format and unique constraints
Update src/schemas/user.erl:
-module(user).
-behaviour(kura_schema).

-include_lib("kura/include/kura.hrl").

-export([table/0, fields/0, primary_key/0, changeset/2]).

table() -> <<"users">>.

primary_key() -> id.

fields() ->
    [
     #kura_field{name = id, type = id, primary_key = true, nullable = false},
     #kura_field{name = username, type = string, nullable = false},
     #kura_field{name = email, type = string, nullable = false},
     #kura_field{name = password_hash, type = string, nullable = false},
     #kura_field{name = inserted_at, type = utc_datetime},
     #kura_field{name = updated_at, type = utc_datetime}
    ].

changeset(Data, Params) ->
    CS = kura_changeset:cast(user, Data, Params, [username, email, password_hash]),
    CS1 = kura_changeset:validate_required(CS, [username, email, password_hash]),
    CS2 = kura_changeset:validate_format(CS1, email, "^[^@]+@[^@]+\\.[^@]+$"),
    CS3 = kura_changeset:validate_length(CS2, username, [{min, 2}, {max, 50}]),
    CS4 = kura_changeset:unique_constraint(CS3, email),
    kura_changeset:unique_constraint(CS4, username).
New validations:
- validate_format/3 — checks the value against a regex. The email regex ensures it has @ and a domain.
- unique_constraint/2 — declares that this field has a unique index in the database. If an insert/update violates the constraint, Kura maps the PostgreSQL error to a friendly changeset error instead of crashing.
unique_constraint does not check uniqueness in Erlang — it tells Kura how to handle the PostgreSQL unique violation error. You still need a unique index on the column, which you would add to a migration.
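For example, inserting the same user twice (assuming a unique index on users.email already exists; the exact error message is an assumption):

```erlang
Params = #{<<"username">> => <<"alice">>, <<"email">> => <<"alice@example.com">>,
           <<"password_hash">> => <<"...">>},
{ok, _Alice} = blog_repo:insert(user:changeset(#{}, Params)),
%% The duplicate trips the unique index; Kura turns the PostgreSQL error
%% into a changeset error (e.g. {email, <<"has already been taken">>}).
{error, CS} = blog_repo:insert(user:changeset(#{}, Params)),
false = CS#kura_changeset.valid.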
Changeset errors as structured data
Errors are a list of {Field, Message} tuples on the changeset:
1> CS = post:changeset(#{}, #{}).
#kura_changeset{valid = false, errors = [{title, <<"can't be blank">>},
{body, <<"can't be blank">>}], ...}
2> CS#kura_changeset.valid.
false
3> CS#kura_changeset.errors.
[{title, <<"can't be blank">>}, {body, <<"can't be blank">>}]
4> CS2 = post:changeset(#{}, #{<<"title">> => <<"Hi">>, <<"body">> => <<"Hello">>}).
#kura_changeset{valid = false, errors = [{title, <<"must be at least 3 characters">>}], ...}
Rendering errors in JSON responses
Convert changeset errors to a JSON-friendly map:
changeset_errors_to_json(#kura_changeset{errors = Errors}) ->
maps:from_list([{atom_to_binary(Field), Msg} || {Field, Msg} <- Errors]).
Use it in controllers:
create(#{params := Params}) ->
CS = post:changeset(#{}, Params),
case blog_repo:insert(CS) of
{ok, Post} ->
{json, 201, #{}, post_to_json(Post)};
{error, #kura_changeset{} = CS1} ->
{json, 422, #{}, #{errors => changeset_errors_to_json(CS1)}}
end.
The response looks like:
{
"errors": {
"title": "can't be blank",
"body": "can't be blank"
}
}
Available validation functions
| Function | Purpose |
|---|---|
| validate_required(CS, Fields) | Fields must be present and non-empty |
| validate_format(CS, Field, Regex) | Value must match the regex |
| validate_length(CS, Field, Opts) | String length: [{min,N}, {max,N}, {is,N}] |
| validate_number(CS, Field, Opts) | Number range: [{greater_than,N}, {less_than,N}] |
| validate_inclusion(CS, Field, List) | Value must be in the list |
| validate_change(CS, Field, Fun) | Custom validation: fun(Val) -> ok \| {error, Msg} |
| unique_constraint(CS, Field) | Map PG unique violation to a changeset error |
| foreign_key_constraint(CS, Field) | Map PG FK violation to a changeset error |
| check_constraint(CS, Name, Field, Opts) | Map PG check constraint to a changeset error |
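validate_change/3 is the escape hatch for rules the built-ins don't cover. A sketch — the rule itself is invented for illustration:

```erlang
%% Reject bodies that are too short to be worth publishing.
validate_body_length(CS) ->
    kura_changeset:validate_change(CS, body, fun(Body) ->
        case byte_size(Body) >= 10 of
            true -> ok;
            false -> {error, <<"is too short to publish">>}
        end
    end).
```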
Schemaless changesets
For validating data that does not map to a database table (like search filters or contact forms), pass a types map instead of a schema module:
Types = #{query => string, page => integer, per_page => integer},
CS = kura_changeset:cast(Types, #{}, Params, [query, page, per_page]),
CS1 = kura_changeset:validate_required(CS, [query]),
CS2 = kura_changeset:validate_number(CS1, per_page, [{greater_than, 0}, {less_than, 101}]).
Schemaless changesets cannot be persisted via the repo — they are for validation only.
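Once validated, read the cast values off the changes field yourself, for example in a search handler (run_search and the helper reuse are illustrative):

```erlang
case CS2#kura_changeset.valid of
    true ->
        %% changes holds the cast values, e.g. #{query => <<"nova">>, per_page => 20}
        run_search(CS2#kura_changeset.changes);
    false ->
        {json, 422, #{}, #{errors => changeset_errors_to_json(CS2)}}
end.
```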
Validations are declarative and composable. Errors are data, not exceptions. Now let's use changesets to perform CRUD operations with the repository.
CRUD with the Repository
We have schemas, migrations, and changesets. Now let's use the repository to create, read, update, and delete records — and wire it all up to a controller.
Insert
Create a record by building a changeset and passing it to blog_repo:insert/1:
Params = #{<<"title">> => <<"My First Post">>,
<<"body">> => <<"Hello from Nova!">>,
<<"status">> => <<"draft">>,
<<"user_id">> => 1},
CS = post:changeset(#{}, Params),
{ok, Post} = blog_repo:insert(CS).
If the changeset is invalid, insert returns {error, Changeset} with the errors:
CS = post:changeset(#{}, #{}),
{error, #kura_changeset{errors = [{title, <<"can't be blank">>}, ...]}} = blog_repo:insert(CS).
Query all
Use the query builder to fetch records:
Q = kura_query:from(post),
{ok, Posts} = blog_repo:all(Q).
Posts is a list of maps, each representing a row:
[#{id => 1, title => <<"My First Post">>, body => <<"Hello from Nova!">>,
status => draft, user_id => 1,
inserted_at => {{2026,2,23},{12,0,0}}, updated_at => {{2026,2,23},{12,0,0}}}]
Notice status is the atom draft, not a binary — Kura handles the conversion.
Get by ID
Fetch a single record by primary key:
{ok, Post} = blog_repo:get(post, 1).
{error, not_found} = blog_repo:get(post, 999).
Update
To update a record, build a changeset from the existing data and new params:
{ok, Post} = blog_repo:get(post, 1),
CS = post:changeset(Post, #{<<"title">> => <<"Updated Title">>}),
{ok, UpdatedPost} = blog_repo:update(CS).
Only the changed fields are included in the UPDATE statement.
Delete
Delete takes a changeset built from the existing record:
{ok, Post} = blog_repo:get(post, 1),
CS = kura_changeset:cast(post, Post, #{}, []),
{ok, _} = blog_repo:delete(CS).
Query builder
The query builder composes — chain functions to build up complex queries:
%% Filter by status
Q = kura_query:from(post),
Q1 = kura_query:where(Q, {status, <<"published">>}),
{ok, Published} = blog_repo:all(Q1).
%% Order by insertion date, newest first
Q2 = kura_query:order_by(Q1, [{inserted_at, desc}]),
%% Limit and offset for pagination
Q3 = kura_query:limit(Q2, 10),
Q4 = kura_query:offset(Q3, 20),
{ok, Page3} = blog_repo:all(Q4).
Where conditions
%% Equality
kura_query:where(Q, {title, <<"Hello">>})
%% Comparison operators
kura_query:where(Q, {user_id, '>', 5})
kura_query:where(Q, {inserted_at, '>=', {{2026,1,1},{0,0,0}}})
%% IN clause
kura_query:where(Q, {status, in, [<<"draft">>, <<"published">>]})
%% LIKE / ILIKE
kura_query:where(Q, {title, ilike, <<"%nova%">>})
%% NULL checks
kura_query:where(Q, {body, is_nil})
kura_query:where(Q, {body, is_not_nil})
%% OR conditions
kura_query:where(Q, {'or', [{status, <<"draft">>}, {status, <<"archived">>}]})
%% AND conditions (multiple where calls are AND by default)
Q1 = kura_query:where(Q, {status, <<"published">>}),
Q2 = kura_query:where(Q1, {user_id, 1}).
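These calls compose freely; a helper combining filtering, ordering, and pagination (the function name is our own):

```erlang
%% Page through a user's published posts, newest first.
published_by_user(UserId, Page, PerPage) ->
    Q0 = kura_query:from(post),
    Q1 = kura_query:where(Q0, {status, <<"published">>}),
    Q2 = kura_query:where(Q1, {user_id, UserId}),
    Q3 = kura_query:order_by(Q2, [{inserted_at, desc}]),
    Q4 = kura_query:limit(Q3, PerPage),
    Q5 = kura_query:offset(Q4, (Page - 1) * PerPage),
    blog_repo:all(Q5).
```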
Wiring up to a controller
Let's build a posts API controller that uses the repo. Create src/controllers/blog_posts_controller.erl:
-module(blog_posts_controller).
-include_lib("kura/include/kura.hrl").
-export([
index/1,
show/1,
create/1,
update/1,
delete/1
]).
index(_Req) ->
Q = kura_query:from(post),
Q1 = kura_query:order_by(Q, [{inserted_at, desc}]),
{ok, Posts} = blog_repo:all(Q1),
{json, #{posts => [post_to_json(P) || P <- Posts]}}.
show(#{bindings := #{<<"id">> := Id}}) ->
case blog_repo:get(post, binary_to_integer(Id)) of
{ok, Post} ->
{json, post_to_json(Post)};
{error, not_found} ->
{status, 404, #{}, #{error => <<"post not found">>}}
end.
create(#{params := Params}) ->
CS = post:changeset(#{}, Params),
case blog_repo:insert(CS) of
{ok, Post} ->
{json, 201, #{}, post_to_json(Post)};
{error, #kura_changeset{} = CS1} ->
{json, 422, #{}, #{errors => changeset_errors_to_json(CS1)}}
end;
create(_Req) ->
{status, 422, #{}, #{error => <<"request body required">>}}.
update(#{bindings := #{<<"id">> := Id}, params := Params}) ->
case blog_repo:get(post, binary_to_integer(Id)) of
{ok, Post} ->
CS = post:changeset(Post, Params),
case blog_repo:update(CS) of
{ok, Updated} ->
{json, post_to_json(Updated)};
{error, #kura_changeset{} = CS1} ->
{json, 422, #{}, #{errors => changeset_errors_to_json(CS1)}}
end;
{error, not_found} ->
{status, 404, #{}, #{error => <<"post not found">>}}
end.
delete(#{bindings := #{<<"id">> := Id}}) ->
case blog_repo:get(post, binary_to_integer(Id)) of
{ok, Post} ->
CS = kura_changeset:cast(post, Post, #{}, []),
{ok, _} = blog_repo:delete(CS),
{status, 204};
{error, not_found} ->
{status, 404, #{}, #{error => <<"post not found">>}}
end.
%% Helpers
post_to_json(#{id := Id, title := Title, body := Body, status := Status,
user_id := UserId, inserted_at := InsertedAt}) ->
#{id => Id, title => Title, body => Body,
status => atom_to_binary(Status), user_id => UserId,
inserted_at => format_datetime(InsertedAt)}.
changeset_errors_to_json(#kura_changeset{errors = Errors}) ->
maps:from_list([{atom_to_binary(Field), Msg} || {Field, Msg} <- Errors]).
format_datetime({{Y,Mo,D},{H,Mi,S}}) ->
list_to_binary(io_lib:format("~4..0B-~2..0B-~2..0BT~2..0B:~2..0B:~2..0B",
[Y, Mo, D, H, Mi, S]));
format_datetime(_) ->
null.
Adding the routes
#{prefix => "/api",
security => false,
routes => [
{"/posts", fun blog_posts_controller:index/1, #{methods => [get]}},
{"/posts/:id", fun blog_posts_controller:show/1, #{methods => [get]}},
{"/posts", fun blog_posts_controller:create/1, #{methods => [post]}},
{"/posts/:id", fun blog_posts_controller:update/1, #{methods => [put]}},
{"/posts/:id", fun blog_posts_controller:delete/1, #{methods => [delete]}}
]
}
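This map goes in your application's router module. Assuming the conventional Nova router (a blog_router module exporting a routes/1 callback), the full file looks roughly like:

```erlang
-module(blog_router).
-export([routes/1]).

%% Nova calls routes/1 with the runtime environment (e.g. dev or prod).
routes(_Environment) ->
    [#{prefix => "/api",
       security => false,
       routes => [
           {"/posts", fun blog_posts_controller:index/1, #{methods => [get]}},
           {"/posts/:id", fun blog_posts_controller:show/1, #{methods => [get]}},
           {"/posts", fun blog_posts_controller:create/1, #{methods => [post]}},
           {"/posts/:id", fun blog_posts_controller:update/1, #{methods => [put]}},
           {"/posts/:id", fun blog_posts_controller:delete/1, #{methods => [delete]}}
       ]}].
```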
Testing with curl
Start the node and test:
# Create a post
curl -s -X POST localhost:8080/api/posts \
-H "Content-Type: application/json" \
-d '{"title": "My First Post", "body": "Hello from Nova!", "status": "draft", "user_id": 1}' \
| python3 -m json.tool
# List all posts
curl -s localhost:8080/api/posts | python3 -m json.tool
# Get a single post
curl -s localhost:8080/api/posts/1 | python3 -m json.tool
# Update a post
curl -s -X PUT localhost:8080/api/posts/1 \
-H "Content-Type: application/json" \
-d '{"title": "Updated Title", "status": "published"}' \
| python3 -m json.tool
# Delete a post
curl -s -X DELETE localhost:8080/api/posts/1 -w "%{http_code}\n"
# Try creating with invalid data
curl -s -X POST localhost:8080/api/posts \
-H "Content-Type: application/json" \
-d '{"title": "Hi"}' \
| python3 -m json.tool
The last command returns a 422 with validation errors.
No SQL strings anywhere. The query builder composes, the repo executes.
This gives us a working API for a single resource. Next, let's use the code generators to scaffold resources faster and add JSON schemas for documentation.
JSON API with Generators
In the previous chapter we built a posts controller by hand. The rebar3_nova plugin includes generators that scaffold controllers, JSON schemas, and test suites so you can skip the boilerplate.
Generate a resource
The nova gen_resource command creates a controller, a JSON schema, and prints route definitions:
rebar3 nova gen_resource --name posts
===> Writing src/controllers/blog_posts_controller.erl
===> Writing priv/schemas/post.json
Add these routes to your router:
{<<"/posts">>, {blog_posts_controller, list}, #{methods => [get]}}
{<<"/posts/:id">>, {blog_posts_controller, show}, #{methods => [get]}}
{<<"/posts">>, {blog_posts_controller, create}, #{methods => [post]}}
{<<"/posts/:id">>, {blog_posts_controller, update}, #{methods => [put]}}
{<<"/posts/:id">>, {blog_posts_controller, delete}, #{methods => [delete]}}
The generated controller
-module(blog_posts_controller).
-export([
list/1,
show/1,
create/1,
update/1,
delete/1
]).
list(_Req) ->
{json, #{<<"message">> => <<"TODO">>}}.
show(_Req) ->
{json, #{<<"message">> => <<"TODO">>}}.
create(_Req) ->
{status, 201, #{}, #{<<"message">> => <<"TODO">>}}.
update(_Req) ->
{json, #{<<"message">> => <<"TODO">>}}.
delete(_Req) ->
{status, 204}.
Every action returns a valid Nova response tuple so you can compile and run immediately.
The generated JSON schema
priv/schemas/post.json:
{
"$schema": "http://json-schema.org/draft-07/schema#",
"type": "object",
"properties": {
"id": { "type": "integer" },
"name": { "type": "string" }
},
"required": ["id", "name"]
}
Edit this to match your actual data model:
{
"$schema": "http://json-schema.org/draft-07/schema#",
"type": "object",
"properties": {
"id": { "type": "integer", "description": "Unique identifier" },
"title": { "type": "string", "description": "Post title" },
"body": { "type": "string", "description": "Post body" },
"status": { "type": "string", "enum": ["draft", "published", "archived"] },
"user_id": { "type": "integer", "description": "Author ID" }
},
"required": ["title", "body"]
}
This schema is picked up by the OpenAPI generator to produce API documentation automatically.
Filling in Kura calls
Replace the TODO stubs with actual Kura repo calls. Since we already wrote a full posts controller in the CRUD chapter, here is the pattern — generate, then fill in:
-module(blog_posts_controller).
-include_lib("kura/include/kura.hrl").
-export([
index/1,
show/1,
create/1,
update/1,
delete/1
]).
index(_Req) ->
Q = kura_query:from(post),
Q1 = kura_query:order_by(Q, [{inserted_at, desc}]),
{ok, Posts} = blog_repo:all(Q1),
{json, #{posts => [post_to_json(P) || P <- Posts]}}.
show(#{bindings := #{<<"id">> := Id}}) ->
case blog_repo:get(post, binary_to_integer(Id)) of
{ok, Post} ->
{json, post_to_json(Post)};
{error, not_found} ->
{status, 404, #{}, #{error => <<"post not found">>}}
end.
create(#{params := Params}) ->
CS = post:changeset(#{}, Params),
case blog_repo:insert(CS) of
{ok, Post} ->
{json, 201, #{}, post_to_json(Post)};
{error, #kura_changeset{} = CS1} ->
{json, 422, #{}, #{errors => changeset_errors_to_json(CS1)}}
end;
create(_Req) ->
{status, 422, #{}, #{error => <<"request body required">>}}.
update(#{bindings := #{<<"id">> := Id}, params := Params}) ->
case blog_repo:get(post, binary_to_integer(Id)) of
{ok, Post} ->
CS = post:changeset(Post, Params),
case blog_repo:update(CS) of
{ok, Updated} ->
{json, post_to_json(Updated)};
{error, #kura_changeset{} = CS1} ->
{json, 422, #{}, #{errors => changeset_errors_to_json(CS1)}}
end;
{error, not_found} ->
{status, 404, #{}, #{error => <<"post not found">>}}
end.
delete(#{bindings := #{<<"id">> := Id}}) ->
case blog_repo:get(post, binary_to_integer(Id)) of
{ok, Post} ->
CS = kura_changeset:cast(post, Post, #{}, []),
{ok, _} = blog_repo:delete(CS),
{status, 204};
{error, not_found} ->
{status, 404, #{}, #{error => <<"post not found">>}}
end.
%% Helpers
post_to_json(#{id := Id, title := Title, body := Body, status := Status,
user_id := UserId}) ->
#{id => Id, title => Title, body => Body,
status => atom_to_binary(Status), user_id => UserId}.
changeset_errors_to_json(#kura_changeset{errors = Errors}) ->
maps:from_list([{atom_to_binary(Field), Msg} || {Field, Msg} <- Errors]).
Generate a test suite
The nova gen_test command scaffolds a Common Test suite:
rebar3 nova gen_test --name posts
===> Writing test/blog_posts_controller_SUITE.erl
The generated suite has test cases for each CRUD action that make HTTP requests against your running application:
-module(blog_posts_controller_SUITE).
-include_lib("common_test/include/ct.hrl").
-export([all/0, init_per_suite/1, end_per_suite/1]).
-export([test_list/1, test_show/1, test_create/1, test_update/1, test_delete/1]).
all() ->
[test_list, test_show, test_create, test_update, test_delete].
init_per_suite(Config) ->
application:ensure_all_started(blog),
Config.
end_per_suite(_Config) ->
ok.
test_list(_Config) ->
{ok, {{_, 200, _}, _, _Body}} =
httpc:request(get, {"http://localhost:8080/posts", []}, [], []).
test_show(_Config) ->
{ok, {{_, 200, _}, _, _Body}} =
httpc:request(get, {"http://localhost:8080/posts/1", []}, [], []).
test_create(_Config) ->
{ok, {{_, 201, _}, _, _Body}} =
httpc:request(post, {"http://localhost:8080/posts", [],
"application/json", "{}"}, [], []).
test_update(_Config) ->
{ok, {{_, 200, _}, _, _Body}} =
httpc:request(put, {"http://localhost:8080/posts/1", [],
"application/json", "{}"}, [], []).
test_delete(_Config) ->
{ok, {{_, 204, _}, _, _Body}} =
httpc:request(delete, {"http://localhost:8080/posts/1", []}, [], []).
Update the request bodies and assertions to match your actual API. We will cover testing in detail in the Testing chapter.
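For instance, test_create needs a valid body (and, with the router prefix from the CRUD chapter, the /api path) before it will pass; the field values here are illustrative:

```erlang
test_create(_Config) ->
    Body = "{\"title\": \"CT Post\", \"body\": \"From Common Test\", \"user_id\": 1}",
    {ok, {{_, 201, _}, _, _}} =
        httpc:request(post, {"http://localhost:8080/api/posts", [],
                             "application/json", Body}, [], []).
```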
Other generators
Generate a controller with specific actions:
rebar3 nova gen_controller --name comments --actions list,create
===> Writing src/controllers/blog_comments_controller.erl
Typical workflow
Adding a new resource to your API:
# 1. Define the Kura schema
vi src/schemas/comment.erl
# 2. Compile to generate the migration
rebar3 compile
# 3. Generate the resource (controller + schema + route hints)
rebar3 nova gen_resource --name comments
# 4. Copy the printed routes into your router
# 5. Fill in the Kura repo calls in the controller
# 6. Generate a test suite
rebar3 nova gen_test --name comments
# 7. Run the tests
rebar3 ct
Generate, fill in the Kura calls, test. Three steps to a working API.
Our posts API works with flat data. Next, let's add associations and preloading to connect posts to users and comments.
Associations and Preloading
So far our posts exist in isolation. In a real blog, posts belong to users and have comments. Kura supports belongs_to, has_many, has_one, and many_to_many associations with automatic preloading.
Adding associations to schemas
Post belongs to user
Update src/schemas/post.erl to add associations:
-module(post).
-behaviour(kura_schema).
-include_lib("kura/include/kura.hrl").
-export([table/0, fields/0, primary_key/0, associations/0, changeset/2]).
table() -> <<"posts">>.
primary_key() -> id.
fields() ->
[
#kura_field{name = id, type = id, primary_key = true, nullable = false},
#kura_field{name = title, type = string, nullable = false},
#kura_field{name = body, type = text},
#kura_field{name = status, type = {enum, [draft, published, archived]}, default = <<"draft">>},
#kura_field{name = user_id, type = integer},
#kura_field{name = inserted_at, type = utc_datetime},
#kura_field{name = updated_at, type = utc_datetime}
].
associations() ->
[
#kura_assoc{name = author, type = belongs_to, schema = user, foreign_key = user_id},
#kura_assoc{name = comments, type = has_many, schema = comment, foreign_key = post_id}
].
changeset(Data, Params) ->
CS = kura_changeset:cast(post, Data, Params, [title, body, status, user_id]),
CS1 = kura_changeset:validate_required(CS, [title, body]),
CS2 = kura_changeset:validate_length(CS1, title, [{min, 3}, {max, 200}]),
CS3 = kura_changeset:validate_inclusion(CS2, status, [draft, published, archived]),
kura_changeset:foreign_key_constraint(CS3, user_id).
The associations/0 callback returns a list of #kura_assoc{} records:
- belongs_to — the foreign key (user_id) is on this table. schema is the associated module, foreign_key is the column.
- has_many — the foreign key (post_id) is on the other table.
We also added foreign_key_constraint/2 to the changeset — if an insert fails because the user doesn't exist, Kura maps the PostgreSQL foreign key error to a friendly changeset error.
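With that in place, inserting a post for a nonexistent user returns an invalid changeset instead of crashing (the user ID and error message here are illustrative):

```erlang
Params = #{<<"title">> => <<"Orphan Post">>, <<"body">> => <<"No author">>,
           <<"user_id">> => 999999},
%% The FK violation surfaces as a changeset error such as
%% {user_id, <<"does not exist">>}.
{error, CS} = blog_repo:insert(post:changeset(#{}, Params)),
false = CS#kura_changeset.valid.
```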
Comment schema
Create src/schemas/comment.erl:
-module(comment).
-behaviour(kura_schema).
-include_lib("kura/include/kura.hrl").
-export([table/0, fields/0, primary_key/0, associations/0, changeset/2]).
table() -> <<"comments">>.
primary_key() -> id.
fields() ->
[
#kura_field{name = id, type = id, primary_key = true, nullable = false},
#kura_field{name = body, type = text, nullable = false},
#kura_field{name = post_id, type = integer, nullable = false},
#kura_field{name = user_id, type = integer, nullable = false},
#kura_field{name = inserted_at, type = utc_datetime},
#kura_field{name = updated_at, type = utc_datetime}
].
associations() ->
[
#kura_assoc{name = post, type = belongs_to, schema = post, foreign_key = post_id},
#kura_assoc{name = author, type = belongs_to, schema = user, foreign_key = user_id}
].
changeset(Data, Params) ->
CS = kura_changeset:cast(comment, Data, Params, [body, post_id, user_id]),
CS1 = kura_changeset:validate_required(CS, [body, post_id, user_id]),
CS2 = kura_changeset:foreign_key_constraint(CS1, post_id),
kura_changeset:foreign_key_constraint(CS2, user_id).
User has many posts
Update src/schemas/user.erl to add the has_many side:
-export([table/0, fields/0, primary_key/0, associations/0, changeset/2]).
%% ... fields() unchanged ...
associations() ->
[
#kura_assoc{name = posts, type = has_many, schema = post, foreign_key = user_id}
].
%% ... changeset/2 unchanged ...
Generate the migration
Compile to generate the comments table migration:
rebar3 compile
===> [kura] Schema diff detected changes
===> [kura] Generated src/migrations/m20260223130000_create_comments.erl
The migration creates the comments table with foreign keys to posts and users.
Preloading associations
By default, fetching a post returns only its own fields — associations are not loaded. Use kura_query:preload/2 to eagerly load them.
Preload via query
Q = kura_query:from(post),
Q1 = kura_query:preload(Q, [author, comments]),
{ok, Posts} = blog_repo:all(Q1).
Each post in Posts now has author and comments keys:
#{id => 1,
title => <<"My First Post">>,
author => #{id => 1, username => <<"alice">>, email => <<"alice@example.com">>, ...},
comments => [
#{id => 1, body => <<"Great post!">>, user_id => 2, ...},
#{id => 2, body => <<"Thanks!">>, user_id => 1, ...}
],
...}
Nested preloading
Load the author of each comment too:
Q = kura_query:from(post),
Q1 = kura_query:preload(Q, [author, {comments, [author]}]),
{ok, Posts} = blog_repo:all(Q1).
Now each comment also has its author loaded.
Standalone preload
If you already have records and want to preload associations after the fact:
{ok, Post} = blog_repo:get(post, 1),
Post1 = blog_repo:preload(post, Post, [author, comments]).
%% Works with lists too
{ok, Posts} = blog_repo:all(kura_query:from(post)),
Posts1 = blog_repo:preload(post, Posts, [author]).
Kura uses WHERE IN queries for preloading — not JOINs. This means one extra query per association, which keeps things predictable and avoids N+1 problems.
Creating with associations (cast_assoc)
You can create a post with comments in a single request using cast_assoc:
Params = #{<<"title">> => <<"New Post">>,
<<"body">> => <<"Content here">>,
<<"comments">> => [
#{<<"body">> => <<"First comment">>, <<"user_id">> => 2}
]},
CS = kura_changeset:cast(post, #{}, Params, [title, body, user_id]),
CS1 = kura_changeset:validate_required(CS, [title, body]),
CS2 = kura_changeset:cast_assoc(CS1, comments),
{ok, Post} = blog_repo:insert(CS2).
cast_assoc reads the comments key from the params, builds child changesets using comment:changeset/2, and wraps everything in a transaction. The parent is inserted first, then each child gets the parent's ID set as its foreign key.
Custom cast function
If you need different validation for nested creates:
CS2 = kura_changeset:cast_assoc(CS1, comments, #{
with => fun(Data, ChildParams) ->
comment:changeset(Data, ChildParams)
end
}).
API endpoint with preloading
Update the posts controller to return posts with their author and comments:
show(#{bindings := #{<<"id">> := Id}}) ->
case blog_repo:get(post, binary_to_integer(Id)) of
{ok, Post} ->
Post1 = blog_repo:preload(post, Post, [author, {comments, [author]}]),
{json, post_with_assocs_to_json(Post1)};
{error, not_found} ->
{status, 404, #{}, #{error => <<"post not found">>}}
end.
post_with_assocs_to_json(#{id := Id, title := Title, body := Body,
status := Status, author := Author,
comments := Comments}) ->
#{id => Id,
title => Title,
body => Body,
status => atom_to_binary(Status),
author => #{id => maps:get(id, Author),
username => maps:get(username, Author)},
comments => [#{id => maps:get(id, C),
body => maps:get(body, C),
author => #{id => maps:get(id, maps:get(author, C)),
username => maps:get(username, maps:get(author, C))}}
|| C <- Comments]}.
Test it:
curl -s localhost:8080/api/posts/1 | python3 -m json.tool
{
"id": 1,
"title": "My First Post",
"body": "Hello from Nova!",
"status": "draft",
"author": {
"id": 1,
"username": "alice"
},
"comments": [
{
"id": 1,
"body": "Great post!",
"author": {
"id": 2,
"username": "bob"
}
}
]
}
Next, let's add tags, many-to-many relationships, and embedded schemas for post metadata.
Tags, Many-to-Many & Embedded Schemas
Our blog has users, posts, and comments. Now let's add tags (many-to-many through a join table) and post metadata (embedded schema stored as JSONB).
Tag schema
Create src/schemas/tag.erl:
-module(tag).
-behaviour(kura_schema).
-include_lib("kura/include/kura.hrl").
-export([table/0, fields/0, primary_key/0, associations/0, changeset/2]).
table() -> <<"tags">>.
primary_key() -> id.
fields() ->
[
#kura_field{name = id, type = id, primary_key = true, nullable = false},
#kura_field{name = name, type = string, nullable = false},
#kura_field{name = inserted_at, type = utc_datetime}
].
associations() ->
[
#kura_assoc{name = posts, type = many_to_many, schema = post,
join_through = <<"posts_tags">>, join_keys = {tag_id, post_id}}
].
changeset(Data, Params) ->
CS = kura_changeset:cast(tag, Data, Params, [name]),
CS1 = kura_changeset:validate_required(CS, [name]),
kura_changeset:unique_constraint(CS1, name).
Join table schema
The many-to-many relationship needs a join table. Create src/schemas/posts_tags.erl:
-module(posts_tags).
-behaviour(kura_schema).
-include_lib("kura/include/kura.hrl").
-export([table/0, fields/0, primary_key/0]).
table() -> <<"posts_tags">>.
primary_key() -> id.
fields() ->
[
#kura_field{name = id, type = id, primary_key = true, nullable = false},
#kura_field{name = post_id, type = integer, nullable = false},
#kura_field{name = tag_id, type = integer, nullable = false}
].
Adding many-to-many to posts
Update the associations/0 in src/schemas/post.erl:
associations() ->
[
#kura_assoc{name = author, type = belongs_to, schema = user, foreign_key = user_id},
#kura_assoc{name = comments, type = has_many, schema = comment, foreign_key = post_id},
#kura_assoc{name = tags, type = many_to_many, schema = tag,
join_through = <<"posts_tags">>, join_keys = {post_id, tag_id}}
].
The many_to_many association specifies:
- join_through — the join table name
- join_keys — {this_side_fk, other_side_fk} on the join table
Generate the migrations
Compile to generate the new tables:
rebar3 compile
===> [kura] Schema diff detected changes
===> [kura] Generated src/migrations/m20260223140000_create_tags.erl
===> [kura] Generated src/migrations/m20260223140100_create_posts_tags.erl
Tagging posts with put_assoc
Use put_assoc to set tags on a post:
%% Get existing tags (or create new ones first)
{ok, Erlang} = blog_repo:get_by(tag, [{name, <<"erlang">>}]),
{ok, Nova} = blog_repo:get_by(tag, [{name, <<"nova">>}]),
%% Assign tags to a post
{ok, Post} = blog_repo:get(post, 1),
CS = kura_changeset:cast(post, Post, #{}, []),
CS1 = kura_changeset:put_assoc(CS, tags, [Erlang, Nova]),
{ok, _} = blog_repo:update(CS1).
put_assoc replaces the entire association — under the hood it deletes existing join table rows and inserts new ones, all in a transaction.
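Because put_assoc replaces the whole set, removing a single tag means writing back the remaining list:

```erlang
%% Drop the "nova" tag from post 1, keeping all others.
{ok, Post} = blog_repo:get(post, 1),
Post1 = blog_repo:preload(post, Post, [tags]),
Remaining = [T || T <- maps:get(tags, Post1), maps:get(name, T) =/= <<"nova">>],
CS = kura_changeset:cast(post, Post1, #{}, []),
CS1 = kura_changeset:put_assoc(CS, tags, Remaining),
{ok, _} = blog_repo:update(CS1).
```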
Preloading tags
Q = kura_query:from(post),
Q1 = kura_query:preload(Q, [author, tags]),
{ok, Posts} = blog_repo:all(Q1).
Each post now has a tags key with a list of tag maps:
#{id => 1, title => <<"My First Post">>,
tags => [#{id => 1, name => <<"erlang">>}, #{id => 2, name => <<"nova">>}],
...}
Embedded schemas
Sometimes you need structured data that doesn't deserve its own table. Kura's embedded schemas store nested structures as JSONB columns.
Post metadata
Create src/schemas/post_metadata.erl:
-module(post_metadata).
-behaviour(kura_schema).
-include_lib("kura/include/kura.hrl").
-export([table/0, fields/0, primary_key/0, changeset/2]).
table() -> <<"embedded">>.
primary_key() -> undefined.
fields() ->
[
#kura_field{name = meta_title, type = string},
#kura_field{name = meta_description, type = string},
#kura_field{name = og_image, type = string}
].
changeset(Data, Params) ->
CS = kura_changeset:cast(post_metadata, Data, Params,
[meta_title, meta_description, og_image]),
kura_changeset:validate_length(CS, meta_description, [{max, 160}]).
The embedded schema looks like a regular schema but with table() returning a placeholder (it's never queried directly) and primary_key() returning undefined.
Adding the embed to posts
Update src/schemas/post.erl to add an embeds/0 callback and a metadata JSONB field:
-export([table/0, fields/0, primary_key/0, associations/0, embeds/0, changeset/2]).
fields() ->
[
#kura_field{name = id, type = id, primary_key = true, nullable = false},
#kura_field{name = title, type = string, nullable = false},
#kura_field{name = body, type = text},
#kura_field{name = status, type = {enum, [draft, published, archived]}, default = <<"draft">>},
#kura_field{name = user_id, type = integer},
#kura_field{name = metadata, type = {embed, embeds_one, post_metadata}},
#kura_field{name = inserted_at, type = utc_datetime},
#kura_field{name = updated_at, type = utc_datetime}
].
embeds() ->
[
#kura_embed{name = metadata, type = embeds_one, schema = post_metadata}
].
Compile to generate a migration that adds the metadata JSONB column:
rebar3 compile
Using embedded schemas
Cast the embed in your changeset:
changeset(Data, Params) ->
CS = kura_changeset:cast(post, Data, Params, [title, body, status, user_id, metadata]),
CS1 = kura_changeset:validate_required(CS, [title, body]),
CS2 = kura_changeset:validate_length(CS1, title, [{min, 3}, {max, 200}]),
CS3 = kura_changeset:validate_inclusion(CS2, status, [draft, published, archived]),
CS4 = kura_changeset:foreign_key_constraint(CS3, user_id),
kura_changeset:cast_embed(CS4, metadata).
cast_embed reads the metadata key from params and builds a nested changeset using post_metadata:changeset/2. Create a post with metadata:
curl -s -X POST localhost:8080/api/posts \
-H "Content-Type: application/json" \
-d '{
"title": "SEO Optimized Post",
"body": "Great content here",
"user_id": 1,
"metadata": {
"meta_title": "Best Post Ever",
"meta_description": "A post about great things",
"og_image": "https://example.com/image.jpg"
}
}' | python3 -m json.tool
The metadata is stored as JSONB in PostgreSQL and loaded back as a nested map:
#{id => 5,
title => <<"SEO Optimized Post">>,
metadata => #{meta_title => <<"Best Post Ever">>,
meta_description => <<"A post about great things">>,
og_image => <<"https://example.com/image.jpg">>},
...}
Filtering by tag
To find posts with a specific tag, use a raw SQL fragment or build the query through the join table:
%% Find all post IDs for a given tag
find_posts_by_tag(TagName) ->
{ok, Tag} = blog_repo:get_by(tag, [{name, TagName}]),
TagId = maps:get(id, Tag),
Q = kura_query:from(posts_tags),
Q1 = kura_query:where(Q, {tag_id, TagId}),
{ok, JoinRows} = blog_repo:all(Q1),
PostIds = [maps:get(post_id, R) || R <- JoinRows],
Q2 = kura_query:from(post),
Q3 = kura_query:where(Q2, {id, in, PostIds}),
Q4 = kura_query:preload(Q3, [author, tags]),
blog_repo:all(Q4).
API endpoint for tags
Add a simple tags controller:
-module(blog_tags_controller).
-export([index/1, create/1]).
index(_Req) ->
Q = kura_query:from(tag),
Q1 = kura_query:order_by(Q, [{name, asc}]),
{ok, Tags} = blog_repo:all(Q1),
{json, #{tags => Tags}}.
create(#{params := Params}) ->
CS = tag:changeset(#{}, Params),
case blog_repo:insert(CS) of
{ok, Tag} ->
{json, 201, #{}, Tag};
{error, _CS} ->
{status, 422, #{}, #{error => <<"invalid tag">>}}
end.
We now have a rich data model with associations, many-to-many relationships, and embedded schemas. Next, let's write proper tests for our application.
Testing
Nova applications can be tested with Erlang's built-in frameworks: EUnit for unit tests and Common Test for integration tests. The nova_test library adds helpers: a request builder for unit testing controllers, an HTTP client for integration tests, and assertion macros.
Adding nova_test
Add nova_test as a test dependency in rebar.config:
{profiles, [
{test, [
{deps, [
{nova_test, "0.1.0"}
]}
]}
]}.
Database setup for tests
Tests need a running PostgreSQL. Use the same docker-compose.yml from the Database Setup chapter:
docker compose up -d
Your test configuration should point at the test database. You can use the same development database for simplicity, or create a separate one for isolation.
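One way to get isolation is a dedicated test config that points the repo at its own database. The fragment below is a sketch — the `blog_test` database name and credentials are assumptions, and how the file gets loaded depends on how your test profile is wired up; the `repo` config shape matches the one used elsewhere in this book:

```erlang
%% config/test_sys.config — hypothetical test configuration
[
 {blog, [
   {repo, #{
     hostname => "localhost",
     port => 5432,
     database => "blog_test",   %% separate database so tests can't clobber dev data
     username => "blog",
     password => "blog"
   }}
 ]}
].
```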
EUnit — Unit testing controllers
Nova controllers are regular Erlang functions that receive a request map and return a tuple. The nova_test_req module builds well-formed request maps so you don't have to construct them by hand.
Create test/blog_posts_controller_tests.erl:
-module(blog_posts_controller_tests).
-include_lib("nova_test/include/nova_test.hrl").
show_existing_post_test() ->
Req = nova_test_req:new(get, "/api/posts/1"),
Req1 = nova_test_req:with_bindings(#{<<"id">> => <<"1">>}, Req),
Result = blog_posts_controller:show(Req1),
?assertJsonResponse(#{id := 1, title := _}, Result).
show_missing_post_test() ->
Req = nova_test_req:new(get, "/api/posts/999999"),
Req1 = nova_test_req:with_bindings(#{<<"id">> => <<"999999">>}, Req),
Result = blog_posts_controller:show(Req1),
?assertStatusResponse(404, Result).
create_post_test() ->
Req = nova_test_req:new(post, "/api/posts"),
Req1 = nova_test_req:with_json(#{<<"title">> => <<"Test Post">>,
<<"body">> => <<"Test body">>,
<<"user_id">> => 1}, Req),
Result = blog_posts_controller:create(Req1),
?assertJsonResponse(201, #{id := _}, Result).
create_invalid_post_test() ->
Req = nova_test_req:new(post, "/api/posts"),
Req1 = nova_test_req:with_json(#{}, Req),
Result = blog_posts_controller:create(Req1),
?assertStatusResponse(422, Result).
Request builder functions
| Function | Purpose |
|---|---|
| `nova_test_req:new/2` | Create a request with method and path |
| `nova_test_req:with_bindings/2` | Set path bindings (e.g. `#{<<"id">> => <<"1">>}`) |
| `nova_test_req:with_json/2` | Set a JSON body (auto-encodes, sets content-type) |
| `nova_test_req:with_header/3` | Add a request header |
| `nova_test_req:with_query/2` | Set query string parameters |
| `nova_test_req:with_body/2` | Set a raw body |
| `nova_test_req:with_auth_data/2` | Set auth data (for testing authenticated controllers) |
| `nova_test_req:with_peer/2` | Set the client peer address |
Run EUnit tests:
rebar3 eunit
Testing changesets
Changesets are pure functions — no database needed. Test them directly:
-module(post_changeset_tests).
-include_lib("kura/include/kura.hrl").
-include_lib("eunit/include/eunit.hrl").
valid_changeset_test() ->
CS = post:changeset(#{}, #{<<"title">> => <<"Good Title">>,
<<"body">> => <<"Some content">>}),
?assert(CS#kura_changeset.valid).
missing_title_test() ->
CS = post:changeset(#{}, #{<<"body">> => <<"Some content">>}),
?assertNot(CS#kura_changeset.valid),
?assertMatch([{title, _} | _], CS#kura_changeset.errors).
title_too_short_test() ->
CS = post:changeset(#{}, #{<<"title">> => <<"Hi">>,
<<"body">> => <<"Content">>}),
?assertNot(CS#kura_changeset.valid),
?assertMatch([{title, _}], CS#kura_changeset.errors).
invalid_status_test() ->
CS = post:changeset(#{}, #{<<"title">> => <<"Good Title">>,
<<"body">> => <<"Content">>,
<<"status">> => <<"invalid">>}),
?assertNot(CS#kura_changeset.valid).
valid_email_format_test() ->
CS = user:changeset(#{}, #{<<"username">> => <<"alice">>,
<<"email">> => <<"alice@example.com">>,
<<"password_hash">> => <<"hashed">>}),
?assert(CS#kura_changeset.valid).
invalid_email_format_test() ->
CS = user:changeset(#{}, #{<<"username">> => <<"alice">>,
<<"email">> => <<"not-an-email">>,
<<"password_hash">> => <<"hashed">>}),
?assertNot(CS#kura_changeset.valid).
Testing security modules
Test your security functions directly:
-module(blog_auth_tests).
-include_lib("nova_test/include/nova_test.hrl").
valid_login_test() ->
Req = nova_test_req:new(post, "/login"),
Req1 = nova_test_req:with_json(#{<<"username">> => <<"admin">>,
<<"password">> => <<"password">>}, Req),
?assertMatch({true, #{authed := true, username := <<"admin">>}},
blog_auth:username_password(Req1)).
invalid_password_test() ->
Req = nova_test_req:new(post, "/login"),
Req1 = nova_test_req:with_json(#{<<"username">> => <<"admin">>,
<<"password">> => <<"wrong">>}, Req),
?assertEqual(false, blog_auth:username_password(Req1)).
missing_params_test() ->
Req = nova_test_req:new(post, "/login"),
?assertEqual(false, blog_auth:username_password(Req)).
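The `nova_test_req:with_auth_data/2` builder from the table above lets you unit-test controllers that normally sit behind a security module. A sketch — it assumes the controller reads the auth data that `blog_auth:username_password/1` would have attached, and reuses the `#{authed => ..., username => ...}` shape from the login test:

```erlang
%% Sketch: exercising an authenticated controller without running
%% the security module itself.
create_as_authenticated_user_test() ->
    Req = nova_test_req:new(post, "/api/posts"),
    %% Inject the auth data the security module would normally produce
    Req1 = nova_test_req:with_auth_data(#{authed => true,
                                          username => <<"admin">>}, Req),
    Req2 = nova_test_req:with_json(#{<<"title">> => <<"Authed Post">>,
                                     <<"body">> => <<"Body">>,
                                     <<"user_id">> => 1}, Req1),
    Result = blog_posts_controller:create(Req2),
    ?assertJsonResponse(201, #{id := _}, Result).
```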
Common Test — Integration testing
Common Test is better for full-stack tests where you need the application running. nova_test provides an HTTP client that handles startup and port discovery.
Create test/blog_api_SUITE.erl:
-module(blog_api_SUITE).
-include_lib("common_test/include/ct.hrl").
-include_lib("nova_test/include/nova_test.hrl").
-export([
all/0,
init_per_suite/1,
end_per_suite/1,
test_list_posts/1,
test_create_post/1,
test_create_invalid_post/1,
test_get_post/1,
test_update_post/1,
test_delete_post/1,
test_get_post_not_found/1
]).
all() ->
[test_list_posts,
test_create_post,
test_create_invalid_post,
test_get_post,
test_update_post,
test_delete_post,
test_get_post_not_found].
init_per_suite(Config) ->
nova_test:start(blog, Config).
end_per_suite(Config) ->
nova_test:stop(Config).
test_list_posts(Config) ->
{ok, Resp} = nova_test:get("/api/posts", Config),
?assertStatus(200, Resp),
?assertJson(#{<<"posts">> := _}, Resp).
test_create_post(Config) ->
{ok, Resp} = nova_test:post("/api/posts",
#{json => #{<<"title">> => <<"Test Post">>,
<<"body">> => <<"Test body">>,
<<"user_id">> => 1}},
Config),
?assertStatus(201, Resp),
?assertJson(#{<<"title">> := <<"Test Post">>}, Resp).
test_create_invalid_post(Config) ->
{ok, Resp} = nova_test:post("/api/posts",
#{json => #{<<"title">> => <<"Hi">>}},
Config),
?assertStatus(422, Resp),
?assertJson(#{<<"errors">> := _}, Resp).
test_get_post(Config) ->
%% Create a post first
{ok, CreateResp} = nova_test:post("/api/posts",
#{json => #{<<"title">> => <<"Get Test">>,
<<"body">> => <<"Body">>,
<<"user_id">> => 1}},
Config),
?assertStatus(201, CreateResp),
#{<<"id">> := Id} = nova_test:json(CreateResp),
%% Fetch it
{ok, Resp} = nova_test:get("/api/posts/" ++ integer_to_list(Id), Config),
?assertStatus(200, Resp),
?assertJson(#{<<"title">> := <<"Get Test">>}, Resp).
test_update_post(Config) ->
%% Create a post first
{ok, CreateResp} = nova_test:post("/api/posts",
#{json => #{<<"title">> => <<"Before Update">>,
<<"body">> => <<"Body">>,
<<"user_id">> => 1}},
Config),
#{<<"id">> := Id} = nova_test:json(CreateResp),
%% Update it
{ok, Resp} = nova_test:put("/api/posts/" ++ integer_to_list(Id),
#{json => #{<<"title">> => <<"After Update">>}},
Config),
?assertStatus(200, Resp),
?assertJson(#{<<"title">> := <<"After Update">>}, Resp).
test_delete_post(Config) ->
%% Create a post first
{ok, CreateResp} = nova_test:post("/api/posts",
#{json => #{<<"title">> => <<"To Delete">>,
<<"body">> => <<"Body">>,
<<"user_id">> => 1}},
Config),
#{<<"id">> := Id} = nova_test:json(CreateResp),
%% Delete it
{ok, Resp} = nova_test:delete("/api/posts/" ++ integer_to_list(Id), Config),
?assertStatus(204, Resp).
test_get_post_not_found(Config) ->
{ok, Resp} = nova_test:get("/api/posts/999999", Config),
?assertStatus(404, Resp).
Assertion macros
| Macro | Purpose |
|---|---|
| `?assertStatus(Code, Resp)` | Assert the HTTP status code |
| `?assertJson(Pattern, Resp)` | Pattern-match the decoded JSON body |
| `?assertBody(Expected, Resp)` | Assert the raw response body |
| `?assertHeader(Name, Expected, Resp)` | Assert a response header value |
Run Common Test suites:
rebar3 ct
Test structure
test/
├── blog_posts_controller_tests.erl %% EUnit — controller unit tests
├── post_changeset_tests.erl %% EUnit — changeset validation
├── blog_auth_tests.erl %% EUnit — security functions
└── blog_api_SUITE.erl %% Common Test — integration tests
- Use EUnit for fast unit tests of individual functions and changesets
- Use Common Test for integration tests that need the full application running
- Run both with `rebar3 do eunit, ct`
With testing in place, let's look at how to handle errors gracefully in Error Handling.
Error Handling
When something goes wrong, you want to show a useful error page instead of a cryptic response. Let's look at how Nova handles errors and how to create custom error pages.
Nova's default error handling
Nova comes with default handlers for 404 (not found) and 500 (server error) responses. In development mode, 500 errors show crash details. In production they return a bare status code.
Status code routes
Nova lets you register custom handlers for specific HTTP status codes directly in your router. Use a status code integer instead of a path:
routes(_Environment) ->
[
#{routes => [
{404, fun blog_error_controller:not_found/1, #{}},
{500, fun blog_error_controller:server_error/1, #{}}
]},
#{prefix => "",
security => false,
routes => [
{"/", fun blog_main_controller:index/1, #{methods => [get]}},
{"/heartbeat", fun(_) -> {status, 200} end, #{methods => [get]}}
]
}
].
Your status code handlers override Nova's defaults because your routes are compiled after Nova's built-in routes.
Creating an error controller
Create src/controllers/blog_error_controller.erl:
-module(blog_error_controller).
-export([
not_found/1,
server_error/1
]).
not_found(_Req) ->
{ok, [{title, <<"404 - Not Found">>},
{message, <<"The page you are looking for does not exist.">>}],
#{view => error_page, status_code => 404}}.
server_error(_Req) ->
{ok, [{title, <<"500 - Server Error">>},
{message, <<"Something went wrong. Please try again later.">>}],
#{view => error_page, status_code => 500}}.
The status_code option in the return map sets the HTTP status code on the response.
Error view template
Create src/views/error_page.dtl:
<html>
<head><title>{{ title }}</title></head>
<body>
<h1>{{ title }}</h1>
<p>{{ message }}</p>
<a href="/">Go back home</a>
</body>
</html>
JSON error responses
For APIs, return JSON instead of HTML. Check the Accept header to decide:
not_found(Req) ->
case cowboy_req:header(<<"accept">>, Req) of
<<"application/json">> ->
{json, 404, #{}, #{error => <<"not_found">>,
message => <<"Resource not found">>}};
_ ->
{ok, [{title, <<"404">>}, {message, <<"Page not found">>}],
#{view => error_page, status_code => 404}}
end.
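Exact equality on the Accept header is fragile — browsers typically send a composite value like `text/html,application/xhtml+xml,...`, and API clients may append parameters. A substring check is more forgiving; this sketch uses `cowboy_req:header/3`, which takes a default for when the header is absent:

```erlang
not_found(Req) ->
    %% Default to an empty binary so a missing Accept header falls through to HTML
    Accept = cowboy_req:header(<<"accept">>, Req, <<>>),
    case binary:match(Accept, <<"application/json">>) of
        nomatch ->
            {ok, [{title, <<"404">>}, {message, <<"Page not found">>}],
             #{view => error_page, status_code => 404}};
        _Found ->
            {json, 404, #{}, #{error => <<"not_found">>,
                               message => <<"Resource not found">>}}
    end.
```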
Rendering changeset errors as JSON
When using Kura, changeset validation errors are structured data. A helper function makes it easy to return them as JSON:
changeset_errors_to_json(#kura_changeset{errors = Errors}) ->
maps:from_list([{atom_to_binary(Field), Msg} || {Field, Msg} <- Errors]).
Use it in your controllers:
create(#{params := Params}) ->
CS = post:changeset(#{}, Params),
case blog_repo:insert(CS) of
{ok, Post} ->
{json, 201, #{}, post_to_json(Post)};
{error, #kura_changeset{} = CS1} ->
{json, 422, #{}, #{errors => changeset_errors_to_json(CS1)}}
end.
This returns errors like {"errors": {"title": "can't be blank", "email": "has already been taken"}}.
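Note that `maps:from_list/1` keeps only the last message when a field has several errors. If you want all of them, a grouping variant is straightforward (a sketch, using the same `#kura_changeset{}` record):

```erlang
%% Collect every message per field into a list:
%% #{<<"title">> => [<<"can't be blank">>, <<"is too short">>]}
changeset_errors_grouped(#kura_changeset{errors = Errors}) ->
    lists:foldl(fun({Field, Msg}, Acc) ->
                    Key = atom_to_binary(Field),
                    maps:update_with(Key, fun(Ms) -> Ms ++ [Msg] end, [Msg], Acc)
                end, #{}, Errors).
```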
Handling controller crashes
When a controller crashes, Nova catches the exception and triggers the 500 handler. The request map passed to your error controller will contain crash_info:
server_error(#{crash_info := CrashInfo} = _Req) ->
logger:error("Controller crash: ~p", [CrashInfo]),
{ok, [{title, <<"500">>},
{message, <<"Internal server error">>}],
#{view => error_page, status_code => 500}};
server_error(_Req) ->
{ok, [{title, <<"500">>},
{message, <<"Internal server error">>}],
#{view => error_page, status_code => 500}}.
More status codes
Register handlers for any HTTP status code:
#{routes => [
{400, fun blog_error_controller:bad_request/1, #{}},
{401, fun blog_error_controller:unauthorized/1, #{}},
{403, fun blog_error_controller:forbidden/1, #{}},
{404, fun blog_error_controller:not_found/1, #{}},
{500, fun blog_error_controller:server_error/1, #{}}
]}
bad_request(_Req) ->
{json, 400, #{}, #{error => <<"bad_request">>}}.
unauthorized(_Req) ->
{json, 401, #{}, #{error => <<"unauthorized">>}}.
forbidden(_Req) ->
{json, 403, #{}, #{error => <<"forbidden">>}}.
Error flow in the pipeline
Here is how errors flow through Nova:
- Route not found — triggers the 404 handler
- Security function returns false — triggers the 401 handler
- Controller crashes — Nova catches the exception, triggers the 500 handler
- Plugin returns `{error, Reason}` — triggers the 500 handler
- Controller returns `{status, Code}` — if a handler is registered for that code, it is used
For each case, Nova looks up your registered status code handler. If none is registered, it falls back to its own default.
Fallback controllers
If a controller returns an unrecognized value, Nova can delegate to a fallback controller:
-module(blog_posts_controller).
-fallback_controller(blog_error_controller).
index(_Req) ->
case do_something() of
{ok, Data} -> {json, Data};
unexpected_value -> unexpected_value %% Goes to fallback
end.
The fallback module needs resolve/2:
resolve(Req, InvalidReturn) ->
logger:warning("Unexpected controller return: ~p", [InvalidReturn]),
{status, 500, #{}, #{error => <<"internal server error">>}}.
Disabling error page rendering
To skip Nova's error page rendering entirely:
{nova, [
{render_error_pages, false}
]}
With error handling in place, our application is more robust. Next, let's add real-time features with WebSockets.
WebSockets
HTTP request-response works well for most operations, but sometimes you need real-time, bidirectional communication. Nova has built-in WebSocket support through the nova_websocket behaviour. We will use it to build a live comments handler for our blog.
Creating a WebSocket handler
A WebSocket handler implements three callbacks: init/1, websocket_handle/2, and websocket_info/2.
Create src/controllers/blog_ws_handler.erl:
-module(blog_ws_handler).
-behaviour(nova_websocket).
-export([
init/1,
websocket_handle/2,
websocket_info/2
]).
init(State) ->
{ok, State}.
websocket_handle({text, Msg}, State) ->
{reply, {text, <<"Echo: ", Msg/binary>>}, State};
websocket_handle(_Frame, State) ->
{ok, State}.
websocket_info(_Info, State) ->
{ok, State}.
The callbacks:
- `init/1` — called when the WebSocket connection is established. Return `{ok, State}` to accept.
- `websocket_handle/2` — called when a message arrives from the client. Return `{reply, Frame, State}` to send a response, `{ok, State}` to do nothing, or `{stop, State}` to close.
- `websocket_info/2` — called when the handler process receives an Erlang message (not a WebSocket frame). Useful for receiving pub/sub notifications from other processes.
Adding the route
WebSocket routes use the module name as an atom (not a fun reference) and set protocol => ws:
{"/ws", blog_ws_handler, #{protocol => ws}}
Add it to your public routes:
#{prefix => "",
security => false,
routes => [
{"/login", fun blog_main_controller:login/1, #{methods => [get]}},
{"/heartbeat", fun(_) -> {status, 200} end, #{methods => [get]}},
{"/ws", blog_ws_handler, #{protocol => ws}}
]
}
Testing the WebSocket
Start the node with rebar3 nova serve and test from a browser console:
let ws = new WebSocket("ws://localhost:8080/ws");
ws.onmessage = (e) => console.log(e.data);
ws.onopen = () => ws.send("Hello Nova!");
// Should log: "Echo: Hello Nova!"
A live comments handler
Let's build something more practical — a handler that broadcasts new comments to all connected clients using nova_pubsub.
Create src/controllers/blog_comments_ws_handler.erl:
-module(blog_comments_ws_handler).
-behaviour(nova_websocket).
-export([
init/1,
websocket_handle/2,
websocket_info/2
]).
init(State) ->
nova_pubsub:join(comments),
{ok, State}.
websocket_handle({text, Msg}, State) ->
nova_pubsub:broadcast(comments, "new_comment", Msg),
{ok, State};
websocket_handle(_Frame, State) ->
{ok, State}.
websocket_info({nova_pubsub, comments, _Sender, "new_comment", Msg}, State) ->
{reply, {text, Msg}, State};
websocket_info(_Info, State) ->
{ok, State}.
In init/1 we join the comments channel. When a client sends a message, we broadcast it to all channel members. When a pub/sub message arrives via websocket_info/2, we forward it to the connected client. We will explore pub/sub in depth in the Pub/Sub chapter.
Custom handlers
Nova uses a handler registry that maps return tuple atoms to handler functions. The built-in handlers:
| Return atom | What it does |
|---|---|
| `json` | Encodes data as JSON |
| `ok` | Renders an ErlyDTL template |
| `status` | Returns a status code |
| `redirect` | Redirects to another URL |
| `sendfile` | Sends a file |
| `view` | Renders a specific view template |
You can register custom handlers:
nova_handlers:register_handler(xml, fun my_xml_handler:handle/3).
Then return from controllers:
my_action(_Req) ->
{xml, <<"<user><name>Alice</name></user>">>}.
The handler function receives (StatusCode, ExtraHeaders, ControllerPayload) and must return a Cowboy request.
With WebSockets in place, let's build a real-time comment feed using Pub/Sub.
Pub/Sub and Real-Time Feed
In the WebSockets chapter we used nova_pubsub to broadcast comments. Now let's dive deeper into Nova's pub/sub system and build a real-time feed for our blog — live notifications when posts are published and comments are added.
How nova_pubsub works
Nova's pub/sub is built on OTP's pg module (process groups). It starts automatically with Nova — no configuration needed. Any Erlang process can join channels, and messages are delivered to all members.
%% Join a channel
nova_pubsub:join(channel_name).
%% Leave a channel
nova_pubsub:leave(channel_name).
%% Broadcast to all members on all nodes
nova_pubsub:broadcast(channel_name, Topic, Payload).
%% Broadcast to members on the local node only
nova_pubsub:local_broadcast(channel_name, Topic, Payload).
%% Get all members of a channel
nova_pubsub:get_members(channel_name).
%% Get members on the local node
nova_pubsub:get_local_members(channel_name).
Channels are atoms. Topics can be lists or binaries. Payloads can be anything.
Message format
When a process receives a pub/sub message, it arrives as:
{nova_pubsub, Channel, SenderPid, Topic, Payload}
In a gen_server, handle this in handle_info/2. In a WebSocket handler, use websocket_info/2.
Building the real-time feed
Notification WebSocket handler
Create src/controllers/blog_feed_handler.erl:
-module(blog_feed_handler).
-behaviour(nova_websocket).
-export([
init/1,
websocket_handle/2,
websocket_info/2
]).
init(State) ->
nova_pubsub:join(posts),
nova_pubsub:join(comments),
{ok, State}.
websocket_handle({text, <<"ping">>}, State) ->
{reply, {text, <<"pong">>}, State};
websocket_handle(_Frame, State) ->
{ok, State}.
websocket_info({nova_pubsub, Channel, _Sender, Topic, Payload}, State) ->
Msg = thoas:encode(#{
channel => Channel,
event => iolist_to_binary(Topic),  %% topics may be strings or binaries
data => Payload
}),
{reply, {text, Msg}, State};
websocket_info(_Info, State) ->
{ok, State}.
On connect, the handler joins both the posts and comments channels. Any pub/sub message is encoded as JSON and forwarded to the client.
Broadcasting from controllers
Update the posts controller to broadcast on changes:
create(#{params := Params}) ->
CS = post:changeset(#{}, Params),
case blog_repo:insert(CS) of
{ok, Post} ->
nova_pubsub:broadcast(posts, "post_created", post_to_json(Post)),
{json, 201, #{}, post_to_json(Post)};
{error, #kura_changeset{} = CS1} ->
{json, 422, #{}, #{errors => changeset_errors_to_json(CS1)}}
end.
Do the same for updates and deletes:
%% After a successful update:
nova_pubsub:broadcast(posts, "post_updated", post_to_json(Updated)),
%% After a successful delete:
nova_pubsub:broadcast(posts, "post_deleted", #{id => binary_to_integer(Id)}),
And for comments:
%% After creating a comment:
nova_pubsub:broadcast(comments, "comment_created", comment_to_json(Comment)),
Adding the route
{"/feed", blog_feed_handler, #{protocol => ws}}
Client-side JavaScript
const ws = new WebSocket("ws://localhost:8080/feed");
ws.onmessage = (event) => {
const msg = JSON.parse(event.data);
console.log(`[${msg.channel}] ${msg.event}:`, msg.data);
switch (msg.event) {
case "post_created":
// Add the new post to the feed
break;
case "post_updated":
// Update the post in the feed
break;
case "post_deleted":
// Remove the post from the feed
break;
case "comment_created":
// Append the new comment
break;
}
};
// Keep-alive
setInterval(() => ws.send("ping"), 30000);
Per-post comment feeds
For a live comment section on a specific post, use dynamic channel names:
-module(blog_post_comments_handler).
-behaviour(nova_websocket).
-export([init/1, websocket_handle/2, websocket_info/2]).
init(#{bindings := #{<<"post_id">> := PostId}} = State) ->
Channel = list_to_atom("post_comments_" ++ binary_to_list(PostId)),
nova_pubsub:join(Channel),
{ok, State#{channel => Channel}};
init(State) ->
{ok, State}.
websocket_handle(_Frame, State) ->
{ok, State}.
websocket_info({nova_pubsub, _Channel, _Sender, _Topic, Payload}, State) ->
{reply, {text, thoas:encode(Payload)}, State};
websocket_info(_Info, State) ->
{ok, State}.
Route:
{"/posts/:post_id/comments/ws", blog_post_comments_handler, #{protocol => ws}}
When creating a comment, broadcast to the post-specific channel:
Channel = list_to_atom("post_comments_" ++ integer_to_list(PostId)),
nova_pubsub:broadcast(Channel, "new_comment", comment_to_json(Comment)).
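One caveat: `list_to_atom/1` creates a new atom per post, and atoms are never garbage collected, so with unbounded post IDs this can exhaust the atom table. An alternative that avoids dynamic atoms is to keep a single `comments` channel and filter by a per-post topic — topics can be binaries, as noted above. A sketch under that assumption:

```erlang
%% Handler side: join the shared channel and forward only matching topics.
init(#{bindings := #{<<"post_id">> := PostId}} = State) ->
    nova_pubsub:join(comments),
    {ok, State#{post_id => PostId}}.

websocket_info({nova_pubsub, comments, _Sender, Topic, Payload},
               #{post_id := PostId} = State) when Topic =:= PostId ->
    {reply, {text, thoas:encode(Payload)}, State};
websocket_info(_Info, State) ->
    {ok, State}.
```

On the producer side, broadcast with the post ID as the topic: `nova_pubsub:broadcast(comments, integer_to_binary(PostId), comment_to_json(Comment))`.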
Using pub/sub in gen_servers
Any Erlang process can join a channel. This is useful for background workers like search indexing:
-module(blog_search_indexer).
-behaviour(gen_server).
-export([start_link/0]).
-export([init/1, handle_info/2, handle_cast/2, handle_call/3]).
start_link() ->
gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).
init([]) ->
nova_pubsub:join(posts),
{ok, #{}}.
handle_info({nova_pubsub, posts, _Sender, "post_created", Post}, State) ->
logger:info("Indexing new post: ~p", [maps:get(title, Post)]),
%% Add to search index
{noreply, State};
handle_info({nova_pubsub, posts, _Sender, "post_deleted", #{id := Id}}, State) ->
logger:info("Removing post ~p from index", [Id]),
%% Remove from search index
{noreply, State};
handle_info(_Info, State) ->
{noreply, State}.
handle_cast(_Msg, State) ->
{noreply, State}.
handle_call(_Req, _From, State) ->
{reply, ok, State}.
Add it to your supervisor to start automatically.
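The child spec is ordinary OTP; in the generated top-level supervisor (assumed here to be `blog_sup`) it might look like this:

```erlang
%% In blog_sup:init/1, add the indexer to the children list
init([]) ->
    Children = [
        #{id => blog_search_indexer,
          start => {blog_search_indexer, start_link, []},
          restart => permanent,
          shutdown => 5000,
          type => worker}
    ],
    {ok, {#{strategy => one_for_one, intensity => 5, period => 10}, Children}}.
```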
Distributed pub/sub
nova_pubsub works across Erlang nodes. If you have multiple instances connected in a cluster, broadcast/3 delivers to all members on all nodes.
For local-only messaging (e.g., clearing a local cache):
nova_pubsub:local_broadcast(posts, "cache_invalidated", #{id => PostId}).
Organizing channels and topics
%% Different channels for different domains
nova_pubsub:join(posts).
nova_pubsub:join(comments).
nova_pubsub:join(users).
nova_pubsub:join(system).
%% Topics within channels for filtering
nova_pubsub:broadcast(posts, "created", Post).
nova_pubsub:broadcast(posts, "published", Post).
nova_pubsub:broadcast(comments, "created", Comment).
nova_pubsub:broadcast(users, "logged_in", #{username => User}).
nova_pubsub:broadcast(system, "deploy", #{version => <<"1.2.0">>}).
Processes can join multiple channels and pattern match on channel and topic in their handlers.
Next, let's look at transactions, multi, and bulk operations for atomic and efficient data operations.
Transactions, Multi & Bulk Operations
For simple CRUD, the repo functions are enough. But some operations need atomicity (all-or-nothing), multi-step pipelines, or bulk efficiency. Kura provides transactions, multi, and bulk operations for these cases.
Transactions
Wrap multiple operations in a transaction — if any step fails, everything rolls back:
blog_repo:transaction(fun() ->
CS1 = user:changeset(#{}, #{<<"username">> => <<"alice">>,
<<"email">> => <<"alice@example.com">>,
<<"password_hash">> => <<"hashed">>}),
{ok, User} = blog_repo:insert(CS1),
CS2 = post:changeset(#{}, #{<<"title">> => <<"Welcome">>,
<<"body">> => <<"Hello world">>,
<<"user_id">> => maps:get(id, User)}),
{ok, _Post} = blog_repo:insert(CS2),
ok
end).
If the second insert fails, the user creation is rolled back too. The transaction function returns {ok, ReturnValue} on success or {error, Reason} on failure.
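In practice you match on both outcomes. A sketch, reusing the tag changeset from earlier chapters:

```erlang
%% The fun's return value becomes Value in {ok, Value}
case blog_repo:transaction(fun() ->
         CS = tag:changeset(#{}, #{<<"name">> => <<"erlang">>}),
         {ok, Tag} = blog_repo:insert(CS),
         Tag
     end) of
    {ok, Tag} ->
        logger:info("Created tag ~p", [maps:get(id, Tag)]);
    {error, Reason} ->
        logger:error("Transaction rolled back: ~p", [Reason])
end.
```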
Multi: named transaction pipelines
For complex multi-step operations, kura_multi provides a pipeline where each step has a name and can reference results from previous steps:
M = kura_multi:new(),
%% Step 1: Create a user
M1 = kura_multi:insert(M, create_user,
user:changeset(#{}, #{<<"username">> => <<"alice">>,
<<"email">> => <<"alice@example.com">>,
<<"password_hash">> => <<"hashed">>})),
%% Step 2: Create a first draft, using the user ID from step 1
M2 = kura_multi:insert(M1, create_draft,
fun(#{create_user := User}) ->
post:changeset(#{}, #{<<"title">> => <<"My First Draft">>,
<<"body">> => <<"Coming soon...">>,
<<"user_id">> => maps:get(id, User)})
end),
%% Step 3: Run a custom function
M3 = kura_multi:run(M2, send_welcome,
fun(#{create_user := User}) ->
logger:info("Welcome ~s!", [maps:get(username, User)]),
{ok, sent}
end),
%% Execute everything atomically
case blog_repo:multi(M3) of
{ok, #{create_user := User, create_draft := Post, send_welcome := sent}} ->
logger:info("User ~p created with draft post ~p",
[maps:get(id, User), maps:get(id, Post)]);
{error, FailedStep, FailedValue, _Completed} ->
logger:error("Multi failed at step ~p: ~p", [FailedStep, FailedValue])
end.
Multi API
| Function | Purpose |
|---|---|
| `kura_multi:new()` | Create a new multi |
| `kura_multi:insert(M, Name, CS)` | Insert a record (changeset or fun returning changeset) |
| `kura_multi:update(M, Name, CS)` | Update a record |
| `kura_multi:delete(M, Name, CS)` | Delete a record |
| `kura_multi:run(M, Name, Fun)` | Run a custom function |
Steps that take a fun receive a map of all completed steps so far:
fun(#{step1 := Result1, step2 := Result2}) -> ...
Error handling
When a multi fails, you get the name of the failed step, the error value, and a map of steps that completed before the failure:
case blog_repo:multi(M) of
{ok, Results} ->
%% All steps succeeded, Results is a map of step_name => result
ok;
{error, FailedStep, FailedValue, CompletedSteps} ->
%% FailedStep: atom name of the step that failed
%% FailedValue: the error (e.g., a changeset with errors)
%% CompletedSteps: map of steps that succeeded (then rolled back)
ok
end.
Bulk operations
insert_all — batch inserts
Insert many records at once:
Posts = [
#{title => <<"Post 1">>, body => <<"Body 1">>, status => <<"draft">>, user_id => 1},
#{title => <<"Post 2">>, body => <<"Body 2">>, status => <<"draft">>, user_id => 1},
#{title => <<"Post 3">>, body => <<"Body 3">>, status => <<"published">>, user_id => 2}
],
{ok, 3} = blog_repo:insert_all(post, Posts).
insert_all bypasses changesets — it inserts raw maps directly. Use it for imports and seeding where you trust the data. The return value is the number of rows inserted.
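For very large imports you may want to cap the size of each statement. A chunking wrapper around `insert_all` is a few lines of plain Erlang (a sketch — the batch size of 500 is an arbitrary assumption):

```erlang
%% Insert rows in batches of 500 to keep each SQL statement bounded.
insert_in_batches(Schema, Rows) ->
    insert_in_batches(Schema, Rows, 500).

insert_in_batches(_Schema, [], _Size) ->
    ok;
insert_in_batches(Schema, Rows, Size) when length(Rows) =< Size ->
    {ok, _} = blog_repo:insert_all(Schema, Rows),
    ok;
insert_in_batches(Schema, Rows, Size) ->
    {Batch, Rest} = lists:split(Size, Rows),
    {ok, _} = blog_repo:insert_all(Schema, Batch),
    insert_in_batches(Schema, Rest, Size).
```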
update_all — batch updates
Update many records matching a query:
%% Publish all drafts
Q = kura_query:from(post),
Q1 = kura_query:where(Q, {status, <<"draft">>}),
{ok, Count} = blog_repo:update_all(Q1, #{status => <<"published">>}).
update_all returns the count of rows affected. It applies the updates in a single SQL statement.
delete_all — batch deletes
Delete all records matching a query:
%% Delete all archived posts
Q = kura_query:from(post),
Q1 = kura_query:where(Q, {status, <<"archived">>}),
{ok, Count} = blog_repo:delete_all(Q1).
Upserts with on_conflict
Import data without failing on duplicates:
%% Insert a tag, do nothing if it already exists
CS = tag:changeset(#{}, #{<<"name">> => <<"erlang">>}),
{ok, Tag} = blog_repo:insert(CS, #{on_conflict => {name, nothing}}).
The on_conflict option controls what happens when a unique constraint is violated:
%% Do nothing on conflict (skip the row)
#{on_conflict => {name, nothing}}
%% Replace all fields on conflict
#{on_conflict => {name, replace_all}}
%% Replace specific fields on conflict
#{on_conflict => {name, {replace, [updated_at]}}}
%% Use a named constraint instead of a field
#{on_conflict => {{constraint, <<"tags_name_key">>}, nothing}}
Practical example: importing posts
import_posts(Posts) ->
lists:foreach(fun(PostData) ->
CS = post:changeset(#{}, PostData),
blog_repo:insert(CS, #{on_conflict => {title, nothing}})
end, Posts).
Putting it all together
A controller action that publishes a post and notifies subscribers atomically:
publish(#{bindings := #{<<"id">> := Id}}) ->
case blog_repo:get(post, binary_to_integer(Id)) of
{ok, #{status := draft} = Post} ->
M = kura_multi:new(),
M1 = kura_multi:update(M, publish_post,
post:changeset(Post, #{<<"status">> => <<"published">>})),
M2 = kura_multi:run(M1, notify,
fun(#{publish_post := Published}) ->
nova_pubsub:broadcast(posts, "post_published", Published),
{ok, notified}
end),
case blog_repo:multi(M2) of
{ok, #{publish_post := Published}} ->
{json, post_to_json(Published)};
{error, _Step, _Value, _} ->
{status, 422, #{}, #{error => <<"failed to publish">>}}
end;
{ok, _} ->
{status, 422, #{}, #{error => <<"only drafts can be published">>}};
{error, not_found} ->
{status, 404, #{}, #{error => <<"post not found">>}}
end.
With transactions and bulk operations covered, let's prepare the application for deployment.
Deployment
In development we use rebar3 nova serve with hot-reloading and debug logging. For production we need a proper OTP release — a self-contained package with your application, all dependencies, and optionally the Erlang runtime.
Release basics
Rebar3 uses relx to build releases. The generated rebar.config includes a release configuration:
{relx, [{release, {blog, "0.1.0"},
[blog,
sasl]},
{dev_mode, true},
{include_erts, false},
{extended_start_script, true},
{sys_config_src, "config/dev_sys.config.src"},
{vm_args_src, "config/vm.args.src"}
]}.
This is the development release config — dev_mode symlinks to source, and ERTS is not included.
Production profile
Override settings for production using a rebar3 profile:
{profiles, [
{prod, [
{relx, [
{dev_mode, false},
{include_erts, true},
{sys_config_src, "config/prod_sys.config.src"}
]}
]}
]}.
Key differences:
- `dev_mode` is `false` — files are copied into the release
- `include_erts` is `true` — the Erlang runtime is bundled
- Uses `prod_sys.config.src` with production settings
Production configuration
config/prod_sys.config.src:
[
{kernel, [
{logger_level, info},
{logger, [
{handler, default, logger_std_h,
#{config => #{file => "log/erlang.log"},
formatter => {flatlog, #{
map_depth => 3,
term_depth => 50,
colored => false,
template => ["[", level, "] ", msg, "\n"]
}}}}
]}
]},
{nova, [
{use_stacktrace, false},
{environment, prod},
{cowboy_configuration, #{port => 8080}},
{dev_mode, false},
{bootstrap_application, blog},
{plugins, [
{pre_request, nova_request_plugin, #{
decode_json_body => true,
read_urlencoded_body => true
}}
]}
]},
{blog, [
{repo, #{
hostname => "${DB_HOST}",
port => 5432,
database => "${DB_NAME}",
username => "${DB_USER}",
password => "${DB_PASSWORD}"
}}
]}
].
- Logger level is `info` instead of `debug`
- `use_stacktrace` is `false` — don't leak stack traces to users
- Environment variables use `${VAR}` syntax — rebar3 substitutes these at release build time
VM arguments
config/vm.args.src controls Erlang VM settings. For production:
-name blog@${HOSTNAME}
-setcookie ${RELEASE_COOKIE}
+A 30
+sbwt very_long
+swt very_low
- `-name` instead of `-sname` for full node names (needed for clustering)
- `+sbwt` and `+swt` tune scheduler busy-wait for lower latency
Building and running
Build a production release:
rebar3 as prod release
If you have JSON schemas in priv/schemas/, you can use nova release instead. It automatically regenerates the OpenAPI spec before building:
rebar3 nova release
===> Generated priv/assets/openapi.json
===> Generated priv/assets/swagger.html
===> Release successfully assembled: _build/prod/rel/blog
This ensures your deployed application always ships with up-to-date API documentation. See OpenAPI, Inspection & Audit for details.
Start it:
_build/prod/rel/blog/bin/blog foreground
Or as a daemon:
_build/prod/rel/blog/bin/blog daemon
Other commands:
# Check if the node is running
_build/prod/rel/blog/bin/blog ping
# Attach a remote shell
_build/prod/rel/blog/bin/blog remote_console
# Stop the node
_build/prod/rel/blog/bin/blog stop
Building a tarball
For deployment to another machine:
rebar3 as prod tar
This creates _build/prod/rel/blog/blog-0.1.0.tar.gz. Since ERTS is included, the target server does not need Erlang installed:
# On the server
mkdir -p /opt/blog
tar -xzf blog-0.1.0.tar.gz -C /opt/blog
/opt/blog/bin/blog daemon
SSL/TLS
Configure HTTPS in Nova:
{nova, [
{cowboy_configuration, #{
use_ssl => true,
ssl_port => 8443,
ssl_options => #{
certfile => "/etc/letsencrypt/live/myblog.com/fullchain.pem",
keyfile => "/etc/letsencrypt/live/myblog.com/privkey.pem"
}
}}
]}
Alternatively, put a reverse proxy (Nginx, Caddy) in front and let it handle SSL termination. This is the more common approach.
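A minimal Nginx sketch for that setup (the domain, certificate paths, and upstream port are illustrative — adjust them to your deployment):

```nginx
server {
    listen 443 ssl;
    server_name myblog.com;

    ssl_certificate     /etc/letsencrypt/live/myblog.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myblog.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;

        # Required if you expose Nova WebSocket routes through the proxy
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

With this in place, Nova can listen on plain HTTP port 8080 and never touch the certificates.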
Systemd service
Run as a system service:
[Unit]
Description=Blog Application
After=network.target postgresql.service
[Service]
Type=forking
User=blog
Group=blog
WorkingDirectory=/opt/blog
ExecStart=/opt/blog/bin/blog daemon
ExecStop=/opt/blog/bin/blog stop
Restart=on-failure
RestartSec=5
Environment=DB_HOST=localhost
Environment=DB_NAME=blog_prod
Environment=DB_USER=blog
Environment=DB_PASSWORD=secret
Environment=RELEASE_COOKIE=my_secret_cookie
[Install]
WantedBy=multi-user.target
Save the unit file as /etc/systemd/system/blog.service, then enable and start it:
sudo systemctl daemon-reload
sudo systemctl enable blog
sudo systemctl start blog
Docker
A multi-stage Dockerfile:
FROM erlang:27 AS builder
WORKDIR /app
COPY . .
RUN rebar3 as prod tar
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y libssl3 libncurses6 && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY --from=builder /app/_build/prod/rel/blog/*.tar.gz .
RUN tar -xzf *.tar.gz && rm *.tar.gz
EXPOSE 8080
CMD ["/app/bin/blog", "foreground"]
Build and run:
docker build -t blog .
docker run -p 8080:8080 \
-e DB_HOST=host.docker.internal \
-e DB_NAME=blog_prod \
-e DB_USER=blog \
-e DB_PASSWORD=secret \
-e RELEASE_COOKIE=my_secret_cookie \
blog
For sub-applications like Nova Admin, add them to your release deps and nova_apps config. They are bundled automatically in the release. See Custom Plugins and CORS for plugin configuration that carries over to production.
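As a sketch, the two pieces for bundling a sub-application look like this (the nova_admin package name is assumed here — check the sub-app's own docs):

```erlang
%% rebar.config — add the sub-app to deps so the release bundles it
{deps, [nova, nova_admin]}.

%% sys.config — mount it under a prefix via nova_apps
{blog, [
    {nova_apps, [
        {nova_admin, #{prefix => "/admin"}}
    ]}
]}
```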
Summary
Deploying a Nova application follows standard OTP release practices:
- Configure a production profile in rebar.config
- Set up production config with proper logging and secrets
- Build with rebar3 as prod release or rebar3 as prod tar
- Deploy using systemd, Docker, or any process manager
OTP releases are self-contained — once built, everything you need is in a single directory or archive.
Now let's explore more advanced features, starting with OpenAPI, Inspection & Audit.
OpenAPI, Inspection & Audit
The rebar3_nova plugin includes tools for generating API documentation, inspecting your application's configuration, and auditing security. This chapter covers all three.
OpenAPI documentation
Prerequisites
For the OpenAPI generator to produce schema definitions, you need JSON schema files in priv/schemas/. If you used nova gen_resource (see JSON API with Generators) these were created for you. Otherwise create them by hand:
mkdir -p priv/schemas
priv/schemas/post.json:
{
"$schema": "http://json-schema.org/draft-07/schema#",
"type": "object",
"properties": {
"id": { "type": "integer", "description": "Unique identifier" },
"title": { "type": "string", "description": "Post title" },
"body": { "type": "string", "description": "Post body" },
"status": { "type": "string", "enum": ["draft", "published", "archived"] }
},
"required": ["title", "body"]
}
Generating the spec
Run the OpenAPI generator:
rebar3 nova openapi
===> Generated openapi.json
===> Generated swagger.html
This reads your compiled routes and JSON schemas, then produces two files:
- openapi.json — the OpenAPI 3.0.3 specification
- swagger.html — a standalone Swagger UI page
Customize the output:
rebar3 nova openapi \
--output priv/assets/openapi.json \
--title "Blog API" \
--api-version 1.0.0
| Flag | Default | Description |
|---|---|---|
| --output | openapi.json | Output file path |
| --title | app name | API title in the spec |
| --api-version | 0.1.0 | API version string |
What gets generated
The generator inspects every route registered with Nova. For each route it creates a path entry with the correct HTTP method, operation ID, path parameters, and response schema. It skips static file handlers and error controllers.
A snippet from a generated spec:
{
"openapi": "3.0.3",
"info": {
"title": "Blog API",
"version": "1.0.0"
},
"paths": {
"/api/posts": {
"get": {
"operationId": "blog_posts_controller.index",
"responses": {
"200": {
"description": "OK",
"content": {
"application/json": {
"schema": { "$ref": "#/components/schemas/post" }
}
}
}
}
},
"post": {
"operationId": "blog_posts_controller.create",
"requestBody": {
"content": {
"application/json": {
"schema": { "$ref": "#/components/schemas/post" }
}
}
},
"responses": {
"201": { "description": "Created" }
}
}
}
}
}
Swagger UI
The generated swagger.html loads the Swagger UI from a CDN and points it at your openapi.json. If you place both files in priv/assets/, you can serve them through Nova by adding a static route:
{"/docs/[...]", cowboy_static, {priv_dir, blog, "assets"}}
Then navigate to http://localhost:8080/docs/swagger.html to browse your API interactively.
Auto-generating on release
The nova release command automatically regenerates the OpenAPI spec before building a release:
rebar3 nova release
===> Generated priv/assets/openapi.json
===> Generated priv/assets/swagger.html
===> Release successfully assembled: _build/prod/rel/blog
This means your deployed application always has up-to-date API documentation bundled in.
Inspection tools
View configuration
The nova config command displays all Nova configuration values with their defaults:
rebar3 nova config
=== Nova Configuration ===
bootstrap_application blog
environment dev
cowboy_configuration #{port => 8080}
plugins [{pre_request,nova_request_plugin,
#{decode_json_body => true,
read_urlencoded_body => true}}]
json_lib thoas (default)
use_stacktrace true
dispatch_backend persistent_term (default)
Keys showing (default) are using the built-in default rather than an explicit setting.
| Key | Default | Description |
|---|---|---|
| bootstrap_application | (required) | Main application to bootstrap |
| environment | dev | Current environment |
| cowboy_configuration | #{port => 8080} | Cowboy listener settings |
| plugins | [] | Global middleware plugins |
| json_lib | thoas | JSON encoding library |
| use_stacktrace | false | Include stacktraces in error responses |
| dispatch_backend | persistent_term | Backend for route dispatch storage |
Inspect middleware chains
The nova middleware command shows the global and per-route-group plugin chains:
rebar3 nova middleware
=== Global Plugins ===
pre_request: nova_request_plugin #{decode_json_body => true,
read_urlencoded_body => true}
=== Route Groups (blog_router) ===
Group: prefix= security=false
Plugins:
(inherits global)
Routes:
GET /login -> blog_main_controller:login
GET /heartbeat -> (inline fun)
Group: prefix=/api security=false
Plugins:
(inherits global)
Routes:
GET /posts -> blog_posts_controller:index
POST /posts -> blog_posts_controller:create
GET /posts/:id -> blog_posts_controller:show
PUT /posts/:id -> blog_posts_controller:update
DELETE /posts/:id -> blog_posts_controller:delete
Listing routes
The nova routes command displays the compiled routing tree:
rebar3 nova routes
Host: '_'
├─ /api
│ ├─ GET /posts (blog, blog_posts_controller:index/1)
│ ├─ GET /posts/:id (blog, blog_posts_controller:show/1)
│ ├─ POST /posts (blog, blog_posts_controller:create/1)
│ ├─ PUT /posts/:id (blog, blog_posts_controller:update/1)
│ └─ DELETE /posts/:id (blog, blog_posts_controller:delete/1)
├─ GET /login (blog, blog_main_controller:login/1)
└─ GET /heartbeat
Security audit
The nova audit command scans your routes and flags potential security issues:
rebar3 nova audit
=== Security Audit ===
WARNINGS:
POST /api/posts (blog_posts_controller) has no security
PUT /api/posts/:id (blog_posts_controller) has no security
DELETE /api/posts/:id (blog_posts_controller) has no security
INFO:
GET /login (blog_main_controller) has no security
GET /heartbeat has no security
GET /api/posts (blog_posts_controller) has no security
Summary: 3 warning(s), 3 info(s)
The audit classifies findings into two levels:
- WARNINGS — mutation methods (POST, PUT, DELETE, PATCH) without security, wildcard method handlers
- INFO — GET routes without security (common for public endpoints but worth reviewing)
Run rebar3 nova audit before deploying to make sure you haven't left endpoints unprotected by mistake.
To fix the warnings, add a security callback to the route group:
#{prefix => "/api",
security => fun blog_auth:validate_token/1,
routes => [
{"/posts", fun blog_posts_controller:index/1, #{methods => [get]}},
{"/posts/:id", fun blog_posts_controller:show/1, #{methods => [get]}},
{"/posts", fun blog_posts_controller:create/1, #{methods => [post]}},
{"/posts/:id", fun blog_posts_controller:update/1, #{methods => [put]}},
{"/posts/:id", fun blog_posts_controller:delete/1, #{methods => [delete]}}
]}
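The blog_auth:validate_token/1 function referenced above is whatever your application defines. A minimal sketch, assuming a bearer token in the Authorization header — the token store here is a placeholder, and the return shape follows the security-function contract from the cheat sheet ({true, AuthData} to allow, false to deny):

```erlang
-module(blog_auth).
-export([validate_token/1]).

%% Security callback: {true, AuthData} allows the request and makes
%% AuthData available to the controller; false denies it.
validate_token(#{headers := #{<<"authorization">> := <<"Bearer ", Token/binary>>}}) ->
    case lookup_token(Token) of
        {ok, UserId} -> {true, #{user_id => UserId}};
        error -> false
    end;
validate_token(_Req) ->
    false.

%% Placeholder token store — replace with a real lookup (database, JWT, ...).
lookup_token(<<"valid-token">>) -> {ok, 1};
lookup_token(_) -> error.
```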
Command summary
| Command | Purpose |
|---|---|
| rebar3 nova openapi | Generate OpenAPI 3.0.3 spec + Swagger UI |
| rebar3 nova config | Show Nova configuration with defaults |
| rebar3 nova middleware | Show global and per-group plugin chains |
| rebar3 nova audit | Find routes missing security callbacks |
| rebar3 nova routes | Display the compiled routing tree |
| rebar3 nova release | Build release with auto-generated OpenAPI |
Use config to verify settings, middleware to trace request processing, audit to check security coverage, and routes to see the endpoint map.
Next, let's learn how to write custom plugins and handle CORS.
Custom Plugins and CORS
In the Plugins chapter we saw how Nova's built-in plugins work. Now let's build custom plugins and set up CORS for our blog API.
The nova_plugin behaviour
Every callback in nova_plugin is optional — implement only what you need. A plugin registered as pre_request must export pre_request/4; one registered as post_request must export post_request/4.
Request callbacks
-callback pre_request(Req, Env, Options, State) ->
{ok, Req, State} | %% Continue to the next plugin
{break, Req, State} | %% Skip remaining plugins, go to controller
{stop, Req, State} | %% Stop entirely, plugin handles the response
{error, Reason}. %% Trigger a 500 error
-callback post_request(Req, Env, Options, State) ->
{ok, Req, State} |
{break, Req, State} |
{stop, Req, State} |
{error, Reason}.
-callback plugin_info() ->
#{title := binary(), version := binary(), url := binary(),
authors := [binary()], description := binary(),
options => [{atom(), binary()}]}.
Lifecycle callbacks: init/0 and stop/1
Two optional callbacks manage global, long-lived state that persists across requests:
-callback init() -> State :: any().
-callback stop(State :: any()) -> ok.
init/0 is called once when the plugin is loaded. The state it returns is passed as the State argument to every pre_request/4 and post_request/4 call. stop/1 is called when the application shuts down and receives the current state for cleanup.
This is useful when a plugin needs a long-lived resource — an ETS table, a connection pool reference, or a background process:
-module(blog_stats_plugin).
-behaviour(nova_plugin).
-export([init/0,
stop/1,
pre_request/4,
post_request/4,
plugin_info/0]).
init() ->
Tab = ets:new(request_stats, [public, set]),
ets:insert(Tab, {total_requests, 0}),
#{table => Tab}.
stop(#{table := Tab}) ->
ets:delete(Tab),
ok.
pre_request(Req, _Env, _Options, #{table := Tab} = State) ->
ets:update_counter(Tab, total_requests, 1),
{ok, Req, State}.
post_request(Req, _Env, _Options, State) ->
{ok, Req, State}.
plugin_info() ->
#{title => <<"blog_stats_plugin">>,
version => <<"1.0.0">>,
url => <<"https://github.com/novaframework/nova">>,
authors => [<<"Blog">>],
description => <<"Tracks total request count in ETS">>}.
Without init/0, the plugin state starts as undefined. Without stop/1, no cleanup runs on shutdown.
Example: Request logger
A plugin that logs every request with method, path, and response time.
Create src/plugins/blog_logger_plugin.erl:
-module(blog_logger_plugin).
-behaviour(nova_plugin).
-include_lib("kernel/include/logger.hrl").
-export([pre_request/4,
post_request/4,
plugin_info/0]).
pre_request(Req, _Env, _Options, State) ->
StartTime = erlang:monotonic_time(millisecond),
{ok, Req#{start_time => StartTime}, State}.
post_request(Req, _Env, _Options, State) ->
StartTime = maps:get(start_time, Req, 0),
Duration = erlang:monotonic_time(millisecond) - StartTime,
Method = cowboy_req:method(Req),
Path = cowboy_req:path(Req),
?LOG_INFO("~s ~s completed in ~pms", [Method, Path, Duration]),
{ok, Req, State}.
plugin_info() ->
    #{title => <<"blog_logger_plugin">>,
      version => <<"1.0.0">>,
      url => <<"https://github.com/novaframework/nova">>,
      authors => [<<"Blog">>],
      description => <<"Logs request method, path and duration">>}.
Register it as both pre-request and post-request in sys.config:
{plugins, [
{pre_request, nova_request_plugin, #{decode_json_body => true,
read_urlencoded_body => true}},
{pre_request, blog_logger_plugin, #{}},
{post_request, blog_logger_plugin, #{}}
]}
Output:
[info] GET /api/posts completed in 3ms
[info] POST /api/posts completed in 12ms
Example: Rate limiter
A plugin that limits requests per IP address using ETS:
-module(blog_rate_limit_plugin).
-behaviour(nova_plugin).
-export([pre_request/4,
post_request/4,
plugin_info/0]).
pre_request(Req, _Env, Options, State) ->
MaxRequests = maps:get(max_requests, Options, 100),
WindowMs = maps:get(window_ms, Options, 60000),
{IP, _Port} = cowboy_req:peer(Req),
Key = {rate_limit, IP},
Now = erlang:monotonic_time(millisecond),
case ets:lookup(blog_rate_limits, Key) of
[{Key, Count, WindowStart}] when Now - WindowStart < WindowMs ->
if Count >= MaxRequests ->
Reply = cowboy_req:reply(429,
#{<<"content-type">> => <<"application/json">>},
<<"{\"error\":\"too many requests\"}">>,
Req),
{stop, Reply, State};
true ->
ets:update_element(blog_rate_limits, Key, {2, Count + 1}),
{ok, Req, State}
end;
_ ->
ets:insert(blog_rate_limits, {Key, 1, Now}),
{ok, Req, State}
end.
post_request(Req, _Env, _Options, State) ->
{ok, Req, State}.
plugin_info() ->
    #{title => <<"blog_rate_limit_plugin">>,
      version => <<"1.0.0">>,
      url => <<"https://github.com/novaframework/nova">>,
      authors => [<<"Blog">>],
      description => <<"Simple IP-based rate limiting">>,
      options => [{max_requests, <<"Max requests per window">>},
                  {window_ms, <<"Window length in milliseconds">>}]}.
Create the ETS table on application start in src/blog_app.erl:
start(_StartType, _StartArgs) ->
ets:new(blog_rate_limits, [named_table, public, set]),
blog_sup:start_link().
When the limit is exceeded, the plugin returns {stop, Reply, State} — a 429 response is sent and the controller is never called.
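One thing the plugin above never does is evict stale entries, so the table grows by one row per client IP. A small sweep you could run periodically (a hypothetical helper module, not part of Nova) deletes rows whose window expired:

```erlang
-module(blog_rate_limit_cleanup).
-export([cleanup/1]).

%% Delete rate-limit rows whose window started more than WindowMs ago.
%% Rows have the shape {{rate_limit, IP}, Count, WindowStart}, as written
%% by blog_rate_limit_plugin. Returns the number of deleted rows.
cleanup(WindowMs) ->
    Cutoff = erlang:monotonic_time(millisecond) - WindowMs,
    ets:select_delete(blog_rate_limits,
                      [{{'_', '_', '$1'}, [{'<', '$1', Cutoff}], [true]}]).
```

Call it from a timer:send_interval-driven process or a simple spawned loop.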
CORS
If your API is consumed by a frontend on a different domain, the browser blocks requests unless your server sends the right CORS (Cross-Origin Resource Sharing) headers. Nova includes a CORS plugin.
Using nova_cors_plugin
Add it to your plugin configuration:
{plugins, [
{pre_request, nova_cors_plugin, #{allow_origins => <<"*">>}},
{pre_request, nova_request_plugin, #{decode_json_body => true}}
]}
Using <<"*">> allows requests from any origin. For production, restrict this to your frontend's domain:
{pre_request, nova_cors_plugin, #{allow_origins => <<"https://myblog.com">>}}
The plugin adds CORS headers to every response and handles preflight OPTIONS requests automatically.
Per-route CORS
Apply CORS only to API routes:
routes(_Environment) ->
[
%% API routes with CORS
#{prefix => "/api",
plugins => [
{pre_request, nova_cors_plugin, #{allow_origins => <<"https://myblog.com">>}},
{pre_request, nova_request_plugin, #{decode_json_body => true}}
],
routes => [
{"/posts", fun blog_posts_controller:index/1, #{methods => [get]}},
{"/posts", fun blog_posts_controller:create/1, #{methods => [post]}},
{"/posts/:id", fun blog_posts_controller:show/1, #{methods => [get]}},
{"/posts/:id", fun blog_posts_controller:update/1, #{methods => [put]}},
{"/posts/:id", fun blog_posts_controller:delete/1, #{methods => [delete]}}
]
},
%% HTML routes without CORS
#{prefix => "",
plugins => [
{pre_request, nova_request_plugin, #{read_urlencoded_body => true}}
],
routes => [
{"/login", fun blog_main_controller:login/1, #{methods => [get, post]}}
]
}
].
When plugins is set on a route group, it overrides the global plugin configuration for those routes.
Custom CORS plugin
The built-in plugin hardcodes Allow-Headers and Allow-Methods to *. For more control:
-module(blog_cors_plugin).
-behaviour(nova_plugin).
-export([pre_request/4,
post_request/4,
plugin_info/0]).
pre_request(Req, _Env, Options, State) ->
Origins = maps:get(allow_origins, Options, <<"*">>),
Methods = maps:get(allow_methods, Options, <<"GET, POST, PUT, DELETE, OPTIONS">>),
Headers = maps:get(allow_headers, Options, <<"Content-Type, Authorization">>),
MaxAge = maps:get(max_age, Options, <<"86400">>),
Req1 = cowboy_req:set_resp_header(<<"access-control-allow-origin">>, Origins, Req),
Req2 = cowboy_req:set_resp_header(<<"access-control-allow-methods">>, Methods, Req1),
Req3 = cowboy_req:set_resp_header(<<"access-control-allow-headers">>, Headers, Req2),
Req4 = cowboy_req:set_resp_header(<<"access-control-max-age">>, MaxAge, Req3),
Req5 = case maps:get(allow_credentials, Options, false) of
true ->
cowboy_req:set_resp_header(
<<"access-control-allow-credentials">>, <<"true">>, Req4);
false ->
Req4
end,
case cowboy_req:method(Req5) of
<<"OPTIONS">> ->
Reply = cowboy_req:reply(204, Req5),
{stop, Reply, State};
_ ->
{ok, Req5, State}
end.
post_request(Req, _Env, _Options, State) ->
{ok, Req, State}.
plugin_info() ->
    #{title => <<"blog_cors_plugin">>,
      version => <<"1.0.0">>,
      url => <<"https://github.com/novaframework/nova">>,
      authors => [<<"Blog">>],
      description => <<"Configurable CORS plugin">>,
      options => [{allow_origins, <<"Allowed origins">>},
                  {allow_methods, <<"Allowed methods">>},
                  {allow_headers, <<"Allowed request headers">>},
                  {max_age, <<"Preflight cache lifetime">>},
                  {allow_credentials, <<"Allow credentials">>}]}.
Configure with all options:
{pre_request, blog_cors_plugin, #{
allow_origins => <<"https://myblog.com">>,
allow_methods => <<"GET, POST, PUT, DELETE">>,
allow_headers => <<"Content-Type, Authorization, X-Request-ID">>,
max_age => <<"3600">>,
allow_credentials => true
}}
Testing CORS
Verify headers with curl:
# Check preflight response
curl -v -X OPTIONS localhost:8080/api/posts \
-H "Origin: https://myblog.com" \
-H "Access-Control-Request-Method: POST"
# Check actual response headers
curl -v localhost:8080/api/posts \
-H "Origin: https://myblog.com"
You should see the Access-Control-Allow-Origin header in the response.
Plugin return values
| Return | Effect |
|---|---|
| {ok, Req, State} | Continue to the next plugin or controller |
| {break, Req, State} | Skip remaining plugins in this phase, go to controller |
| {stop, Req, State} | Stop everything — plugin must have already sent a response |
| {error, Reason} | Trigger a 500 error page |
For the final chapter, let's add observability with OpenTelemetry.
OpenTelemetry
When your Nova application is in production, you need visibility into what it is doing. OpenTelemetry is the industry standard for collecting traces and metrics. The opentelemetry_nova library gives you automatic instrumentation — every HTTP request gets a trace span and metrics are recorded without manual instrumentation code.
What you get
Once configured, opentelemetry_nova provides:
Distributed traces — Every incoming request creates a span with attributes like method, path, status code, controller, and action. If the caller sends a W3C traceparent header, the span is linked to the upstream trace.
HTTP metrics — Four metrics recorded for every request:
| Metric | Type | Description |
|---|---|---|
| http.server.request.duration | Histogram | Request duration in seconds |
| http.server.active_requests | Gauge | Number of in-flight requests |
| http.server.request.body.size | Histogram | Request body size in bytes |
| http.server.response.body.size | Histogram | Response body size in bytes |
Adding the dependency
Add opentelemetry_nova and the OpenTelemetry SDK to rebar.config:
{deps, [
nova,
{kura, "~> 1.0"},
{opentelemetry, "~> 1.5"},
{opentelemetry_experimental, "~> 0.5"},
{opentelemetry_exporter, "~> 1.8"},
opentelemetry_nova
]}.
Configuring the stream handler
opentelemetry_nova uses a Cowboy stream handler to intercept requests. Add otel_nova_stream_h to the Nova cowboy configuration:
{nova, [
{cowboy_configuration, #{
port => 8080,
stream_handlers => [otel_nova_stream_h, cowboy_stream_h]
}}
]}
The order matters — otel_nova_stream_h must come before cowboy_stream_h to wrap the full request lifecycle.
Setting up tracing
Configure the SDK to export traces via OTLP HTTP:
{opentelemetry, [
{span_processor, batch},
{traces_exporter, {opentelemetry_exporter, #{
protocol => http_protobuf,
endpoints => [#{host => "localhost", port => 4318, path => "/v1/traces"}]
}}}
]},
{opentelemetry_exporter, [
{otlp_protocol, http_protobuf},
{otlp_endpoint, "http://localhost:4318"}
]}
This sends traces to any OTLP-compatible backend — Grafana Tempo, Jaeger, or any OpenTelemetry Collector.
Setting up Prometheus metrics
Configure a metric reader with the Prometheus exporter:
{opentelemetry_experimental, [
{readers, [
#{module => otel_metric_reader,
config => #{
export_interval_ms => 5000,
exporter => {otel_nova_prom_exporter, #{}}
}}
]}
]}
In your application's start/2, initialize metrics and start the Prometheus HTTP server:
start(_StartType, _StartArgs) ->
opentelemetry_nova:setup(#{prometheus => #{port => 9464}}),
blog_sup:start_link().
This starts a Prometheus endpoint at http://localhost:9464/metrics. Point your Prometheus server or Grafana Agent at it.
If you only want metrics without the Prometheus HTTP server (e.g., pushing via OTLP instead), call opentelemetry_nova:setup() with no arguments.
Span enrichment with the Nova plugin
The stream handler creates spans with basic HTTP attributes. To also get the controller and action on each span, add the otel_nova_plugin as a pre-request plugin:
routes(_Environment) ->
[#{
plugins => [{pre_request, otel_nova_plugin, #{}}],
routes => [
{"/posts", fun blog_posts_controller:index/1, #{methods => [get]}},
{"/posts/:id", fun blog_posts_controller:show/1, #{methods => [get]}}
]
}].
Spans get enriched with nova.app, nova.controller, and nova.action attributes, and the span name becomes GET blog_posts_controller:index instead of just HTTP GET.
Kura query telemetry
Kura has its own telemetry for database queries. Enable it in sys.config:
{kura, [{log, true}]}
This logs every query with its SQL, parameters, duration, and row count. For custom handling, pass an {M, F} tuple:
{kura, [{log, {blog_telemetry, log_query}}]}
Combined with OpenTelemetry HTTP spans, you get end-to-end visibility from the HTTP request through the database query and back.
Full sys.config example
[
{nova, [
{cowboy_configuration, #{
port => 8080,
stream_handlers => [otel_nova_stream_h, cowboy_stream_h]
}}
]},
{kura, [{log, true}]},
{opentelemetry, [
{span_processor, batch},
{traces_exporter, {opentelemetry_exporter, #{
protocol => http_protobuf,
endpoints => [#{host => "localhost", port => 4318, path => "/v1/traces"}]
}}}
]},
{opentelemetry_experimental, [
{readers, [
#{module => otel_metric_reader,
config => #{
export_interval_ms => 5000,
exporter => {otel_nova_prom_exporter, #{}}
}}
]}
]},
{opentelemetry_exporter, [
{otlp_protocol, http_protobuf},
{otlp_endpoint, "http://localhost:4318"}
]}
].
Verifying it works
Make some requests:
curl http://localhost:8080/api/posts
curl -X POST -H "Content-Type: application/json" \
-d '{"title":"Test","body":"Hello"}' http://localhost:8080/api/posts
Check the Prometheus endpoint:
curl http://localhost:9464/metrics
You should see output like:
# HELP http_server_request_duration_seconds Duration of HTTP server requests
# TYPE http_server_request_duration_seconds histogram
http_server_request_duration_seconds_bucket{method="GET",...,le="0.005"} 1
...
For traces, check your configured backend (Tempo, Jaeger, etc.).
How it works under the hood
The otel_nova_stream_h stream handler sits in Cowboy's stream pipeline. When a request arrives it:
- Extracts trace context from the traceparent header
- Creates a server span named HTTP <method>
- Sets request attributes (method, path, scheme, host, port, peer address, user agent)
- Increments the active requests counter
When the request terminates it:
- Sets the response status code attribute
- Marks the span as error if status >= 500
- Ends the span
- Records duration, request body size, and response body size metrics
- Decrements the active requests counter
Running with a full observability stack
The nova_otel_demo repository has a complete example with Docker Compose including:
- OpenTelemetry Collector — receives traces and metrics via OTLP
- Grafana Tempo — stores and queries traces
- Grafana Mimir — stores Prometheus metrics
- Grafana — dashboards and trace exploration
Clone it and run docker-compose up from the docker/ directory.
That wraps up the main content. For quick reference, see the Erlang Essentials appendix and the Cheat Sheet.
Erlang Essentials
This appendix is not a full Erlang tutorial. It provides a quick reference for the Erlang concepts used in this book and links to comprehensive learning resources.
Learning resources
- Learn You Some Erlang for Great Good! — The best free online book for learning Erlang from scratch. Covers everything from syntax to OTP.
- Adopting Erlang — Practical guide for teams adopting Erlang, covering development setup, building, and running in production.
- Erlang/OTP Documentation — Official reference documentation.
Installing Erlang and Rebar3
We recommend mise for managing tool versions:
# Install mise (if not already installed)
curl https://mise.run | sh
# Install Erlang and rebar3
mise use erlang@26
mise use rebar@3.23
# Verify
erl -eval 'io:format("~s~n", [erlang:system_info(otp_release)]), halt().' -noshell
rebar3 version
Alternatively, use asdf:
asdf plugin add erlang
asdf plugin add rebar
asdf install erlang 26.2.2
asdf install rebar 3.22.1
Quick reference
Atoms
Atoms are constants. They start with a lowercase letter or are quoted with single quotes:
ok, error, true, false, undefined
'Content-Type', 'my-atom'
Binaries and strings
Erlang has two string types. Binaries (double-quoted text wrapped in <<>>) are what you will use most:
<<"hello">> %% binary string
"hello" %% list of integers (less common in Nova)
Tuples
Fixed-size containers, often used for tagged return values:
{ok, Value}
{error, not_found}
{json, #{users => []}}
Maps
Key-value data structures. Nova uses maps extensively for requests, responses, and configuration:
%% Creating
#{name => <<"Alice">>, age => 30}
%% Pattern matching
#{name := Name} = Map
%% Updating
Map#{age => 31}
Pattern matching
Erlang's most powerful feature. Used in function heads, case expressions, and assignments:
%% Function clause matching
handle(#{method := <<"GET">>} = Req) -> get_handler(Req);
handle(#{method := <<"POST">>} = Req) -> post_handler(Req).
%% Case expression
case blog_repo:get(post, Id) of
{ok, Post} -> handle_post(Post);
{error, not_found} -> not_found
end.
Lists and list comprehensions
[1, 2, 3]
[Head | Tail] = [1, 2, 3] %% Head = 1, Tail = [2, 3]
%% List comprehension
[X * 2 || X <- [1, 2, 3]] %% [2, 4, 6]
%% With maps
[row_to_map(R) || R <- Rows]
Modules and functions
-module(my_module).
-export([my_function/1]).
my_function(Arg) ->
%% function body
ok.
Anonymous functions (funs)
Used extensively in Nova for route handlers and security functions:
fun my_module:my_function/1 %% Reference to named function
fun(X) -> X + 1 end %% Anonymous function
fun(_) -> {status, 200} end %% Ignore argument
OTP in 5 minutes
Applications
An OTP application is a component with a defined start/stop lifecycle. Your Nova project is an application. It has:
- An .app.src file describing metadata and dependencies
- An _app.erl module implementing the application behaviour
- A _sup.erl module implementing the supervisor behaviour
Supervisors
Supervisors manage child processes and restart them if they crash. The generated blog_sup.erl is your application's supervisor.
gen_server
A generic server process. Used for stateful workers:
-module(my_server).
-behaviour(gen_server).
-export([start_link/0, init/1, handle_call/3, handle_cast/2, handle_info/2]).
start_link() ->
gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).
init([]) ->
{ok, #{}}. %% Initial state
handle_call(Request, _From, State) ->
{reply, ok, State}.
handle_cast(_Msg, State) ->
{noreply, State}.
handle_info(_Info, State) ->
{noreply, State}.
Rebar3 basics
rebar3 compile # Compile the project
rebar3 shell # Start an interactive shell
rebar3 eunit # Run EUnit tests
rebar3 ct # Run Common Test suites
rebar3 as prod release # Build a production release
rebar3 as prod tar # Build a release tarball
rebar3 nova serve # Development server with hot-reload
rebar3 nova routes # List all registered routes
Cheat Sheet
Quick reference for Nova's APIs, return values, configuration, and Kura's database layer.
Controller return tuples
| Return | Description |
|---|---|
| {ok, Variables} | Render the default template with variables |
| {ok, Variables, #{view => Name}} | Render a specific template |
| {ok, Variables, #{view => Name, status_code => Code}} | Render template with custom status |
| {json, Data} | JSON response (status 200) |
| {json, StatusCode, Headers, Body} | JSON response with custom status and headers |
| {status, StatusCode} | Bare status code response |
| {status, StatusCode, Headers, Body} | Status with headers and body |
| {redirect, Path} | HTTP redirect |
| {sendfile, StatusCode, Headers, FilePath, Offset, Length} | Send a file |
Route configuration
#{
prefix => "/api", %% Path prefix (string)
security => false | fun Module:Function/1, %% Security function
plugins => [{Phase, Module, Options}], %% Per-route plugins (optional)
routes => [
{Path, fun Module:Function/1, #{methods => [get, post, put, delete]}},
{Path, WebSocketModule, #{protocol => ws}}, %% WebSocket route
{StatusCode, fun Module:Function/1, #{}} %% Error handler
]
}
Path parameters
{"/users/:id", fun my_controller:show/1, #{methods => [get]}}
%% Access in controller:
show(#{bindings := #{<<"id">> := Id}}) -> ...
Security functions
%% Return {true, AuthData} to allow, false to deny
my_security(#{params := Params}) ->
case check_credentials(Params) of
ok -> {true, #{user => <<"alice">>}};
_ -> false
end.
%% AuthData is available in the controller as auth_data
index(#{auth_data := #{user := User}}) -> ...
Plugin callbacks
-behaviour(nova_plugin).
pre_request(Req, Env, Options, State) ->
{ok, Req, State} | %% Continue
{break, Req, State} | %% Skip remaining plugins
{stop, Req, State} | %% Stop — plugin sent response
{error, Reason}. %% 500 error
post_request(Req, Env, Options, State) ->
%% Same return values as pre_request
plugin_info() ->
    #{title => Title, version => Version, url => Url,
      authors => Authors, description => Description,
      options => OptionPairs}.  %% OptionPairs :: [{atom(), binary()}]
Plugin configuration
%% Global (sys.config)
{plugins, [
{pre_request, Module, Options},
{post_request, Module, Options}
]}
%% Per-route (in router)
#{plugins => [{pre_request, Module, Options}],
routes => [...]}
Session API
nova_session:get(Req, <<"key">>) -> {ok, Value} | {error, not_found}
nova_session:set(Req, <<"key">>, Value) -> ok
nova_session:delete(Req) -> {ok, Req1}
nova_session:delete(Req, <<"key">>) -> {ok, Req1}
nova_session:generate_session_id() -> {ok, SessionId}
Cookie setup
Req1 = cowboy_req:set_resp_cookie(<<"session_id">>, SessionId, Req, #{
path => <<"/">>,
http_only => true,
secure => true,
max_age => 86400
}).
WebSocket callbacks
-behaviour(nova_websocket).
init(State) ->
{ok, State}. %% Accept connection
websocket_handle({text, Msg}, State) ->
{ok, State} | %% Do nothing
{reply, {text, Response}, State} | %% Send message
{stop, State}. %% Close connection
websocket_info(ErlangMsg, State) ->
%% Same return values as websocket_handle
WebSocket route
{"/ws", my_ws_handler, #{protocol => ws}}
Pub/Sub API
nova_pubsub:join(Channel)
nova_pubsub:leave(Channel)
nova_pubsub:broadcast(Channel, Topic, Payload)
nova_pubsub:local_broadcast(Channel, Topic, Payload)
nova_pubsub:get_members(Channel)
nova_pubsub:get_local_members(Channel)
%% Message format received by processes:
{nova_pubsub, Channel, SenderPid, Topic, Payload}
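For instance, a WebSocket handler can join a channel on connect and relay broadcasts to the client in websocket_info/2 (the channel name is illustrative; this reuses the callbacks from the WebSocket section above):

```erlang
%% In a nova_websocket handler: subscribe on connect, relay broadcasts.
init(State) ->
    nova_pubsub:join(<<"comments">>),
    {ok, State}.

%% Match the pub/sub message format and push the payload to the client
websocket_info({nova_pubsub, <<"comments">>, _Sender, _Topic, Payload}, State) ->
    {reply, {text, Payload}, State};
websocket_info(_Other, State) ->
    {ok, State}.
```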
Nova request plugin options
{pre_request, nova_request_plugin, #{
decode_json_body => true, %% Decode JSON request bodies
read_urlencoded_body => true, %% Decode URL-encoded form data
read_body => true %% Read raw body
}}
Nova configuration (sys.config)
{nova, [
{environment, dev | prod},
{bootstrap_application, my_app},
{dev_mode, true | false},
{use_stacktrace, true | false},
{session_manager, nova_session_ets},
{render_error_pages, true | false},
{cowboy_configuration, #{
port => 8080,
use_ssl => false,
ssl_port => 8443,
ssl_options => #{certfile => "...", keyfile => "..."},
stream_handlers => [cowboy_stream_h]
}},
{plugins, [...]}
]}
Sub-applications
{my_app, [
{nova_apps, [
{nova_admin, #{prefix => "/admin"}},
{other_app, #{prefix => "/other"}}
]}
]}
Kura — Schema definition
-module(my_schema).
-behaviour(kura_schema).
-include_lib("kura/include/kura.hrl").
-export([table/0, fields/0, primary_key/0, associations/0, embeds/0]).
table() -> <<"my_table">>.
primary_key() -> id.
fields() ->
[
#kura_field{name = id, type = id, primary_key = true, nullable = false},
#kura_field{name = name, type = string, nullable = false},
#kura_field{name = status, type = {enum, [active, inactive]}},
#kura_field{name = metadata, type = {embed, embeds_one, metadata_schema}},
#kura_field{name = inserted_at, type = utc_datetime},
#kura_field{name = updated_at, type = utc_datetime}
].
associations() ->
[
#kura_assoc{name = author, type = belongs_to, schema = user, foreign_key = author_id},
#kura_assoc{name = comments, type = has_many, schema = comment, foreign_key = post_id},
#kura_assoc{name = tags, type = many_to_many, schema = tag,
join_through = <<"posts_tags">>, join_keys = {post_id, tag_id}}
].
embeds() ->
[#kura_embed{name = metadata, type = embeds_one, schema = metadata_schema}].
Kura field types
| Type | PostgreSQL | Erlang |
|---|---|---|
| id | BIGSERIAL | integer |
| integer | INTEGER | integer |
| float | DOUBLE PRECISION | float |
| string | VARCHAR(255) | binary |
| text | TEXT | binary |
| boolean | BOOLEAN | boolean |
| date | DATE | {Y, M, D} |
| utc_datetime | TIMESTAMP | {{Y,M,D},{H,Mi,S}} |
| uuid | UUID | binary |
| jsonb | JSONB | map/list |
| {enum, [atoms]} | VARCHAR(255) | atom |
| {array, Type} | Type[] | list |
| {embed, embeds_one, Mod} | JSONB | map |
| {embed, embeds_many, Mod} | JSONB | list of maps |
Kura — Changeset API
%% Create a changeset
CS = kura_changeset:cast(SchemaModule, ExistingData, Params, AllowedFields).
%% Validations
kura_changeset:validate_required(CS, [field1, field2])
kura_changeset:validate_format(CS, field, "regex")
kura_changeset:validate_length(CS, field, [{min, 3}, {max, 200}])
kura_changeset:validate_number(CS, field, [{greater_than, 0}])
kura_changeset:validate_inclusion(CS, field, [val1, val2, val3])
kura_changeset:validate_change(CS, field, fun(Val) -> ok | {error, Msg} end)
%% Constraint declarations
kura_changeset:unique_constraint(CS, field)
kura_changeset:foreign_key_constraint(CS, field)
kura_changeset:check_constraint(CS, ConstraintName, field, #{message => Msg})
%% Association/embed casting
kura_changeset:cast_assoc(CS, assoc_name)
kura_changeset:cast_assoc(CS, assoc_name, #{with => Fun})
kura_changeset:put_assoc(CS, assoc_name, Value)
kura_changeset:cast_embed(CS, embed_name)
%% Changeset helpers
kura_changeset:get_change(CS, field) -> Value | undefined
kura_changeset:get_field(CS, field) -> Value | undefined
kura_changeset:put_change(CS, field, Val) -> CS1
kura_changeset:add_error(CS, field, Msg) -> CS1
kura_changeset:apply_changes(CS) -> DataMap
kura_changeset:apply_action(CS, Action) -> {ok, Data} | {error, CS}
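A typical pipeline chains cast and several validations before handing the changeset to the repository. A sketch using the blog's user schema (the exact field names are illustrative):

```erlang
%% Build, validate, and persist a changeset for a new user (sketch).
CS0 = kura_changeset:cast(user, #{}, Params, [email, name, age]),
CS1 = kura_changeset:validate_required(CS0, [email, name]),
CS2 = kura_changeset:validate_length(CS1, name, [{min, 3}, {max, 200}]),
CS3 = kura_changeset:validate_number(CS2, age, [{greater_than, 0}]),
CS4 = kura_changeset:unique_constraint(CS3, email),
case blog_repo:insert(CS4) of
    {ok, User}          -> {ok, User};
    {error, Changeset}  -> {error, Changeset}  %% validation or constraint errors
end.
```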
Schemaless changesets
Types = #{email => string, age => integer},
CS = kura_changeset:cast(Types, #{}, Params, [email, age]).
Kura — Query builder
Q = kura_query:from(schema_module),
%% Where conditions (alternative forms; pick the ones you need)
Q1 = kura_query:where(Q, {field, value}),               %% =
Q1 = kura_query:where(Q, {field, '>', value}),          %% comparison
Q1 = kura_query:where(Q, {field, in, [val1, val2]}),    %% IN
Q1 = kura_query:where(Q, {field, ilike, <<"%term%">>}), %% ILIKE
Q1 = kura_query:where(Q, {field, is_nil}),              %% IS NULL
Q1 = kura_query:where(Q, {'or', [{f1, v1}, {f2, v2}]}), %% OR
%% Ordering, pagination (each call chains on the previous query)
Q2 = kura_query:order_by(Q1, [{field, asc}]),
Q3 = kura_query:limit(Q2, 10),
Q4 = kura_query:offset(Q3, 20),
%% Preloading associations
Q5 = kura_query:preload(Q4, [author, {comments, [author]}]).
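Putting the pieces together: a paginated listing of published posts, newest first, with authors preloaded (schema and field names follow the blog example):

```erlang
%% Sketch: second page of published posts with authors preloaded.
Q0 = kura_query:from(post),
Q1 = kura_query:where(Q0, {status, published}),
Q2 = kura_query:order_by(Q1, [{inserted_at, desc}]),
Q3 = kura_query:limit(Q2, 10),
Q4 = kura_query:offset(Q3, 10),
Q5 = kura_query:preload(Q4, [author]),
{ok, Posts} = blog_repo:all(Q5).
```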
Kura — Repository API
%% Read
blog_repo:all(Query) -> {ok, [Map]}
blog_repo:get(Schema, Id) -> {ok, Map} | {error, not_found}
blog_repo:get_by(Schema, Clauses) -> {ok, Map} | {error, not_found}
blog_repo:one(Query) -> {ok, Map} | {error, not_found}
%% Write
blog_repo:insert(Changeset) -> {ok, Map} | {error, Changeset}
blog_repo:insert(Changeset, Opts) -> {ok, Map} | {error, Changeset}
blog_repo:update(Changeset) -> {ok, Map} | {error, Changeset}
blog_repo:delete(Changeset) -> {ok, Map} | {error, Changeset}
%% Bulk
blog_repo:insert_all(Schema, [Map]) -> {ok, Count}
blog_repo:update_all(Query, Updates) -> {ok, Count}
blog_repo:delete_all(Query) -> {ok, Count}
%% Preloading
blog_repo:preload(Schema, Records, Assocs) -> Records
%% Transactions
blog_repo:transaction(Fun) -> {ok, Result} | {error, Reason}
blog_repo:multi(Multi) -> {ok, Results} | {error, Step, Value, Completed}
Upsert options
blog_repo:insert(CS, #{on_conflict => {field, nothing}})
blog_repo:insert(CS, #{on_conflict => {field, replace_all}})
blog_repo:insert(CS, #{on_conflict => {field, {replace, [fields]}}})
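For instance, inserting a tag idempotently, keyed on a unique name column (a sketch; the tag schema and name field follow the blog example):

```erlang
%% Sketch: insert a tag, ignoring the write if the name already exists.
CS = kura_changeset:cast(tag, #{}, #{<<"name">> => <<"erlang">>}, [name]),
case blog_repo:insert(CS, #{on_conflict => {name, nothing}}) of
    {ok, Tag}           -> {ok, Tag};
    {error, Changeset}  -> {error, Changeset}
end.
```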
Kura — Multi (transaction pipelines)
M = kura_multi:new(),
M1 = kura_multi:insert(M, step_name, Changeset),
M2 = kura_multi:update(M1, step_name, fun(Results) -> Changeset end),
M3 = kura_multi:delete(M2, step_name, Changeset),
M4 = kura_multi:run(M3, step_name, fun(Results) -> {ok, Value} end),
{ok, #{step1 := V1, step2 := V2}} = blog_repo:multi(M4).
Common rebar3 commands
| Command | Description |
|---|---|
| rebar3 compile | Compile the project (also triggers Kura migration generation) |
| rebar3 shell | Start an interactive shell |
| rebar3 nova serve | Dev server with hot reload |
| rebar3 nova routes | List registered routes |
| rebar3 eunit | Run EUnit tests |
| rebar3 ct | Run Common Test suites |
| rebar3 do eunit, ct | Run both |
| rebar3 as prod release | Build a production release |
| rebar3 as prod tar | Build a release tarball |
| rebar3 dialyzer | Run the Dialyzer type checker |
rebar3_nova commands
| Command | Description |
|---|---|
| rebar3 nova gen_controller --name NAME | Generate a controller with stub actions |
| rebar3 nova gen_resource --name NAME | Generate controller + JSON schema + route hints |
| rebar3 nova gen_test --name NAME | Generate a Common Test suite |
| rebar3 nova openapi | Generate an OpenAPI 3.0.3 spec + Swagger UI |
| rebar3 nova config | Show Nova configuration with defaults |
| rebar3 nova middleware | Show global and per-group plugin chains |
| rebar3 nova audit | Find routes missing security callbacks |
| rebar3 nova release | Build a release with auto-generated OpenAPI |
rebar3_kura commands
| Command | Description |
|---|---|
| rebar3 kura setup --name REPO | Generate a repo module and migrations directory |
| rebar3 kura compile | Diff schemas against migrations and generate new migrations |
Generator options
# Controller with specific actions
rebar3 nova gen_controller --name products --actions list,show,create
# OpenAPI with custom output
rebar3 nova openapi --output priv/assets/openapi.json --title "My API" --api-version 1.0.0
# Kura setup with custom repo name
rebar3 kura setup --name my_repo