Introduction
What is Nova?
Nova is a web framework for Erlang/OTP. It handles routing, request processing, template rendering, sessions, and WebSockets — the core pieces you need to build web applications and APIs. Nova sits on top of Cowboy, the battle-tested Erlang HTTP server, and adds a structured layer for organizing your application.
Who this book is for
This book is for anyone who wants to build web applications with Erlang — whether you are an experienced developer exploring a new stack or a newcomer picking up Erlang for the first time. If you have built anything with another web framework (Express, Rails, Django, Phoenix, etc.) you will feel right at home, but it is not a requirement. Basic familiarity with HTTP and databases is enough to get started.
No prior Erlang experience is needed. The Erlang Essentials appendix covers the language fundamentals you will use throughout the book, and Learn You Some Erlang is an excellent free companion if you want a deeper introduction. You can start the book right away and refer back to these resources as you go.
What you'll build
Throughout this book you will build a blog platform step by step:
- A Nova application from scratch — project structure, routing, controllers, and plugins
- A database layer with Kura — schemas, migrations, changesets, associations, and advanced queries
- Authentication & sessions — login flows, security callbacks, role-based authorization
- An HTML frontend — ErlyDTL templates, layouts, forms with validation
- A JSON API — RESTful endpoints with code generators, OpenAPI documentation, error handling
- Real-time features with Arizona — live views, stateful components, differential rendering
- Real-time infrastructure — WebSockets, pub/sub, and a live comment section
- Email delivery with Hikyaku — registration confirmation, password reset, notifications
- Testing — unit tests with EUnit, integration tests with Common Test, real-time testing
- Production — configuration, OpenTelemetry observability, custom plugins, deployment
The blog has users who write posts, readers who leave comments, and tags for organizing content. This naturally exercises the full Nova ecosystem: Kura for the database layer, Arizona for real-time interactivity, Hikyaku for transactional email, and Nova's plugin system for cross-cutting concerns.
Prerequisites
Before starting, make sure you have:
- Erlang/OTP 28+ — install via mise (recommended), asdf, or your system package manager. OTP 28 is required for Arizona.
- Rebar3 — the Erlang build tool, also installable via mise/asdf
- Docker — for running PostgreSQL (we use Docker Compose throughout)
- A text editor and a terminal
See the Erlang Essentials appendix for detailed setup instructions.
How to read this book
The chapters are designed to be read in order. Each one builds on the previous — the application grows progressively from a bare project to a full-featured, deployed service. Code examples accumulate, so what you build in Part I is extended in Part II and brought to life in Part VI.
If you are already familiar with Nova, you can jump to specific parts. The Cheat Sheet appendix is a useful standalone reference.
Let's get started by creating your first Nova application.
Create a New Application
The fastest way to get started with Nova is the rebar3_nova plugin. It provides project templates that scaffold a complete, runnable Nova application.
Installing the rebar3 plugin
Run the installer script to set up rebar3_nova:
sh -c "$(curl -fsSL https://raw.githubusercontent.com/novaframework/rebar3_nova/master/install.sh)"
This checks for rebar3 (installing it if needed) and adds the rebar3_nova plugin to your global rebar3 config.
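Concretely, the plugin entry lands in your user-level rebar3 config. This is a sketch of the standard rebar3 global-plugin mechanism — the path and entry shown are assumptions about what the installer writes, and any plugins you already have will also be listed:

```erlang
%% ~/.config/rebar3/rebar.config (global rebar3 config)
{plugins, [rebar3_nova]}.
```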
Creating a new project
Rebar3's new command generates project scaffolding. With the Nova plugin installed, you have a nova template:
rebar3 new nova blog
This creates a directory with everything needed for a running Nova application:
===> Writing blog/config/dev_sys.config.src
===> Writing blog/config/prod_sys.config.src
===> Writing blog/src/blog.app.src
===> Writing blog/src/blog_app.erl
===> Writing blog/src/blog_sup.erl
===> Writing blog/src/blog_router.erl
===> Writing blog/src/controllers/blog_main_controller.erl
===> Writing blog/rebar.config
===> Writing blog/config/vm.args.src
===> Writing blog/priv/assets/favicon.ico
===> Writing blog/src/views/blog_main.dtl
===> Writing blog/.tool-versions
===> Writing blog/.gitignore
The generated .tool-versions file works with mise and asdf. Run mise install or asdf install to get the exact Erlang and rebar3 versions for this project.
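For reference, a .tool-versions file is a plain list of tool/version pairs. The version numbers below are illustrative, not necessarily what the template currently emits:

```
erlang 28.0
rebar3 3.24.0
```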
Project structure
Here is what was generated:
- src/ — Your source code
  - src/controllers/ — Controller modules that handle request logic
  - src/views/ — ErlyDTL (Django-style) templates for HTML rendering
  - blog_router.erl — Route definitions
  - blog_app.erl — OTP application callback
  - blog_sup.erl — Supervisor
- config/ — Configuration files
  - dev_sys.config.src — Development config (used by rebar3 shell)
  - prod_sys.config.src — Production config (used in releases)
  - vm.args.src — Erlang VM arguments
- rebar.config — Build configuration, dependencies, and release settings
Running the application
Start the development server:
cd blog
rebar3 nova serve
This compiles your code, starts an Erlang shell, and watches for file changes — when you save a file, it is automatically recompiled and reloaded. No restart needed.
rebar3 nova serve requires enotify. On Linux, install inotify-tools from your package manager. On macOS, fsevent is used automatically.
If enotify is not available, use rebar3 shell instead. It works the same but without automatic recompilation.
Once the node is up, open your browser to http://localhost:8080. You should see the Nova welcome page.
You can also verify the application is running with curl:
curl -v localhost:8080/heartbeat
A 200 OK response means everything is working.
Listing routes
To see all registered routes:
rebar3 nova routes
Host: '_'
├─ /assets
└─ _ /[...] (blog, cowboy_static:init/1)
└─ GET / (blog, blog_main_controller:index/1)
This shows the static asset handler and the index route that renders the welcome page.
Now that you have a running application, let's look at how routing works in Nova.
Routing
In the previous chapter we created a Nova application and saw it running. Now let's understand how requests are matched to controller functions.
The router module
When Nova generated our project, it created blog_router.erl:
-module(blog_router).
-behaviour(nova_router).
-export([
routes/1
]).
routes(_Environment) ->
[#{prefix => "",
security => false,
routes => [
{"/", fun blog_main_controller:index/1, #{methods => [get]}},
{"/heartbeat", fun(_) -> {status, 200} end, #{methods => [get]}}
]
}].
The routes/1 function returns a list of route groups. Each group is a map with these keys:
| Key | Description |
|---|---|
| prefix | Path prefix prepended to all routes in this group |
| security | false or a fun reference to a security handler |
| routes | List of route tuples |
| plugins | (optional) Plugin list — overrides global plugins for this group |
Each route tuple has the form {Path, Handler, Options}:
- Path — the URL pattern (e.g. "/users/:id")
- Handler — a fun reference like fun Module:Function/1
- Options — a map, typically #{methods => [get, post, ...]}
Adding a route
Let's add a login page route:
routes(_Environment) ->
[#{prefix => "",
security => false,
routes => [
{"/", fun blog_main_controller:index/1, #{methods => [get]}},
{"/heartbeat", fun(_) -> {status, 200} end, #{methods => [get]}},
{"/login", fun blog_main_controller:login/1, #{methods => [get]}}
]
}].
We will implement the login/1 function in the Sessions chapter.
Route parameters
Path segments starting with : are captured as bindings:
{"/users/:id", fun my_controller:show/1, #{methods => [get]}}
In the controller, access bindings from the request map:
show(#{bindings := #{<<"id">> := Id}}) ->
{json, #{id => binary_to_integer(Id)}}.
Bindings are always binary strings — convert them as needed.
HTTP methods
The methods option takes a list of atoms: get, post, put, delete, patch, options, head, connect, trace.
The default is ['_'], which matches all HTTP methods. Use this for routes where you handle the method inside the controller:
{"/login", fun blog_main_controller:login/1, #{methods => ['_']}}
A route can handle multiple specific methods:
{"/login", fun blog_main_controller:login/1, #{methods => [get, post]}}
login(#{method := <<"GET">>}) ->
{ok, [{message, <<"Please log in">>}]};
login(#{method := <<"POST">>}) ->
%% process login form
{redirect, "/"}.
Note that the method field in the request map is an uppercase binary (<<"GET">>, <<"POST">>, etc.) even though you define routes with lowercase atoms.
Controller return values
Every controller function receives a request map and returns a tuple. The first element of the tuple tells Nova which handler to use. Here are the return types you'll use most often:
| Return | Description |
|---|---|
| {json, Data} | Encode Data as JSON. Status is 201 for POST, 200 otherwise. |
| {ok, Variables} | Render the default template with Variables (list or map). |
| {view, Variables} | Same as {ok, Variables} — an alias. |
| {status, Code} | Return an HTTP status code with no body. |
| {redirect, Path} | Send a 302 redirect to Path. |
Quick examples:
%% Return JSON
index(_Req) ->
{json, #{message => <<"hello">>}}.
%% Render a template
index(_Req) ->
{ok, [{title, <<"My Blog">>}]}.
%% Return 204 No Content
delete(_Req) ->
{status, 204}.
%% Redirect to another page
logout(_Req) ->
{redirect, "/login"}.
Each of these has extended forms for setting custom status codes and headers (e.g. {json, StatusCode, Headers, Data}). We'll use those in the JSON API and Sessions chapters.
Prefixes for grouping
The prefix key groups related routes under a common path. For example, to build an API:
#{prefix => "/api/v1",
security => false,
routes => [
{"/users", fun blog_api_controller:list_users/1, #{methods => [get]}},
{"/users/:id", fun blog_api_controller:get_user/1, #{methods => [get]}}
]
}
These routes become /api/v1/users and /api/v1/users/:id.
Security
So far every route group has security => false, meaning no authentication check. When security is set to a fun reference, Nova calls that function before the controller for every route in the group.
The security function receives the request map and must return one of:
| Return | Effect |
|---|---|
| true | Allow — request proceeds to the controller. |
| {true, AuthData} | Allow — AuthData is added to the request map as auth_data. |
| {redirect, Path} | Deny — redirect the user (e.g. to a login page). |
| {false, Headers} | Deny — return 401 with the given headers. |
A basic example:
#{prefix => "/admin",
security => fun blog_auth:check/1,
routes => [
{"/dashboard", fun blog_admin_controller:index/1, #{methods => [get]}}
]
}
-module(blog_auth).
-export([check/1]).
check(#{auth_data := _User}) ->
true;
check(_Req) ->
{redirect, "/login"}.
When {true, AuthData} is returned, the controller can access it:
index(#{auth_data := User}) ->
{ok, [{username, maps:get(name, User)}]}.
We'll build a full authentication flow in Authentication.
Error routes
Nova provides default pages for error status codes (404, 500, etc.). You can override them by adding error routes — tuples where the path is an integer status code:
routes(_Environment) ->
[#{prefix => "",
security => false,
routes => [
{"/", fun blog_main_controller:index/1, #{methods => [get]}},
{404, fun blog_error_controller:not_found/1, #{}},
{500, fun blog_error_controller:server_error/1, #{}}
]
}].
The error controller works like any other controller:
not_found(_Req) ->
{status, 404, #{}, #{error => <<"not found">>}}.
See the Error Handling chapter for rendering custom error templates.
Static file serving
Nova can serve static files directly from the router. Instead of a handler fun, the route's second element is a local path string: {RemotePath, LocalPath, Options}.
Serve a directory — the path must end with /[...] to match all files underneath:
{"/assets/[...]", "priv/static", #{}}
This maps /assets/css/style.css to priv/static/css/style.css.
Serve a single file:
{"/favicon.ico", "priv/static/favicon.ico", #{}}
Nova resolves LocalPath relative to your application's priv directory. The third element is an options map (typically empty).
Inline handlers
For simple responses you can use an anonymous function directly in the route:
{"/heartbeat", fun(_) -> {status, 200} end, #{methods => [get]}}
This is useful for health checks and other trivial endpoints.
Environment-based routing
The routes/1 function receives the environment atom configured in sys.config (dev or prod). You can use pattern matching to add development-only routes:
routes(prod) ->
prod_routes();
routes(dev) ->
prod_routes() ++ dev_routes().
prod_routes() ->
[#{prefix => "",
security => false,
routes => [
{"/", fun blog_main_controller:index/1, #{methods => [get]}},
{"/heartbeat", fun(_) -> {status, 200} end, #{methods => [get]}}
]
}].
dev_routes() ->
[#{prefix => "",
security => false,
routes => [
{"/dev-tools", fun blog_dev_controller:index/1, #{methods => [get]}}
]
}].
rebar3 nova routes shows production routes only. Development-only routes won't appear in the output.
Next, let's look at controllers — the functions that handle requests and return responses.
Controllers & Responses
Every Nova controller function receives a request map and returns a tuple that tells Nova how to respond. This chapter covers all the response types and patterns you'll use.
The request map
When Nova calls your controller function, it passes a map containing everything about the request:
show(#{method := <<"GET">>,
bindings := #{<<"id">> := Id},
auth_data := #{username := User}} = Req) ->
...
Key fields available in the request map:
| Key | Description |
|---|---|
| method | HTTP method as uppercase binary (<<"GET">>, <<"POST">>, etc.) |
| bindings | Path parameters (e.g. #{<<"id">> => <<"42">>}) |
| auth_data | Data from the security function (if any) |
| json | Decoded JSON body (if nova_request_plugin is configured) |
| params | Decoded form params (if nova_request_plugin is configured) |
| parsed_qs | Parsed query string (if nova_request_plugin is configured) |
| csrf_token | CSRF token (if nova_csrf_plugin is enabled) |
| correlation_id | Request correlation ID (if nova_correlation_plugin is enabled) |
You can also access raw Cowboy request data using cowboy_req functions on the request map.
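For example, reading a request header works by passing the request map straight to cowboy_req — a sketch assuming Nova's request map embeds the Cowboy request (the header name and default value here are illustrative):

```erlang
show(Req) ->
    %% cowboy_req functions take the request map directly;
    %% header names are lowercase binaries
    Agent = cowboy_req:header(<<"user-agent">>, Req, <<"unknown">>),
    {json, #{user_agent => Agent}}.
```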
Response tuples
JSON responses
%% 200 OK with JSON body (201 for POST requests)
{json, #{message => <<"hello">>}}
%% Custom status, headers, and body
{json, 201, #{<<"location">> => <<"/api/posts/1">>}, #{id => 1, title => <<"New Post">>}}
Template rendering
%% Render the default template (derived from module name)
{ok, [{title, <<"My Blog">>}]}
%% Render a specific template
{ok, [{error, <<"Invalid">>}], #{view => login}}
%% Render with custom status
{ok, [{error, <<"Not Found">>}], #{view => error_page, status_code => 404}}
Status codes
%% Bare status code
{status, 204}
%% Status with headers and body
{status, 404, #{}, #{error => <<"not found">>}}
Redirects
%% 302 redirect
{redirect, "/login"}
%% Redirect with modified request (e.g. after deleting a session cookie)
{redirect, "/login", Req1}
File responses
{sendfile, 200, #{}, {0, FileSize, "/path/to/file.pdf"}, <<"application/pdf">>}
Complete reference
| Return | Description |
|---|---|
| {ok, Variables} | Render the default template with variables |
| {ok, Variables, #{view => Name}} | Render a specific template |
| {ok, Variables, #{view => Name, status_code => Code}} | Render template with custom status |
| {json, Data} | JSON response (status 200, or 201 for POST) |
| {json, StatusCode, Headers, Body} | JSON response with custom status and headers |
| {status, StatusCode} | Bare status code response |
| {status, StatusCode, Headers, Body} | Status with headers and body |
| {redirect, Path} | HTTP 302 redirect |
| {redirect, Path, Req} | Redirect with modified request |
| {sendfile, StatusCode, Headers, {Offset, Length, Path}, MimeType} | Send a file |
Handling multiple HTTP methods
A single controller function can handle different methods using pattern matching:
login(#{method := <<"GET">>}) ->
{ok, [], #{view => login}};
login(#{method := <<"POST">>, params := Params} = Req) ->
%% process login form
{redirect, "/"}.
Or use separate functions for clarity:
%% In the router
{"/login", fun blog_main_controller:login/1, #{methods => [get]}},
{"/login", fun blog_main_controller:login_post/1, #{methods => [post]}}
Custom response handlers
Nova uses a handler registry that maps return tuple atoms to handler functions:
nova_handlers:register_handler(xml, fun my_xml_handler:handle/3).
Then return from controllers:
my_action(_Req) ->
{xml, <<"<user><name>Alice</name></user>">>}.
The handler function receives (ReturnTuple, CallbackFun, Req) and must return {ok, Req2}.
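A minimal sketch of the XML handler registered above might look like this, assuming the handler replies directly through cowboy_req:reply/4 and that a fixed content type is acceptable:

```erlang
-module(my_xml_handler).
-export([handle/3]).

%% Called by Nova with the controller's return tuple, the controller
%% callback, and the request map. Must return {ok, Req2}.
handle({xml, Body}, _CallbackFun, Req) ->
    Req2 = cowboy_req:reply(200,
                            #{<<"content-type">> => <<"application/xml">>},
                            Body, Req),
    {ok, Req2}.
```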
Fallback controllers
If a controller returns an unrecognized value, Nova can delegate to a fallback controller:
-module(blog_posts_controller).
-fallback_controller(blog_error_controller).
index(_Req) ->
case do_something() of
{ok, Data} -> {json, Data};
unexpected_value -> unexpected_value %% Goes to fallback
end.
The fallback module needs resolve/2:
resolve(Req, InvalidReturn) ->
logger:warning("Unexpected controller return: ~p", [InvalidReturn]),
{status, 500, #{}, #{error => <<"internal server error">>}}.
Next, let's look at plugins and middleware — the layer that processes requests before and after your controllers.
Plugins
Plugins are Nova's middleware system. They run code before and after your controller handles a request — useful for decoding request bodies, adding headers, logging, rate limiting, and more.
How the plugin pipeline works
Every HTTP request flows through a pipeline:
- Pre-request plugins run in list definition order (first in the list runs first)
- The controller handles the request
- Post-request plugins run in list definition order
A plugin module implements the nova_plugin behaviour and exports pre_request/4, post_request/4, and plugin_info/0.
Each callback receives (Req, Env, Options, State) and returns {ok, Req, State} to pass control to the next plugin. Plugins can also enrich the request map — adding keys like json, params, or correlation_id — so that later plugins and controllers can use them.
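As a sketch, a minimal timing plugin implementing these callbacks could look like the following. The module name is hypothetical, and the exact return shape of plugin_info/0 is an assumption — check the nova_plugin behaviour for the precise type:

```erlang
-module(blog_timing_plugin).
-behaviour(nova_plugin).
-export([pre_request/4, post_request/4, plugin_info/0]).

%% Stash a start time in the request map for the post phase to read
pre_request(Req, _Env, _Options, State) ->
    {ok, Req#{req_start => erlang:monotonic_time(millisecond)}, State}.

%% Log how long the request took end to end
post_request(#{req_start := T0} = Req, _Env, _Options, State) ->
    logger:info("request handled in ~p ms",
                [erlang:monotonic_time(millisecond) - T0]),
    {ok, Req, State}.

plugin_info() ->
    {<<"blog_timing_plugin">>, <<"0.1.0">>, <<"Blog authors">>,
     <<"Logs request duration">>, []}.
```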
Configuring plugins
Plugins are configured in sys.config under the nova application key:
{nova, [
{plugins, [
{pre_request, nova_request_plugin, #{decode_json_body => true}}
]}
]}
Each plugin entry is a tuple: {Phase, Module, Options} where Phase is pre_request or post_request.
nova_request_plugin
This built-in plugin handles request body decoding and query string parsing. It supports three options:
| Option | Type | Request map key | Description |
|---|---|---|---|
| decode_json_body | true | json | Decodes JSON request bodies |
| read_urlencoded_body | true | params | Decodes URL-encoded form bodies |
| parse_qs | true or list | parsed_qs | Parses the URL query string |
decode_json_body
When enabled, requests with Content-Type: application/json have their body decoded and placed in the json key:
{pre_request, nova_request_plugin, #{decode_json_body => true}}
create(#{json := #{<<"title">> := Title}} = _Req) ->
%% Use the decoded JSON body
{json, #{created => Title}}.
If the content type is application/json but the body is empty or malformed, the plugin returns a 400 response and the controller is never called.
decode_json_body is skipped for GET and DELETE requests since they typically have no body.
read_urlencoded_body
When enabled, requests with Content-Type: application/x-www-form-urlencoded have their body parsed into a map under the params key:
{pre_request, nova_request_plugin, #{read_urlencoded_body => true}}
login(#{params := #{<<"username">> := User, <<"password">> := Pass}} = _Req) ->
%% Use the decoded form params
...
parse_qs
Parses the URL query string (e.g. ?page=2&limit=10). The value controls the format:
- true — returns a map in the parsed_qs key
- list — returns a proplist of {Key, Value} tuples
{pre_request, nova_request_plugin, #{parse_qs => true}}
index(#{parsed_qs := #{<<"page">> := Page}} = _Req) ->
%% Use query params
...
Combining options
You can enable all three at once:
{pre_request, nova_request_plugin, #{
decode_json_body => true,
read_urlencoded_body => true,
parse_qs => true
}}
nova_correlation_plugin
This plugin assigns a unique correlation ID to every request — essential for tracing requests across services in your logs.
{pre_request, nova_correlation_plugin, #{
request_correlation_header => <<"x-correlation-id">>,
logger_metadata_key => correlation_id
}}
| Option | Default | Description |
|---|---|---|
| request_correlation_header | (none — always generates) | Header to read an existing correlation ID from. Cowboy lowercases all header names. |
| logger_metadata_key | <<"correlation-id">> | Key set in OTP logger process metadata |
The plugin:
- Reads the correlation ID from the configured header, or generates a v4 UUID if missing
- Sets the ID in OTP logger metadata (so all log messages for this request include it)
- Adds an x-correlation-id response header
- Stores the ID in the request map as correlation_id
Access it in your controller:
show(#{correlation_id := CorrId} = _Req) ->
logger:info("Handling request ~s", [CorrId]),
...
nova_csrf_plugin
This plugin provides CSRF protection using the synchronizer token pattern. It generates a random token per session and validates it on state-changing requests.
{pre_request, nova_csrf_plugin, #{}}
| Option | Default | Description |
|---|---|---|
| field_name | <<"_csrf_token">> | Form field name to check |
| header_name | <<"x-csrf-token">> | Header name to check (for AJAX) |
| session_key | <<"_csrf_token">> | Key used to store the token in the session |
| excluded_paths | [] | List of path prefixes to skip protection for |
How it works
- Safe methods (GET, HEAD, OPTIONS): The plugin ensures a CSRF token exists in the session and injects it into the request map as csrf_token.
- Unsafe methods (POST, PUT, PATCH, DELETE): The plugin reads the submitted token from the form field or header and validates it against the session token. If the token is missing or doesn't match, the request is rejected with a 403 response.
Template integration
In your ErlyDTL templates, include the token in forms as a hidden field:
<form method="post" action="/login">
<input type="hidden" name="_csrf_token" value="{{ csrf_token }}" />
<!-- rest of form -->
<button type="submit">Log in</button>
</form>
The csrf_token variable is available because the plugin adds it to the request map, and Nova passes request map values to templates as template variables.
For AJAX requests, send the token in a header instead:
fetch('/api/resource', {
method: 'POST',
headers: {
'X-CSRF-Token': csrfToken,
'Content-Type': 'application/json'
},
body: JSON.stringify(data)
});
Excluding API paths
If you have API routes that use a different authentication scheme (e.g. bearer tokens), exclude them from CSRF checks:
{pre_request, nova_csrf_plugin, #{
excluded_paths => [<<"/api/">>]
}}
nova_request_plugin must run before nova_csrf_plugin so that form params are parsed into the params key. Plugin order matters — list nova_request_plugin first.
Setting up for our login form
In the next chapter we will build a login form that sends URL-encoded data. To have Nova decode this automatically, update the plugin config in dev_sys.config.src:
{plugins, [
{pre_request, nova_request_plugin, #{read_urlencoded_body => true}}
]}
With this setting, form POST data is decoded and placed in the params key of the request map, ready for your controller to use.
You can enable multiple decoders at once. We will add decode_json_body => true later when we build our JSON API.
Per-route plugins
So far we've configured plugins globally in sys.config. You can also set plugins per route group by adding a plugins key to the group map in your router:
routes(_Environment) ->
[
#{prefix => "/api",
plugins => [
{pre_request, nova_request_plugin, #{decode_json_body => true}}
],
routes => [
{"/posts", fun blog_posts_controller:list/1, #{methods => [get]}}
]
},
#{prefix => "",
plugins => [
{pre_request, nova_request_plugin, #{read_urlencoded_body => true}}
],
routes => [
{"/login", fun blog_main_controller:login/1, #{methods => [get, post]}}
]
}
].
When plugins is set on a route group, it overrides the global plugin configuration for those routes. This lets you use JSON decoding for API routes and form decoding for HTML routes without conflict.
See Custom Plugins and CORS for more examples, including per-route CORS.
Built-in plugins summary
| Plugin | Phase | Purpose | Key request map additions |
|---|---|---|---|
| nova_request_plugin | pre_request | Decodes JSON/form bodies, parses query strings | json, params, parsed_qs |
| nova_correlation_plugin | pre_request | Assigns correlation IDs for request tracing | correlation_id |
| nova_csrf_plugin | pre_request | CSRF protection via synchronizer token | csrf_token |
| nova_cors_plugin | pre_request | Adds CORS headers, handles preflight requests | (headers only) |
A realistic configuration using multiple plugins:
{nova, [
{plugins, [
{pre_request, nova_correlation_plugin, #{
request_correlation_header => <<"x-correlation-id">>,
logger_metadata_key => correlation_id
}},
{pre_request, nova_request_plugin, #{
decode_json_body => true,
read_urlencoded_body => true,
parse_qs => true
}},
{pre_request, nova_csrf_plugin, #{
excluded_paths => [<<"/api/">>]
}}
]}
]}
Ordering matters: nova_correlation_plugin runs first so all subsequent log messages include the correlation ID. nova_request_plugin runs before nova_csrf_plugin so form params are available for token validation.
With plugins configured to decode form data, let's set up our database layer.
Database Setup
Nova does not include a built-in database layer — by design, you choose what fits your project. We will use Kura, an Ecto-inspired database abstraction for Erlang that targets PostgreSQL. Kura gives you schemas, changesets, a query builder, and migrations — no raw SQL required.
Adding dependencies
Add kura and the rebar3_kura plugin to rebar.config:
{deps, [
nova,
{flatlog, "0.1.2"},
{kura, "~> 1.0"}
]}.
{plugins, [
rebar3_nova,
{rebar3_kura, "~> 0.5"}
]}.
Also add kura to your application dependencies in src/blog.app.src:
{applications,
[kernel,
stdlib,
nova,
kura
]},
Setting up the repository
The rebar3_kura plugin provides a setup command that generates a repository module:
rebar3 kura setup --name blog_repo
This creates src/blog_repo.erl — a module that wraps all database operations:
-module(blog_repo).
-behaviour(kura_repo).
-export([config/0, start/0, all/1, get/2, get_by/2, one/1,
insert/1, insert/2, update/1, delete/1,
update_all/2, delete_all/1, insert_all/2,
preload/3, transaction/1, multi/1, query/2]).
config() ->
Database = application:get_env(blog, database, <<"blog_dev">>),
#{pool => ?MODULE,
database => Database,
hostname => <<"localhost">>,
port => 5432,
username => <<"postgres">>,
password => <<"postgres">>,
pool_size => 10}.
start() -> kura_repo_worker:start(?MODULE).
all(Q) -> kura_repo_worker:all(?MODULE, Q).
get(Schema, Id) -> kura_repo_worker:get(?MODULE, Schema, Id).
get_by(Schema, Clauses) -> kura_repo_worker:get_by(?MODULE, Schema, Clauses).
one(Q) -> kura_repo_worker:one(?MODULE, Q).
insert(CS) -> kura_repo_worker:insert(?MODULE, CS).
insert(CS, Opts) -> kura_repo_worker:insert(?MODULE, CS, Opts).
update(CS) -> kura_repo_worker:update(?MODULE, CS).
delete(CS) -> kura_repo_worker:delete(?MODULE, CS).
update_all(Q, Updates) -> kura_repo_worker:update_all(?MODULE, Q, Updates).
delete_all(Q) -> kura_repo_worker:delete_all(?MODULE, Q).
insert_all(Schema, Entries) -> kura_repo_worker:insert_all(?MODULE, Schema, Entries).
preload(Schema, Records, Assocs) -> kura_repo_worker:preload(?MODULE, Schema, Records, Assocs).
transaction(Fun) -> kura_repo_worker:transaction(?MODULE, Fun).
multi(Multi) -> kura_repo_worker:multi(?MODULE, Multi).
query(SQL, Params) -> kura_repo_worker:query(?MODULE, SQL, Params).
The kura_repo behaviour only requires one callback — config/0 — which tells Kura how to connect to PostgreSQL. Every other function is a convenience delegation to kura_repo_worker.
The setup command also creates src/migrations/ for migration files.
PostgreSQL with Docker Compose
Create docker-compose.yml in your project root:
services:
db:
image: postgres:16
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
POSTGRES_DB: blog_dev
ports:
- "5432:5432"
volumes:
- pgdata:/var/lib/postgresql/data
volumes:
pgdata:
Start it:
docker compose up -d
Configuring the repo
Notice that config/0 uses application:get_env(blog, database, <<"blog_dev">>) for the database name. This means you can override it per environment through sys.config without touching the module.
The blog_dev default in config/0 works without any sys.config entry. If you ever need a separate database for production or CI, override it with an application environment variable:
{blog, [
{database, <<"blog_prod">>}
]}
Starting the repo in the supervisor
The repo needs to be started when your application boots. Add it to your supervisor in src/blog_sup.erl:
-module(blog_sup).
-behaviour(supervisor).
-export([start_link/0]).
-export([init/1]).
start_link() ->
supervisor:start_link({local, ?MODULE}, ?MODULE, []).
init([]) ->
blog_repo:start(),
kura_migrator:migrate(blog_repo),
{ok, {#{strategy => one_for_one, intensity => 5, period => 10}, []}}.
blog_repo:start() creates the pgo connection pool using the config from config/0. kura_migrator:migrate/1 then runs any pending migrations — it tracks which versions have been applied in a schema_migrations table.
Auto-migrating on startup is convenient during development. For production, run migrations as a separate step before deploying (e.g. a release command or CI job) so that failures don't prevent the application from starting.
Adding the rebar3_kura compile hook
To get automatic migration generation (covered in the next chapter), add a provider hook to rebar.config:
{provider_hooks, [
{pre, [{compile, {kura, compile}}]}
]}.
This runs rebar3 kura compile before every rebar3 compile, scanning your schemas and generating migrations for any changes.
Verifying the connection
Start the development server:
rebar3 nova serve
You should see the application start without errors. If the database is unreachable, you will see a connection error in the logs. Verify from the shell:
1> blog_repo:query("SELECT 1 AS result", []).
{ok, [#{result => 1}]}
query/2 returns {ok, Rows} where each row is a map with atom keys — the same format you will see from all Kura query functions.
Now let's define our first schemas and watch Kura generate migrations automatically in Schemas and Migrations.
Schemas and Migrations
In the previous chapter we set up the database connection and repo. Now let's define schemas — Erlang modules that describe your data — and watch Kura generate migrations automatically.
Defining the user schema
Create src/schemas/user.erl:
-module(user).
-behaviour(kura_schema).
-include_lib("kura/include/kura.hrl").
-export([table/0, fields/0]).
table() -> <<"users">>.
fields() ->
[
#kura_field{name = id, type = id, primary_key = true, nullable = false},
#kura_field{name = username, type = string, nullable = false},
#kura_field{name = email, type = string, nullable = false},
#kura_field{name = password_hash, type = string, nullable = false},
#kura_field{name = inserted_at, type = utc_datetime},
#kura_field{name = updated_at, type = utc_datetime}
].
A schema module implements the kura_schema behaviour and exports two required callbacks:
- table/0 — the PostgreSQL table name
- fields/0 — a list of #kura_field{} records describing each column. Mark one field with primary_key = true.
Each field has a name (atom), type (one of Kura's types), and optional properties like nullable and default.
Auto-timestamps: When a schema includes inserted_at and updated_at fields, Kura automatically sets them on insert and update operations — no need to manage them in your changesets.
Kura field types
| Type | PostgreSQL | Erlang |
|---|---|---|
id | BIGSERIAL | integer |
integer | INTEGER | integer |
float | DOUBLE PRECISION | float |
string | VARCHAR(255) | binary |
text | TEXT | binary |
boolean | BOOLEAN | boolean |
date | DATE | {Y, M, D} |
utc_datetime | TIMESTAMPTZ | {{Y,M,D},{H,Mi,S}} |
uuid | UUID | binary |
jsonb | JSONB | map/list |
{enum, [atoms]} | VARCHAR(255) | atom |
{array, Type} | Type[] | list |
{embed, ...} | — | map/list |
Auto-generating migrations
With the rebar3_kura compile hook we added in the previous chapter, compile the project:
rebar3 compile
===> kura: generated src/migrations/m20260223120000_update_schema.erl
===> kura: migration generated
===> Compiling blog
Kura replayed existing migration files to determine the current database state (no migrations yet = empty database), then compared that against your schema definitions and generated a migration file.
Kura generates a single combined migration covering all schema changes detected since the last compile. If you define both user and post schemas before the first compile, both tables will appear in the same migration file. The migration module name uses update_schema rather than a table-specific name.
Walking through the migration
Open the generated file in src/migrations/:
-module(m20260223120000_update_schema).
-behaviour(kura_migration).
-include_lib("kura/include/kura.hrl").
-export([up/0, down/0]).
up() ->
[{create_table, <<"users">>, [
#kura_column{name = id, type = id, primary_key = true, nullable = false},
#kura_column{name = username, type = string, nullable = false},
#kura_column{name = email, type = string, nullable = false},
#kura_column{name = password_hash, type = string, nullable = false},
#kura_column{name = inserted_at, type = utc_datetime},
#kura_column{name = updated_at, type = utc_datetime}
]}].
down() ->
[{drop_table, <<"users">>}].
The migration has two functions:
- `up/0` — returns operations to apply (create the table)
- `down/0` — returns operations to reverse (drop the table)
Migration files are named with a timestamp prefix so they run in order.
Defining the post schema
Now let's add a post schema with an enum type for status. Create src/schemas/post.erl:
-module(post).
-behaviour(kura_schema).
-include_lib("kura/include/kura.hrl").
-export([table/0, fields/0]).
table() -> <<"posts">>.
fields() ->
[
#kura_field{name = id, type = id, primary_key = true, nullable = false},
#kura_field{name = title, type = string, nullable = false},
#kura_field{name = body, type = text},
#kura_field{name = status, type = {enum, [draft, published, archived]}, default = draft},
#kura_field{name = user_id, type = integer},
#kura_field{name = inserted_at, type = utc_datetime},
#kura_field{name = updated_at, type = utc_datetime}
].
The status field uses an enum type — Kura stores it as VARCHAR(255) in PostgreSQL but casts between atoms and binaries automatically. When you query a post, status comes back as an atom (draft, published, or archived).
Compile again:
rebar3 compile
===> kura: generated src/migrations/m20260223120100_update_schema.erl
===> kura: migration generated
===> Compiling blog
A second migration appears for the posts table.
Running migrations
In the previous chapter we added both blog_repo:start() and kura_migrator:migrate(blog_repo) to the supervisor. The repo start creates the connection pool; migrate/1 checks the schema_migrations table and runs any pending migrations in order.
Start the application:
rebar3 nova serve
Check the logs — you should see the migrations being applied:
Kura: up migration 20260223120000 (m20260223120000_update_schema)
Kura: up migration 20260223120100 (m20260223120100_update_schema)
The schema_migrations table
Kura creates a schema_migrations table to track which migrations have been applied:
blog_dev=# SELECT * FROM schema_migrations;
version | inserted_at
--------------------+-------------------
20260223120000 | 2026-02-23 12:00:00
20260223120100 | 2026-02-23 12:01:00
Each row records a migration version (the timestamp from the filename). Kura only runs migrations that are not in this table.
Managing migrations
During development you'll sometimes need to undo a migration or check what's been applied:
%% Roll back the last migration
kura_migrator:rollback(blog_repo).
%% Roll back the last 3 migrations
kura_migrator:rollback(blog_repo, 3).
%% Show status of all migrations (up or pending)
kura_migrator:status(blog_repo).
status/1 returns a list of {Version, Module, up | pending} tuples — handy for verifying the state of your database during development.
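For example, with the first migration applied and the second still pending, a status call might look like this (the output shape is inferred from the tuple format above):

```erlang
%% Shell sketch — illustrative output, not captured from a real run.
1> kura_migrator:status(blog_repo).
[{20260223120000, m20260223120000_update_schema, up},
 {20260223120100, m20260223120100_update_schema, pending}]
```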
Modifying schemas
When you change a schema — add a field, remove one, or change a type — Kura detects the difference on the next compile and generates an alter_table migration.
For example, add a bio field to the user schema:
fields() ->
[
#kura_field{name = id, type = id, primary_key = true, nullable = false},
#kura_field{name = username, type = string, nullable = false},
#kura_field{name = email, type = string, nullable = false},
#kura_field{name = password_hash, type = string, nullable = false},
#kura_field{name = bio, type = text},
#kura_field{name = inserted_at, type = utc_datetime},
#kura_field{name = updated_at, type = utc_datetime}
].
Compile:
rebar3 compile
===> kura: generated src/migrations/m20260223120200_alter_users.erl
===> kura: migration generated
===> Compiling blog
The generated migration adds the column:
up() ->
[{alter_table, <<"users">>, [
{add_column, #kura_column{name = bio, type = text}}
]}].
down() ->
[{alter_table, <<"users">>, [
{drop_column, bio}
]}].
Define your schema, compile, migration appears. No SQL files to maintain.
Now that we have tables, let's learn about changesets and validation — how Kura validates and tracks data changes before they hit the database.
Changesets and Validation
In the previous chapter we defined schemas and generated migrations. Before we can insert or update data, we need to validate it. Kura uses changesets — a data structure that tracks what fields changed, validates them, and accumulates errors. No exceptions, no side effects — just data in, data out.
The changeset concept
A changeset takes three inputs:
- Data — the existing record (or `#{}` for a new one)
- Params — the incoming data (typically from a request body)
- Allowed fields — which params are permitted (everything else is ignored)
It produces a #kura_changeset{} record with:
- `changes` — a map of field → new value
- `errors` — a list of `{field, message}` tuples
- `valid` — `true` or `false`
Adding changeset functions to schemas
Let's add a changeset/2 function to the post schema. Update src/schemas/post.erl:
-module(post).
-behaviour(kura_schema).
-include_lib("kura/include/kura.hrl").
-export([table/0, fields/0, changeset/2]).
table() -> <<"posts">>.
fields() ->
[
#kura_field{name = id, type = id, primary_key = true, nullable = false},
#kura_field{name = title, type = string, nullable = false},
#kura_field{name = body, type = text},
#kura_field{name = status, type = {enum, [draft, published, archived]}, default = draft},
#kura_field{name = user_id, type = integer},
#kura_field{name = inserted_at, type = utc_datetime},
#kura_field{name = updated_at, type = utc_datetime}
].
changeset(Data, Params) ->
CS = kura_changeset:cast(post, Data, Params, [title, body, status, user_id]),
CS1 = kura_changeset:validate_required(CS, [title, body]),
CS2 = kura_changeset:validate_length(CS1, title, [{min, 3}, {max, 200}]),
kura_changeset:validate_inclusion(CS2, status, [draft, published, archived]).
Here is what each step does:
- `cast/4` — takes the schema module, existing data, incoming params, and a list of allowed fields. It converts param values to the correct Erlang types (binaries to atoms for enums, binaries to integers for IDs, etc.) and puts them in `changes`.
- `validate_required/2` — ensures the listed fields are present and non-empty.
- `validate_length/3` — checks string length constraints.
- `validate_inclusion/3` — ensures the value is one of the allowed options.
User changeset with format and unique constraints
Update src/schemas/user.erl:
-module(user).
-behaviour(kura_schema).
-include_lib("kura/include/kura.hrl").
-export([table/0, fields/0, changeset/2]).
table() -> <<"users">>.
fields() ->
[
#kura_field{name = id, type = id, primary_key = true, nullable = false},
#kura_field{name = username, type = string, nullable = false},
#kura_field{name = email, type = string, nullable = false},
#kura_field{name = password_hash, type = string, nullable = false},
#kura_field{name = inserted_at, type = utc_datetime},
#kura_field{name = updated_at, type = utc_datetime}
].
changeset(Data, Params) ->
CS = kura_changeset:cast(user, Data, Params, [username, email, password_hash]),
CS1 = kura_changeset:validate_required(CS, [username, email, password_hash]),
CS2 = kura_changeset:validate_format(CS1, email, <<"^[^@]+@[^@]+\\.[^@]+$">>),
CS3 = kura_changeset:validate_length(CS2, username, [{min, 2}, {max, 50}]),
CS4 = kura_changeset:unique_constraint(CS3, email),
kura_changeset:unique_constraint(CS4, username).
New validations:
- `validate_format/3` — checks the value against a regex. The email regex ensures it has an `@` and a domain.
- `unique_constraint/2` — declares that this field has a unique index in the database. If an insert/update violates the constraint, Kura maps the PostgreSQL error to a friendly changeset error instead of crashing.
unique_constraint does not check uniqueness in Erlang — it tells Kura how to handle the PostgreSQL unique violation error. You still need a unique index on the column, which you would add to a migration.
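A hand-written migration adding those indexes could look like the sketch below. The `create_index`/`drop_index` operation names and option format are assumptions, not confirmed by the Kura material shown here; check the migration reference for the exact forms.

```erlang
%% Hypothetical migration — operation names are assumed.
-module(m20260223120300_add_user_unique_indexes).
-behaviour(kura_migration).
-export([up/0, down/0]).

up() ->
[{create_index, <<"users">>, [email], [{unique, true}]},
 {create_index, <<"users">>, [username], [{unique, true}]}].

down() ->
[{drop_index, <<"users">>, [email]},
 {drop_index, <<"users">>, [username]}].
```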
Changeset errors as structured data
Errors are a list of {Field, Message} tuples on the changeset:
1> CS = post:changeset(#{}, #{}).
#kura_changeset{valid = false, errors = [{title, <<"can't be blank">>},
{body, <<"can't be blank">>}], ...}
2> CS#kura_changeset.valid.
false
3> CS#kura_changeset.errors.
[{title, <<"can't be blank">>}, {body, <<"can't be blank">>}]
4> CS2 = post:changeset(#{}, #{<<"title">> => <<"Hi">>, <<"body">> => <<"Hello">>}).
#kura_changeset{valid = false, errors = [{title, <<"should be at least 3 character(s)">>}], ...}
Working with changeset fields
Kura exports helper functions for reading and modifying changeset data programmatically. These are essential when building multi-step changeset pipelines.
| Function | Purpose |
|---|---|
get_field(CS, Field) | Returns value from changes, falling back to data |
get_change(CS, Field) | Returns value only if it is in changes |
put_change(CS, Field, Value) | Adds or overwrites a value in changes |
add_error(CS, Field, Msg) | Appends a custom error and sets valid = false |
apply_changes(CS) | Merges changes into data, returns the merged map (no persistence) |
A common use case is hashing a password before storing it:
-export([registration_changeset/2]).
registration_changeset(Data, Params) ->
CS = kura_changeset:cast(user, Data, Params, [username, email, password]),
CS1 = kura_changeset:validate_required(CS, [username, email, password]),
CS2 = kura_changeset:validate_length(CS1, password, [{min, 8}]),
maybe_hash_password(CS2).
maybe_hash_password(#kura_changeset{valid = true, changes = #{password := Password}} = CS) ->
%% erlang-bcrypt wraps both results in {ok, _} tuples
{ok, Salt} = bcrypt:gen_salt(),
{ok, Hash} = bcrypt:hashpw(Password, Salt),
kura_changeset:put_change(CS, password_hash, list_to_binary(Hash));
maybe_hash_password(CS) ->
CS.
apply_changes/1 is useful when you need the merged result without hitting the database — for example, to preview changes or pass data to a template:
Preview = kura_changeset:apply_changes(CS),
#{title := Title, body := Body} = Preview.
Rendering errors in JSON responses
Convert changeset errors to a JSON-friendly map. A field can have multiple errors (e.g., too short and wrong format), so we group them into lists:
changeset_errors_to_json(#kura_changeset{errors = Errors}) ->
lists:foldl(fun({Field, Msg}, Acc) ->
Key = atom_to_binary(Field),
Existing = maps:get(Key, Acc, []),
Acc#{Key => Existing ++ [Msg]}
end, #{}, Errors).
Use it in controllers:
create(#{json := Params}) ->
CS = post:changeset(#{}, Params),
case blog_repo:insert(CS) of
{ok, Post} ->
{json, 201, #{}, post_to_json(Post)};
{error, #kura_changeset{} = CS1} ->
{json, 422, #{}, #{errors => changeset_errors_to_json(CS1)}}
end.
The response looks like:
{
"errors": {
"title": ["can't be blank"],
"body": ["can't be blank"]
}
}
Available validation functions
| Function | Purpose |
|---|---|
validate_required(CS, Fields) | Fields must be present and non-empty |
validate_format(CS, Field, Regex) | Value must match the regex |
validate_length(CS, Field, Opts) | String length: [{min,N}, {max,N}, {is,N}] |
validate_number(CS, Field, Opts) | Number range: [{greater_than,N}, {less_than,N}, {greater_than_or_equal_to,N}, {less_than_or_equal_to,N}, {equal_to,N}] |
validate_inclusion(CS, Field, List) | Value must be in the list |
validate_change(CS, Field, Fun) | Custom validation: fun(Val) -> ok \| {error, Msg} |
unique_constraint(CS, Field) | Map PG unique violation to a changeset error |
foreign_key_constraint(CS, Field) | Map PG FK violation to a changeset error |
check_constraint(CS, Name, Field, Opts) | Map PG check constraint to a changeset error |
validate_format, validate_length, validate_number, and validate_inclusion only run when the field appears in changes. If the field was not cast, the validation is skipped. This means update changesets only validate the fields being changed — unchanged fields keep their existing values without re-validation.
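You can see the skip behavior in the shell. In this sketch the existing title is only two characters, below the minimum of three, yet the update changeset is valid because title never enters `changes` (this assumes required checks fall back to existing data, as the note above implies):

```erlang
%% Existing record with a title that would fail validate_length
1> Existing = #{id => 1, title => <<"Hi">>, body => <<"Old body">>},
%% Only body is being changed, so the length check on title is skipped
1> CS = post:changeset(Existing, #{<<"body">> => <<"New body">>}),
1> CS#kura_changeset.valid.
true
```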
Schemaless changesets
For validating data that does not map to a database table (like search filters or contact forms), pass a types map instead of a schema module:
Types = #{query => string, page => integer, per_page => integer},
CS = kura_changeset:cast(Types, #{}, Params, [query, page, per_page]),
CS1 = kura_changeset:validate_required(CS, [query]),
CS2 = kura_changeset:validate_number(CS1, per_page, [{greater_than, 0}, {less_than, 101}]).
Schemaless changesets cannot be persisted via the repo — they are for validation only.
Validations are declarative and composable. Errors are data, not exceptions. Now let's use changesets to perform CRUD operations with the repository.
CRUD with the Repository
We have schemas, migrations, and changesets. Now let's use the repository to create, read, update, and delete records — and wire it all up to a controller.
Insert
Create a record by building a changeset and passing it to blog_repo:insert/1:
Params = #{<<"title">> => <<"My First Post">>,
<<"body">> => <<"Hello from Nova!">>,
<<"status">> => <<"draft">>,
<<"user_id">> => 1},
CS = post:changeset(#{}, Params),
{ok, Post} = blog_repo:insert(CS).
If the changeset is invalid, insert returns {error, Changeset} with the errors:
CS = post:changeset(#{}, #{}),
{error, #kura_changeset{errors = [{title, <<"can't be blank">>}, ...]}} = blog_repo:insert(CS).
Query all
Use the query builder to fetch records:
Q = kura_query:from(post),
{ok, Posts} = blog_repo:all(Q).
Posts is a list of maps, each representing a row:
[#{id => 1, title => <<"My First Post">>, body => <<"Hello from Nova!">>,
status => draft, user_id => 1,
inserted_at => {{2026,2,23},{12,0,0}}, updated_at => {{2026,2,23},{12,0,0}}}]
Notice status is the atom draft, not a binary — Kura handles the conversion.
Get by ID
Fetch a single record by primary key:
{ok, Post} = blog_repo:get(post, 1).
{error, not_found} = blog_repo:get(post, 999).
Get by field
get_by/2 fetches a single record matching the given fields:
{ok, User} = blog_repo:get_by(user, [{email, <<"alice@example.com">>}]).
{error, not_found} = blog_repo:get_by(user, [{username, <<"nobody">>}]).
If more than one row matches, it returns {error, multiple_results}.
For more complex lookups, one/1 returns a single result from a query:
Q = kura_query:from(post),
Q1 = kura_query:where(Q, {status, published}),
Q2 = kura_query:order_by(Q1, [{inserted_at, desc}]),
{ok, Latest} = blog_repo:one(Q2).
Like get_by, it returns {error, not_found} when no rows match and {error, multiple_results} when more than one row matches.
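For instance, if several published posts exist, `one/1` makes the ambiguity explicit rather than silently returning the first row; narrowing the query with a limit is one way to resolve it (a sketch, assuming a table with more than one published post):

```erlang
%% Several rows match → one/1 refuses to guess
Q = kura_query:where(kura_query:from(post), {status, published}),
{error, multiple_results} = blog_repo:one(Q),
%% Constrain the query until at most one row can match
Q1 = kura_query:limit(kura_query:order_by(Q, [{inserted_at, desc}]), 1),
{ok, _Newest} = blog_repo:one(Q1).
```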
Update
To update a record, build a changeset from the existing data and new params:
{ok, Post} = blog_repo:get(post, 1),
CS = post:changeset(Post, #{<<"title">> => <<"Updated Title">>}),
{ok, UpdatedPost} = blog_repo:update(CS).
Only the changed fields are included in the UPDATE statement.
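You can inspect exactly what will hit the UPDATE with `get_change/2` (this sketch assumes it returns `undefined` for fields absent from `changes`, which the table earlier only implies):

```erlang
{ok, Post} = blog_repo:get(post, 1),
CS = post:changeset(Post, #{<<"title">> => <<"Updated Title">>}),
%% title was cast, so it appears in changes
<<"Updated Title">> = kura_changeset:get_change(CS, title),
%% body was untouched — assumed to come back as undefined
undefined = kura_changeset:get_change(CS, body).
```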
Delete
Delete takes a changeset built from the existing record:
{ok, Post} = blog_repo:get(post, 1),
CS = kura_changeset:cast(post, Post, #{}, []),
{ok, _} = blog_repo:delete(CS).
Query builder
The query builder composes — chain functions to build up complex queries:
%% Filter by status
Q = kura_query:from(post),
Q1 = kura_query:where(Q, {status, published}),
{ok, Published} = blog_repo:all(Q1).
%% Order by insertion date, newest first
Q2 = kura_query:order_by(Q1, [{inserted_at, desc}]),
%% Limit and offset for pagination
Q3 = kura_query:limit(Q2, 10),
Q4 = kura_query:offset(Q3, 20),
{ok, Page3} = blog_repo:all(Q4).
%% Select specific fields only
Q5 = kura_query:select(Q, [id, title, status]),
{ok, Posts} = blog_repo:all(Q5).
Where conditions
%% Equality
kura_query:where(Q, {title, <<"Hello">>})
%% Comparison operators
kura_query:where(Q, {user_id, '>', 5})
kura_query:where(Q, {inserted_at, '>=', {{2026,1,1},{0,0,0}}})
%% IN clause
kura_query:where(Q, {status, in, [draft, published]})
%% LIKE / ILIKE
kura_query:where(Q, {title, ilike, <<"%nova%">>})
%% NULL checks
kura_query:where(Q, {body, is_nil})
kura_query:where(Q, {body, is_not_nil})
%% OR conditions
kura_query:where(Q, {'or', [{status, draft}, {status, archived}]})
%% NOT IN clause
kura_query:where(Q, {status, not_in, [archived, deleted]})
%% BETWEEN
kura_query:where(Q, {user_id, between, {1, 100}})
%% NOT wrapper
kura_query:where(Q, {'not', {status, draft}})
%% AND conditions (multiple where calls are AND by default)
Q1 = kura_query:where(Q, {status, published}),
Q2 = kura_query:where(Q1, {user_id, 1}).
Wiring up to a controller
Let's build a posts API controller that uses the repo. Create src/controllers/blog_posts_controller.erl:
-module(blog_posts_controller).
-include_lib("kura/include/kura.hrl").
-export([
list/1,
show/1,
create/1,
update/1,
delete/1
]).
list(_Req) ->
Q = kura_query:from(post),
Q1 = kura_query:order_by(Q, [{inserted_at, desc}]),
{ok, Posts} = blog_repo:all(Q1),
{json, #{posts => [post_to_json(P) || P <- Posts]}}.
show(#{bindings := #{<<"id">> := Id}}) ->
case blog_repo:get(post, binary_to_integer(Id)) of
{ok, Post} ->
{json, post_to_json(Post)};
{error, not_found} ->
{status, 404, #{}, #{error => <<"post not found">>}}
end.
create(#{json := Params}) ->
CS = post:changeset(#{}, Params),
case blog_repo:insert(CS) of
{ok, Post} ->
{json, 201, #{}, post_to_json(Post)};
{error, #kura_changeset{} = CS1} ->
{json, 422, #{}, #{errors => changeset_errors_to_json(CS1)}}
end;
create(_Req) ->
{status, 422, #{}, #{error => <<"request body required">>}}.
update(#{bindings := #{<<"id">> := Id}, json := Params}) ->
case blog_repo:get(post, binary_to_integer(Id)) of
{ok, Post} ->
CS = post:changeset(Post, Params),
case blog_repo:update(CS) of
{ok, Updated} ->
{json, post_to_json(Updated)};
{error, #kura_changeset{} = CS1} ->
{json, 422, #{}, #{errors => changeset_errors_to_json(CS1)}}
end;
{error, not_found} ->
{status, 404, #{}, #{error => <<"post not found">>}}
end.
delete(#{bindings := #{<<"id">> := Id}}) ->
case blog_repo:get(post, binary_to_integer(Id)) of
{ok, Post} ->
CS = kura_changeset:cast(post, Post, #{}, []),
{ok, _} = blog_repo:delete(CS),
{status, 204};
{error, not_found} ->
{status, 404, #{}, #{error => <<"post not found">>}}
end.
%% Helpers
post_to_json(#{id := Id, title := Title, body := Body, status := Status,
user_id := UserId, inserted_at := InsertedAt}) ->
#{id => Id, title => Title, body => Body,
status => atom_to_binary(Status), user_id => UserId,
inserted_at => format_datetime(InsertedAt)}.
changeset_errors_to_json(#kura_changeset{errors = Errors}) ->
lists:foldl(fun({Field, Msg}, Acc) ->
Key = atom_to_binary(Field),
Existing = maps:get(Key, Acc, []),
Acc#{Key => Existing ++ [Msg]}
end, #{}, Errors).
format_datetime({{Y,Mo,D},{H,Mi,S}}) ->
list_to_binary(io_lib:format("~4..0B-~2..0B-~2..0BT~2..0B:~2..0B:~2..0B",
[Y, Mo, D, H, Mi, S]));
format_datetime(_) ->
null.
Adding the routes
#{prefix => "/api",
security => false,
routes => [
{"/posts", fun blog_posts_controller:list/1, #{methods => [get]}},
{"/posts/:id", fun blog_posts_controller:show/1, #{methods => [get]}},
{"/posts", fun blog_posts_controller:create/1, #{methods => [post]}},
{"/posts/:id", fun blog_posts_controller:update/1, #{methods => [put]}},
{"/posts/:id", fun blog_posts_controller:delete/1, #{methods => [delete]}}
]
}
Testing with curl
Start the node and test:
# Create a post
curl -s -X POST localhost:8080/api/posts \
-H "Content-Type: application/json" \
-d '{"title": "My First Post", "body": "Hello from Nova!", "status": "draft", "user_id": 1}' \
| python3 -m json.tool
# List all posts
curl -s localhost:8080/api/posts | python3 -m json.tool
# Get a single post
curl -s localhost:8080/api/posts/1 | python3 -m json.tool
# Update a post
curl -s -X PUT localhost:8080/api/posts/1 \
-H "Content-Type: application/json" \
-d '{"title": "Updated Title", "status": "published"}' \
| python3 -m json.tool
# Delete a post
curl -s -X DELETE localhost:8080/api/posts/1 -w "%{http_code}\n"
# Try creating with invalid data
curl -s -X POST localhost:8080/api/posts \
-H "Content-Type: application/json" \
-d '{"title": "Hi"}' \
| python3 -m json.tool
The last command returns a 422 with validation errors.
No SQL strings anywhere. The query builder composes, the repo executes.
This gives us a working API for a single resource. Next, let's add associations and preloading to connect posts to users and comments.
Associations and Preloading
So far our posts exist in isolation. In a real blog, posts belong to users and have comments. Kura supports belongs_to, has_many, has_one, and many_to_many associations with automatic preloading.
Adding associations to schemas
Post belongs to user
Update src/schemas/post.erl to add associations:
-module(post).
-behaviour(kura_schema).
-include_lib("kura/include/kura.hrl").
-export([table/0, fields/0, associations/0, changeset/2]).
table() -> <<"posts">>.
fields() ->
[
#kura_field{name = id, type = id, primary_key = true, nullable = false},
#kura_field{name = title, type = string, nullable = false},
#kura_field{name = body, type = text},
#kura_field{name = status, type = {enum, [draft, published, archived]}, default = draft},
#kura_field{name = user_id, type = integer},
#kura_field{name = inserted_at, type = utc_datetime},
#kura_field{name = updated_at, type = utc_datetime}
].
associations() ->
[
#kura_assoc{name = author, type = belongs_to, schema = user, foreign_key = user_id},
#kura_assoc{name = comments, type = has_many, schema = comment, foreign_key = post_id}
].
changeset(Data, Params) ->
CS = kura_changeset:cast(post, Data, Params, [title, body, status, user_id]),
CS1 = kura_changeset:validate_required(CS, [title, body]),
CS2 = kura_changeset:validate_length(CS1, title, [{min, 3}, {max, 200}]),
CS3 = kura_changeset:validate_inclusion(CS2, status, [draft, published, archived]),
kura_changeset:foreign_key_constraint(CS3, user_id).
The associations/0 callback returns a list of #kura_assoc{} records:
- `belongs_to` — the foreign key (`user_id`) is on this table. `schema` is the associated module, `foreign_key` is the column.
- `has_many` — the foreign key (`post_id`) is on the other table.
We also added foreign_key_constraint/2 to the changeset — if an insert fails because the user doesn't exist, Kura maps the PostgreSQL foreign key error to a friendly changeset error.
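In practice that means a bad `user_id` surfaces as changeset data instead of a crash. A sketch (the exact error message is an assumption):

```erlang
%% user 9999 does not exist — the PG FK violation becomes a changeset error
Params = #{<<"title">> => <<"Orphan Post">>,
           <<"body">> => <<"No such author">>,
           <<"user_id">> => 9999},
CS = post:changeset(#{}, Params),
{error, #kura_changeset{errors = Errors}} = blog_repo:insert(CS),
%% Errors contains something like {user_id, <<"does not exist">>}
true = lists:keymember(user_id, 1, Errors).
```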
Comment schema
Create src/schemas/comment.erl:
-module(comment).
-behaviour(kura_schema).
-include_lib("kura/include/kura.hrl").
-export([table/0, fields/0, associations/0, changeset/2]).
table() -> <<"comments">>.
fields() ->
[
#kura_field{name = id, type = id, primary_key = true, nullable = false},
#kura_field{name = body, type = text, nullable = false},
#kura_field{name = post_id, type = integer, nullable = false},
#kura_field{name = user_id, type = integer, nullable = false},
#kura_field{name = inserted_at, type = utc_datetime},
#kura_field{name = updated_at, type = utc_datetime}
].
associations() ->
[
#kura_assoc{name = post, type = belongs_to, schema = post, foreign_key = post_id},
#kura_assoc{name = author, type = belongs_to, schema = user, foreign_key = user_id}
].
changeset(Data, Params) ->
CS = kura_changeset:cast(comment, Data, Params, [body, post_id, user_id]),
CS1 = kura_changeset:validate_required(CS, [body, post_id, user_id]),
CS2 = kura_changeset:foreign_key_constraint(CS1, post_id),
kura_changeset:foreign_key_constraint(CS2, user_id).
User has many posts
Update src/schemas/user.erl to add the has_many side:
-export([table/0, fields/0, associations/0, changeset/2]).
%% ... fields() unchanged ...
associations() ->
[
#kura_assoc{name = posts, type = has_many, schema = post, foreign_key = user_id}
].
%% ... changeset/2 unchanged ...
Generate the migration
Compile to generate the comments table migration:
rebar3 compile
===> [kura] Schema diff detected changes
===> [kura] Generated src/migrations/m20260223130000_create_comments.erl
The migration creates the comments table with foreign keys to posts and users.
Preloading associations
By default, fetching a post returns only its own fields — associations are not loaded. Use kura_query:preload/2 to eagerly load them.
Preload via query
Q = kura_query:from(post),
Q1 = kura_query:preload(Q, [author, comments]),
{ok, Posts} = blog_repo:all(Q1).
Each post in Posts now has author and comments keys:
#{id => 1,
title => <<"My First Post">>,
author => #{id => 1, username => <<"alice">>, email => <<"alice@example.com">>, ...},
comments => [
#{id => 1, body => <<"Great post!">>, user_id => 2, ...},
#{id => 2, body => <<"Thanks!">>, user_id => 1, ...}
],
...}
Nested preloading
Load the author of each comment too:
Q = kura_query:from(post),
Q1 = kura_query:preload(Q, [author, {comments, [author]}]),
{ok, Posts} = blog_repo:all(Q1).
Now each comment also has its author loaded.
Standalone preload
If you already have records and want to preload associations after the fact:
{ok, Post} = blog_repo:get(post, 1),
Post1 = blog_repo:preload(post, Post, [author, comments]).
%% Works with lists too
{ok, Posts} = blog_repo:all(kura_query:from(post)),
Posts1 = blog_repo:preload(post, Posts, [author]).
Kura uses WHERE IN queries for preloading — not JOINs. This means one extra query per association, which keeps things predictable and avoids N+1 problems.
Creating with associations (cast_assoc)
You can create a post with comments in a single request using cast_assoc:
Params = #{<<"title">> => <<"New Post">>,
<<"body">> => <<"Content here">>,
<<"comments">> => [
#{<<"body">> => <<"First comment">>, <<"user_id">> => 2}
]},
CS = kura_changeset:cast(post, #{}, Params, [title, body, user_id]),
CS1 = kura_changeset:validate_required(CS, [title, body]),
CS2 = kura_changeset:cast_assoc(CS1, comments),
{ok, Post} = blog_repo:insert(CS2).
cast_assoc reads the comments key from the params, builds child changesets using comment:changeset/2, and wraps everything in a transaction. The parent is inserted first, then each child gets the parent's ID set as its foreign key.
Custom cast function
If you need different validation for nested creates:
CS2 = kura_changeset:cast_assoc(CS1, comments, #{
with => fun(Data, ChildParams) ->
comment:changeset(Data, ChildParams)
end
}).
API endpoint with preloading
Update the posts controller to return posts with their author and comments:
show(#{bindings := #{<<"id">> := Id}}) ->
case blog_repo:get(post, binary_to_integer(Id)) of
{ok, Post} ->
Post1 = blog_repo:preload(post, Post, [author, {comments, [author]}]),
{json, post_with_assocs_to_json(Post1)};
{error, not_found} ->
{status, 404, #{}, #{error => <<"post not found">>}}
end.
post_with_assocs_to_json(#{id := Id, title := Title, body := Body,
status := Status, author := Author,
comments := Comments}) ->
#{id => Id,
title => Title,
body => Body,
status => atom_to_binary(Status),
author => #{id => maps:get(id, Author),
username => maps:get(username, Author)},
comments => [#{id => maps:get(id, C),
body => maps:get(body, C),
author => #{id => maps:get(id, maps:get(author, C)),
username => maps:get(username, maps:get(author, C))}}
|| C <- Comments]}.
Test it:
curl -s localhost:8080/api/posts/1 | python3 -m json.tool
{
"id": 1,
"title": "My First Post",
"body": "Hello from Nova!",
"status": "draft",
"author": {
"id": 1,
"username": "alice"
},
"comments": [
{
"id": 1,
"body": "Great post!",
"author": {
"id": 2,
"username": "bob"
}
}
]
}
Next, let's add tags, many-to-many relationships, and embedded schemas for post metadata.
Tags, Many-to-Many & Embedded Schemas
Our blog has users, posts, and comments. Now let's add tags (many-to-many through a join table) and post metadata (embedded schema stored as JSONB).
Tag schema
Create src/schemas/tag.erl:
-module(tag).
-behaviour(kura_schema).
-include_lib("kura/include/kura.hrl").
-export([table/0, fields/0, associations/0, changeset/2]).
table() -> <<"tags">>.
fields() ->
[
#kura_field{name = id, type = id, primary_key = true, nullable = false},
#kura_field{name = name, type = string, nullable = false},
#kura_field{name = inserted_at, type = utc_datetime}
].
associations() ->
[
#kura_assoc{name = posts, type = many_to_many, schema = post,
join_through = <<"posts_tags">>, join_keys = {tag_id, post_id}}
].
changeset(Data, Params) ->
CS = kura_changeset:cast(tag, Data, Params, [name]),
CS1 = kura_changeset:validate_required(CS, [name]),
kura_changeset:unique_constraint(CS1, name).
Join table schema
The many-to-many relationship needs a join table. Create src/schemas/posts_tags.erl:
-module(posts_tags).
-behaviour(kura_schema).
-include_lib("kura/include/kura.hrl").
-export([table/0, fields/0]).
table() -> <<"posts_tags">>.
fields() ->
[
#kura_field{name = id, type = id, primary_key = true, nullable = false},
#kura_field{name = post_id, type = integer, nullable = false},
#kura_field{name = tag_id, type = integer, nullable = false}
].
Adding many-to-many to posts
Update the associations/0 in src/schemas/post.erl:
associations() ->
[
#kura_assoc{name = author, type = belongs_to, schema = user, foreign_key = user_id},
#kura_assoc{name = comments, type = has_many, schema = comment, foreign_key = post_id},
#kura_assoc{name = tags, type = many_to_many, schema = tag,
join_through = <<"posts_tags">>, join_keys = {post_id, tag_id}}
].
The many_to_many association specifies:
- `join_through` — the join table name
- `join_keys` — `{this_side_fk, other_side_fk}` on the join table
Generate the migrations
Compile to generate the new tables:
rebar3 compile
===> [kura] Schema diff detected changes
===> [kura] Generated src/migrations/m20260223140000_create_tags.erl
===> [kura] Generated src/migrations/m20260223140100_create_posts_tags.erl
Tagging posts with put_assoc
Use put_assoc to set tags on a post:
%% Get existing tags (or create new ones first)
{ok, Erlang} = blog_repo:get_by(tag, [{name, <<"erlang">>}]),
{ok, Nova} = blog_repo:get_by(tag, [{name, <<"nova">>}]),
%% Assign tags to a post
{ok, Post} = blog_repo:get(post, 1),
CS = kura_changeset:cast(post, Post, #{}, []),
CS1 = kura_changeset:put_assoc(CS, tags, [Erlang, Nova]),
{ok, _} = blog_repo:update(CS1).
put_assoc replaces the entire association — under the hood it deletes existing join table rows and inserts new ones, all in a transaction.
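Because the whole set is replaced, removing every tag is just a matter of passing an empty list, per the same pattern as above:

```erlang
%% Clear all tags from post 1 — put_assoc with [] deletes the join rows
{ok, Post} = blog_repo:get(post, 1),
CS = kura_changeset:cast(post, Post, #{}, []),
CS1 = kura_changeset:put_assoc(CS, tags, []),
{ok, _} = blog_repo:update(CS1).
```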
Preloading tags
Q = kura_query:from(post),
Q1 = kura_query:preload(Q, [author, tags]),
{ok, Posts} = blog_repo:all(Q1).
Each post now has a tags key with a list of tag maps:
#{id => 1, title => <<"My First Post">>,
tags => [#{id => 1, name => <<"erlang">>}, #{id => 2, name => <<"nova">>}],
...}
Embedded schemas
Sometimes you need structured data that doesn't deserve its own table. Kura's embedded schemas store nested structures as JSONB columns.
Post metadata
Create src/schemas/post_metadata.erl:
-module(post_metadata).
-behaviour(kura_schema).
-include_lib("kura/include/kura.hrl").
-export([table/0, fields/0, changeset/2]).
table() -> <<"_embedded">>.
fields() ->
[
#kura_field{name = meta_title, type = string},
#kura_field{name = meta_description, type = string},
#kura_field{name = og_image, type = string}
].
changeset(Data, Params) ->
CS = kura_changeset:cast(post_metadata, Data, Params,
[meta_title, meta_description, og_image]),
kura_changeset:validate_length(CS, meta_description, [{max, 160}]).
The embedded schema looks like a regular schema but with table() returning a placeholder (it's never queried directly) and no primary_key = true field.
Adding the embed to posts
Update src/schemas/post.erl to add an embeds/0 callback and a metadata JSONB field:
-export([table/0, fields/0, associations/0, embeds/0, changeset/2]).
fields() ->
[
#kura_field{name = id, type = id, primary_key = true, nullable = false},
#kura_field{name = title, type = string, nullable = false},
#kura_field{name = body, type = text},
#kura_field{name = status, type = {enum, [draft, published, archived]}, default = draft},
#kura_field{name = user_id, type = integer},
#kura_field{name = metadata, type = {embed, embeds_one, post_metadata}},
#kura_field{name = inserted_at, type = utc_datetime},
#kura_field{name = updated_at, type = utc_datetime}
].
embeds() ->
[
#kura_embed{name = metadata, type = embeds_one, schema = post_metadata}
].
Compile to generate a migration that adds the metadata JSONB column:
rebar3 compile
Using embedded schemas
Cast the embed in your changeset:
changeset(Data, Params) ->
CS = kura_changeset:cast(post, Data, Params, [title, body, status, user_id]),
CS1 = kura_changeset:validate_required(CS, [title, body]),
CS2 = kura_changeset:validate_length(CS1, title, [{min, 3}, {max, 200}]),
CS3 = kura_changeset:validate_inclusion(CS2, status, [draft, published, archived]),
CS4 = kura_changeset:foreign_key_constraint(CS3, user_id),
kura_changeset:cast_embed(CS4, metadata).
cast_embed reads the metadata key from params and builds a nested changeset using post_metadata:changeset/2. Create a post with metadata:
curl -s -X POST localhost:8080/api/posts \
-H "Content-Type: application/json" \
-d '{
"title": "SEO Optimized Post",
"body": "Great content here",
"user_id": 1,
"metadata": {
"meta_title": "Best Post Ever",
"meta_description": "A post about great things",
"og_image": "https://example.com/image.jpg"
}
}' | python3 -m json.tool
The metadata is stored as JSONB in PostgreSQL and loaded back as a nested map:
#{id => 5,
title => <<"SEO Optimized Post">>,
metadata => #{meta_title => <<"Best Post Ever">>,
meta_description => <<"A post about great things">>,
og_image => <<"https://example.com/image.jpg">>},
...}
Filtering by tag
To find posts with a specific tag, use a raw SQL fragment or build the query through the join table:
%% Find all post IDs for a given tag
find_posts_by_tag(TagName) ->
{ok, Tag} = blog_repo:get_by(tag, [{name, TagName}]),
TagId = maps:get(id, Tag),
Q = kura_query:from(posts_tags),
Q1 = kura_query:where(Q, {tag_id, TagId}),
{ok, JoinRows} = blog_repo:all(Q1),
PostIds = [maps:get(post_id, R) || R <- JoinRows],
Q2 = kura_query:from(post),
Q3 = kura_query:where(Q2, {id, in, PostIds}),
Q4 = kura_query:preload(Q3, [author, tags]),
blog_repo:all(Q4).
API endpoint for tags
Add a simple tags controller:
-module(blog_tags_controller).
-export([list/1, create/1]).
list(_Req) ->
Q = kura_query:from(tag),
Q1 = kura_query:order_by(Q, [{name, asc}]),
{ok, Tags} = blog_repo:all(Q1),
{json, #{tags => Tags}}.
create(#{json := Params}) ->
CS = tag:changeset(#{}, Params),
case blog_repo:insert(CS) of
{ok, Tag} ->
{json, 201, #{}, Tag};
{error, _CS} ->
{status, 422, #{}, #{error => <<"invalid tag">>}}
end.
We now have a rich data model with associations, many-to-many relationships, and embedded schemas. Next, let's explore advanced queries for complex data retrieval.
Advanced Queries
The basic query builder covers most needs — where, order_by, limit, offset. But sometimes you need aggregations, subqueries, common table expressions (CTEs), or window functions. Kura supports all of these.
Aggregates
%% Count all published posts
Q = kura_query:from(post),
Q1 = kura_query:where(Q, {status, published}),
Q2 = kura_query:select(Q1, [{count, id}]),
{ok, [#{count => Count}]} = blog_repo:all(Q2).
%% Multiple aggregates
Q = kura_query:from(post),
Q1 = kura_query:group_by(Q, [user_id]),
Q2 = kura_query:select(Q1, [user_id, {count, id}, {max, inserted_at}]),
{ok, Stats} = blog_repo:all(Q2).
%% [#{user_id => 1, count => 5, max => {{2026,2,23},{12,0,0}}}, ...]
Supported aggregate functions: count, sum, avg, min, max.
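min and max follow the same shape — a sketch that finds the first and most recent publication dates in one query (assuming the result map is keyed by function name, as in the count example above):

```erlang
%% Earliest and latest timestamps among published posts
Q = kura_query:from(post),
Q1 = kura_query:where(Q, {status, published}),
Q2 = kura_query:select(Q1, [{min, inserted_at}, {max, inserted_at}]),
{ok, [#{min := First, max := Latest}]} = blog_repo:all(Q2).
```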
Having clauses
Filter grouped results:
%% Users with more than 10 posts
Q = kura_query:from(post),
Q1 = kura_query:group_by(Q, [user_id]),
Q2 = kura_query:select(Q1, [user_id, {count, id}]),
Q3 = kura_query:having(Q2, {count, id, '>', 10}),
{ok, ActiveAuthors} = blog_repo:all(Q3).
Joins
%% Join posts with users
Q = kura_query:from(post),
Q1 = kura_query:join(Q, user, {post, user_id, user, id}),
Q2 = kura_query:select(Q1, [{post, [id, title]}, {user, [username]}]),
{ok, Results} = blog_repo:all(Q2).
%% Left join (include posts without comments)
Q = kura_query:from(post),
Q1 = kura_query:left_join(Q, comment, {post, id, comment, post_id}),
Q2 = kura_query:group_by(Q1, [{post, id}]),
Q3 = kura_query:select(Q2, [{post, [id, title]}, {count, {comment, id}}]),
{ok, PostsWithCounts} = blog_repo:all(Q3).
Subqueries
Use a query as a condition in another query:
%% Posts by users who joined in the last 30 days
RecentUsers = kura_query:from(user),
RecentUsers1 = kura_query:where(RecentUsers, {inserted_at, '>=', ThirtyDaysAgo}),
RecentUsers2 = kura_query:select(RecentUsers1, [id]),
Q = kura_query:from(post),
Q1 = kura_query:where(Q, {user_id, in, {subquery, RecentUsers2}}),
{ok, Posts} = blog_repo:all(Q1).
Common Table Expressions (CTEs)
CTEs make complex queries readable by breaking them into named steps:
%% Find the top 5 authors and their latest post
TopAuthors = kura_query:from(post),
TopAuthors1 = kura_query:group_by(TopAuthors, [user_id]),
TopAuthors2 = kura_query:select(TopAuthors1, [user_id, {count, id}]),
TopAuthors3 = kura_query:order_by(TopAuthors2, [{count, desc}]),
TopAuthors4 = kura_query:limit(TopAuthors3, 5),
Q = kura_query:with(<<"top_authors">>, TopAuthors4),
Q1 = kura_query:from_cte(Q, <<"top_authors">>),
Q2 = kura_query:join(Q1, user, {<<"top_authors">>, user_id, user, id}),
{ok, Results} = blog_repo:all(Q2).
Window functions
Compute values across a set of rows without collapsing them:
%% Rank posts by comment count within each user
Q = kura_query:from(post),
Q1 = kura_query:left_join(Q, comment, {post, id, comment, post_id}),
Q2 = kura_query:select(Q1, [
{post, [id, title, user_id]},
{count, {comment, id}},
{window, row_number, [], [{partition_by, {post, user_id}},
{order_by, [{count, desc}]}]}
]),
{ok, RankedPosts} = blog_repo:all(Q2).
Union queries
Combine results from multiple queries:
Drafts = kura_query:from(post),
Drafts1 = kura_query:where(Drafts, {status, draft}),
Drafts2 = kura_query:select(Drafts1, [id, title, status]),
Archived = kura_query:from(post),
Archived1 = kura_query:where(Archived, {status, archived}),
Archived2 = kura_query:select(Archived1, [id, title, status]),
Q = kura_query:union(Drafts2, Archived2),
{ok, Results} = blog_repo:all(Q).
Distinct
Q = kura_query:from(post),
Q1 = kura_query:select(Q, [user_id]),
Q2 = kura_query:distinct(Q1),
{ok, UniqueAuthors} = blog_repo:all(Q2).
Raw SQL escape hatch
When the query builder doesn't cover your case, use raw SQL:
SQL = "SELECT p.id, p.title, COUNT(c.id) as comment_count "
"FROM posts p LEFT JOIN comments c ON c.post_id = p.id "
"GROUP BY p.id ORDER BY comment_count DESC LIMIT $1",
{ok, Results} = blog_repo:query(SQL, [10]).
query/2 returns rows as maps with atom keys.
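Because rows arrive as atom-keyed maps, you can pattern-match them directly — a sketch that prints the results of the query above:

```erlang
%% Rows from query/2 are atom-keyed maps, so a comprehension with a
%% map pattern extracts the columns directly.
{ok, Rows} = blog_repo:query(SQL, [10]),
[io:format("~s: ~p comments~n", [Title, Count])
 || #{title := Title, comment_count := Count} <- Rows].
```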
Next, let's cover transactions and multi for atomic multi-step operations.
Transactions, Multi & Bulk Operations
For simple CRUD, the repo functions are enough. But some operations need atomicity (all-or-nothing), multi-step pipelines, or bulk efficiency. Kura provides transactions, multi, and bulk operations for these cases.
Transactions
Wrap multiple operations in a transaction — if any step fails, everything rolls back:
blog_repo:transaction(fun() ->
CS1 = user:changeset(#{}, #{<<"username">> => <<"alice">>,
<<"email">> => <<"alice@example.com">>,
<<"password_hash">> => <<"hashed">>}),
{ok, User} = blog_repo:insert(CS1),
CS2 = post:changeset(#{}, #{<<"title">> => <<"Welcome">>,
<<"body">> => <<"Hello world">>,
<<"user_id">> => maps:get(id, User)}),
{ok, _Post} = blog_repo:insert(CS2),
ok
end).
If the second insert fails, the user creation is rolled back too. The transaction function returns {ok, ReturnValue} on success or {error, Reason} on failure.
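Because a crash inside the fun aborts the transaction, the {ok, _} = matches above double as rollback triggers. The caller sees the outcome as a tagged tuple — a sketch (UserParams is a hypothetical params map):

```erlang
%% Caller-side handling: a failed match inside the fun rolls the
%% transaction back and surfaces here as {error, Reason}.
case blog_repo:transaction(fun() ->
         {ok, User} = blog_repo:insert(user:changeset(#{}, UserParams)),
         maps:get(id, User)
     end) of
    {ok, UserId} ->
        logger:info("created user ~p", [UserId]);
    {error, Reason} ->
        logger:error("signup rolled back: ~p", [Reason])
end.
```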
Multi: named transaction pipelines
For complex multi-step operations, kura_multi provides a pipeline where each step has a name and can reference results from previous steps:
M = kura_multi:new(),
%% Step 1: Create a user
M1 = kura_multi:insert(M, create_user,
user:changeset(#{}, #{<<"username">> => <<"alice">>,
<<"email">> => <<"alice@example.com">>,
<<"password_hash">> => <<"hashed">>})),
%% Step 2: Create a first draft, using the user ID from step 1
M2 = kura_multi:insert(M1, create_draft,
fun(#{create_user := User}) ->
post:changeset(#{}, #{<<"title">> => <<"My First Draft">>,
<<"body">> => <<"Coming soon...">>,
<<"user_id">> => maps:get(id, User)})
end),
%% Step 3: Run a custom function
M3 = kura_multi:run(M2, send_welcome,
fun(#{create_user := User}) ->
logger:info("Welcome ~s!", [maps:get(username, User)]),
{ok, sent}
end),
%% Execute everything atomically
case blog_repo:multi(M3) of
{ok, #{create_user := User, create_draft := Post, send_welcome := sent}} ->
logger:info("User ~p created with draft post ~p",
[maps:get(id, User), maps:get(id, Post)]);
{error, FailedStep, FailedValue, _Completed} ->
logger:error("Multi failed at step ~p: ~p", [FailedStep, FailedValue])
end.
Multi API
| Function | Purpose |
|---|---|
kura_multi:new() | Create a new multi |
kura_multi:insert(M, Name, CS) | Insert a record (changeset or fun returning changeset) |
kura_multi:update(M, Name, CS) | Update a record |
kura_multi:delete(M, Name, CS) | Delete a record |
kura_multi:run(M, Name, Fun) | Run a custom function |
Steps that take a fun receive a map of all completed steps so far:
fun(#{step1 := Result1, step2 := Result2}) -> ...
Error handling
When a multi fails, you get the name of the failed step, the error value, and a map of steps that completed before the failure:
case blog_repo:multi(M) of
{ok, Results} ->
%% All steps succeeded, Results is a map of step_name => result
ok;
{error, FailedStep, FailedValue, CompletedSteps} ->
%% FailedStep: atom name of the step that failed
%% FailedValue: the error (e.g., a changeset with errors)
%% CompletedSteps: map of steps that succeeded (then rolled back)
ok
end.
Bulk operations
insert_all — batch inserts
Insert many records at once:
Posts = [
#{title => <<"Post 1">>, body => <<"Body 1">>, status => draft, user_id => 1},
#{title => <<"Post 2">>, body => <<"Body 2">>, status => draft, user_id => 1},
#{title => <<"Post 3">>, body => <<"Body 3">>, status => published, user_id => 2}
],
{ok, 3} = blog_repo:insert_all(post, Posts).
insert_all bypasses changesets — it inserts raw maps directly. Use it for imports and seeding where you trust the data. The return value is the number of rows inserted.
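If the data is only partly trusted, one option is to run each map through the schema changeset first and bulk-insert only the rows that validate. The validity check here (kura_changeset:is_valid/1) is a hypothetical helper name — adapt it to however your changeset exposes validity:

```erlang
%% Validate each row via the schema changeset, then insert the
%% survivors in one statement. is_valid/1 is assumed, not verified.
import_posts_checked(Rows) ->
    Valid = [R || R <- Rows,
                  kura_changeset:is_valid(post:changeset(#{}, R))],
    blog_repo:insert_all(post, Valid).
```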
update_all — batch updates
Update many records matching a query:
%% Publish all drafts
Q = kura_query:from(post),
Q1 = kura_query:where(Q, {status, draft}),
{ok, Count} = blog_repo:update_all(Q1, #{status => published}).
update_all returns the count of rows affected. It applies the updates in a single SQL statement.
delete_all — batch deletes
Delete all records matching a query:
%% Delete all archived posts
Q = kura_query:from(post),
Q1 = kura_query:where(Q, {status, archived}),
{ok, Count} = blog_repo:delete_all(Q1).
Upserts with on_conflict
Import data without failing on duplicates:
%% Insert a tag, do nothing if it already exists
CS = tag:changeset(#{}, #{<<"name">> => <<"erlang">>}),
{ok, Tag} = blog_repo:insert(CS, #{on_conflict => {name, nothing}}).
The on_conflict option controls what happens when a unique constraint is violated:
%% Do nothing on conflict (skip the row)
#{on_conflict => {name, nothing}}
%% Replace all fields on conflict
#{on_conflict => {name, replace_all}}
%% Replace specific fields on conflict
#{on_conflict => {name, {replace, [updated_at]}}}
%% Use a named constraint instead of a field
#{on_conflict => {{constraint, <<"tags_name_key">>}, nothing}}
Practical example: importing posts
import_posts(Posts) ->
lists:foreach(fun(PostData) ->
CS = post:changeset(#{}, PostData),
blog_repo:insert(CS, #{on_conflict => {title, nothing}})
end, Posts).
Putting it all together
A controller action that publishes a post and notifies subscribers atomically:
publish(#{bindings := #{<<"id">> := Id}}) ->
case blog_repo:get(post, binary_to_integer(Id)) of
{ok, #{status := draft} = Post} ->
M = kura_multi:new(),
M1 = kura_multi:update(M, publish_post,
post:changeset(Post, #{<<"status">> => <<"published">>})),
M2 = kura_multi:run(M1, notify,
fun(#{publish_post := Published}) ->
nova_pubsub:broadcast(posts, "post_published", Published),
{ok, notified}
end),
case blog_repo:multi(M2) of
{ok, #{publish_post := Published}} ->
{json, post_to_json(Published)};
{error, _Step, _Value, _} ->
{status, 422, #{}, #{error => <<"failed to publish">>}}
end;
{ok, _} ->
{status, 422, #{}, #{error => <<"only drafts can be published">>}};
{error, not_found} ->
{status, 404, #{}, #{error => <<"post not found">>}}
end.
That covers the data layer — schemas, changesets, CRUD, associations, and now transactions and bulk operations. Next we shift to authentication: let's add user sessions to our application.
Sessions
Nova has a built-in session system backed by ETS (Erlang Term Storage). Session IDs are stored in a session_id cookie.
How sessions work
Nova automatically creates a session for every visitor. On each request, the nova_stream_h stream handler checks for a session_id cookie:
- Cookie exists — the request proceeds normally. The session ID is read from the cookie when you call the session API.
- No cookie — Nova generates a new session ID, sets the session_id cookie on the response, and stores the ID in the request map.
This means you never need to manually generate session IDs or set the session cookie. By the time your controller runs, every request already has a session — you just read from and write to it.
The session API
nova_session:get(Req, Key) -> {ok, Value} | {error, not_found}.
nova_session:set(Req, Key, Value) -> ok | {error, session_id_not_set}.
nova_session:delete(Req) -> {ok, Req1}.
nova_session:delete(Req, Key) -> {ok, Req1}.
| Function | Description |
|---|---|
get/2 | Retrieve a value by key. Returns {error, not_found} if the key or session doesn't exist. |
set/3 | Store a value in the current session. |
delete/1 | Delete the entire session and expire the cookie (sets max_age => 0). Returns an updated request — use this Req1 if you need the cookie change in the response. |
delete/2 | Delete a single key from the session. |
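Putting the API together — a small sketch that counts how many pages the current visitor has viewed in this session:

```erlang
%% Read-modify-write against the session: a per-visitor page counter.
count_visit(Req) ->
    Visits = case nova_session:get(Req, <<"visits">>) of
                 {ok, N} -> N + 1;
                 {error, not_found} -> 1
             end,
    ok = nova_session:set(Req, <<"visits">>, Visits),
    Visits.
```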
Configuration
The session manager is configured in sys.config:
{nova, [
{use_sessions, true}, %% Enable sessions (default: true)
{session_manager, nova_session_ets} %% Backend module (default)
]}
nova_session_ets stores session data in an ETS table and replicates changes across clustered nodes using nova_pubsub. Set use_sessions to false if your application doesn't need sessions (e.g. a pure JSON API).
Cookie options
Nova sets the session_id cookie automatically with default options. For production, you may want to customise the cookie by setting it yourself in a plugin or by configuring Cowboy's cookie defaults:
cowboy_req:set_resp_cookie(<<"session_id">>, SessionId, Req, #{
path => <<"/">>, %% Cookie is valid for all paths
http_only => true, %% Not accessible from JavaScript
secure => true, %% Only sent over HTTPS
max_age => 86400 %% Expires after 24 hours (in seconds)
}).
Custom session backends
If you want to store sessions in a database or Redis instead of ETS, implement the nova_session behaviour:
-module(my_redis_session).
-behaviour(nova_session).
-export([start_link/0,
get_value/2,
set_value/3,
delete_value/1,
delete_value/2]).
start_link() ->
    %% Start a client process for your store here if one is needed;
    %% returning ignore means no process is supervised.
    ignore.
get_value(_SessionId, _Key) ->
    %% Look the key up in your store; return {ok, Value} on a hit.
    {error, not_found}.
set_value(_SessionId, _Key, _Value) ->
    ok.
delete_value(_SessionId) ->
    ok.
delete_value(_SessionId, _Key) ->
    ok.
Then configure it:
{nova, [
{session_manager, my_redis_session}
]}
Distributed sessions
nova_session_ets replicates session changes across clustered nodes using nova_pubsub (built on OTP's pg module). When you call nova_session:set/3 on one node, the change is broadcast to all other nodes in the cluster.
This means users can hit any node behind a load balancer and their session data is available — no sticky sessions required.
With sessions in place, let's build authentication on top of them.
Authentication
Now let's protect routes so only logged-in users can access them. We'll build session-based authentication by hand, then see how the gen_auth generator scaffolds a complete email/password system.
Security in route groups
Authentication in Nova is configured per route group using the security key. It points to a function that receives the request and returns either {true, AuthData} (allow) or a denial value (deny).
Creating a security module
Create src/blog_auth.erl:
-module(blog_auth).
-export([session_auth/1]).
session_auth(Req) ->
case nova_session:get(Req, <<"username">>) of
{ok, Username} ->
{true, #{username => Username}};
{error, _} ->
{redirect, "/login"}
end.
session_auth/1 checks whether the session contains a username. If so, it returns {true, AuthData} — the auth data map is merged into the request and accessible in your controller as auth_data. If the session is empty, it redirects to the login page.
Returning {redirect, "/login"} instead of a bare false sends unauthenticated visitors to the login form. A bare false would trigger the generic 401 error handler, which is more appropriate for APIs.
Processing the login form
Credential validation belongs in the controller, not the security function. The security function's job is to gate access — the login POST route is public by definition (unauthenticated users need to reach it), so it uses security => false.
The controller checks the submitted credentials and either creates a session or re-renders the form with an error:
login_post(#{params := Params} = Req) ->
case Params of
#{<<"username">> := Username,
<<"password">> := <<"password">>} ->
nova_session:set(Req, <<"username">>, Username),
{redirect, "/"};
_ ->
{ok, [{error, <<"Invalid username or password">>}], #{view => login}}
end.
On success, we store the username in the session and redirect to the home page. On failure, we re-render the login template with an error message — the user sees the form again instead of a raw error page.
This is a hardcoded password for demonstration only. In a real application you would validate credentials against a database with properly hashed passwords.
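As a sketch of what the database-backed version looks like — blog_accounts:authenticate/2 stands in for a context function that loads the user and verifies the password hash (the gen_auth scaffold later in this chapter generates one like it):

```erlang
%% Hypothetical database-backed login: authenticate/2 is assumed to
%% return {ok, User} only when the stored hash matches the password.
login_post(#{params := #{<<"email">> := Email,
                         <<"password">> := Password}} = Req) ->
    case blog_accounts:authenticate(Email, Password) of
        {ok, User} ->
            nova_session:set(Req, <<"user_id">>, maps:get(id, User)),
            {redirect, "/"};
        {error, _Reason} ->
            {ok, [{error, <<"Invalid email or password">>}], #{view => login}}
    end;
login_post(_Req) ->
    {ok, [{error, <<"Missing credentials">>}], #{view => login}}.
```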
How security works
The security flow for each request is:
- Nova matches the request to a route group
- If security is false, skip straight to the controller
- If security is a function, call it with the request map
- If it returns {true, AuthData}, merge auth_data => AuthData into the request and continue to the controller
- If it returns true, continue to the controller (no auth data attached)
- If it returns false, trigger the 401 error handler
- If it returns {redirect, Path}, send a 302 redirect without calling the controller
- If it returns {false, StatusCode, Headers, Body}, respond with a custom error
The structured {false, StatusCode, Headers, Body} form is useful for APIs where you want to return JSON error details instead of triggering the generic 401 handler.
You can have different security functions for different route groups — one for API token auth, another for session auth, and so on.
Wiring up the login flow
Update the controller to handle login, logout, and the home page:
-module(blog_main_controller).
-export([
index/1,
login/1,
login_post/1,
logout/1
]).
index(#{auth_data := #{username := Username}}) ->
{ok, [{message, <<"Hello ", Username/binary>>}]}.
login(_Req) ->
{ok, [], #{view => login}}.
login_post(#{params := Params} = Req) ->
case Params of
#{<<"username">> := Username,
<<"password">> := <<"password">>} ->
nova_session:set(Req, <<"username">>, Username),
{redirect, "/"};
_ ->
{ok, [{error, <<"Invalid username or password">>}], #{view => login}}
end.
logout(Req) ->
{ok, Req1} = nova_session:delete(Req),
{redirect, "/login", Req1}.
Updating the routes
routes(_Environment) ->
[
%% Public routes (no auth required)
#{prefix => "",
security => false,
routes => [
{"/login", fun blog_main_controller:login/1, #{methods => [get]}},
{"/login", fun blog_main_controller:login_post/1, #{methods => [post]}},
{"/heartbeat", fun(_) -> {status, 200} end, #{methods => [get]}}
]
},
%% Protected routes (session auth required)
#{prefix => "",
security => fun blog_auth:session_auth/1,
routes => [
{"/", fun blog_main_controller:index/1, #{methods => [get]}},
{"/logout", fun blog_main_controller:logout/1, #{methods => [get]}}
]
}
].
The gen_auth scaffold
For a production-ready authentication system, use the gen_auth generator:
rebar3 nova gen_auth
This generates a complete email/password auth system:
- Migration — a users table with email, password_hash, and confirmation fields
- Schema — user.erl with registration and login changesets
- Context module — blog_accounts.erl with create_user, authenticate, and token management
- Security callback — blog_auth.erl with session- and token-based authentication
- Controllers — registration, login, and password reset controllers
- Test suite — Common Test suite covering the auth flow
The generated code uses bcrypt for password hashing and includes:
- Email/password registration with confirmation
- Login with session creation
- Logout with session destruction
- Password reset flow with time-limited tokens
- Remember-me tokens
gen_auth is a starting point. Review the generated code, adjust the changeset validations, and wire in your email adapter (see Sending Email) for confirmation and password reset emails.
Next, let's look at authorization — controlling what authenticated users can do.
Authorization
Authentication answers "who are you?" — authorization answers "what can you do?" This chapter covers patterns for controlling access based on user roles and permissions.
Role-based security functions
The simplest approach is checking roles in your security function:
-module(blog_auth).
-export([session_auth/1, admin_auth/1]).
session_auth(Req) ->
case nova_session:get(Req, <<"user_id">>) of
{ok, UserId} ->
{ok, User} = blog_repo:get(user, UserId),
{true, User};
{error, _} ->
{redirect, "/login"}
end.
admin_auth(Req) ->
case session_auth(Req) of
{true, #{role := admin} = User} ->
{true, User};
{true, _User} ->
{false, 403, #{}, #{error => <<"forbidden">>}};
Other ->
Other
end.
Use different security functions for different route groups:
routes(_Environment) ->
[
#{prefix => "",
security => fun blog_auth:session_auth/1,
routes => [
{"/", fun blog_main_controller:index/1, #{methods => [get]}}
]
},
#{prefix => "/admin",
security => fun blog_auth:admin_auth/1,
routes => [
{"/dashboard", fun blog_admin_controller:index/1, #{methods => [get]}}
]
}
].
Resource-level authorization
Sometimes you need to check ownership — "can this user edit this post?" This happens in the controller:
update(#{bindings := #{<<"id">> := Id}, json := Params,
auth_data := #{id := UserId}}) ->
case blog_repo:get(post, binary_to_integer(Id)) of
{ok, #{user_id := UserId} = Post} ->
%% User owns this post — allow update
CS = post:changeset(Post, Params),
case blog_repo:update(CS) of
{ok, Updated} -> {json, post_to_json(Updated)};
{error, CS1} -> {json, 422, #{}, #{errors => changeset_errors_to_json(CS1)}}
end;
{ok, _Post} ->
%% Different user — forbidden
{status, 403, #{}, #{error => <<"you can only edit your own posts">>}};
{error, not_found} ->
{status, 404, #{}, #{error => <<"post not found">>}}
end.
Token-based API authentication
For APIs, use bearer tokens instead of sessions:
api_auth(Req) ->
case cowboy_req:header(<<"authorization">>, Req) of
<<"Bearer ", Token/binary>> ->
case blog_accounts:verify_token(Token) of
{ok, User} -> {true, User};
{error, _} -> {false, 401, #{}, #{error => <<"invalid token">>}}
end;
_ ->
{false, 401, #{}, #{error => <<"missing authorization header">>}}
end.
Combining authentication strategies
Different route groups can use different strategies:
routes(_Environment) ->
[
%% Public
#{prefix => "", security => false,
routes => [
{"/login", fun blog_main_controller:login/1, #{methods => [get, post]}}
]},
%% Session-based (HTML pages)
#{prefix => "", security => fun blog_auth:session_auth/1,
routes => [
{"/", fun blog_main_controller:index/1, #{methods => [get]}},
{"/logout", fun blog_main_controller:logout/1, #{methods => [get]}}
]},
%% Token-based (API)
#{prefix => "/api", security => fun blog_auth:api_auth/1,
routes => [
{"/posts", fun blog_posts_controller:list/1, #{methods => [get]}},
{"/posts", fun blog_posts_controller:create/1, #{methods => [post]}}
]}
].
With authentication and authorization covered, let's move to the visual layer. Next: ErlyDTL Templates.
ErlyDTL Templates
Nova uses ErlyDTL for HTML templating — an Erlang implementation of Django's template language. Templates live in src/views/ and are compiled to Erlang modules at build time.
Template basics
ErlyDTL supports the same syntax as Django templates:
| Syntax | Purpose | Example |
|---|---|---|
{{ var }} | Output a variable | {{ username }} |
{% if cond %}...{% endif %} | Conditional | {% if error %}...{% endif %} |
{% for x in list %}...{% endfor %} | Loop | {% for post in posts %}...{% endfor %} |
{{ var|filter }} | Apply a filter | {{ name|upper }} |
{{ var|default:"n/a" }} | Fallback value | {{ bio|default:"No bio" }} |
{% extends "base.dtl" %} | Inherit a layout | See below |
{% block name %}...{% endblock %} | Override a block | See below |
See the ErlyDTL documentation for the full list of tags and filters.
Creating a base layout
Most pages share the same outer HTML. Template inheritance lets you define a base layout once and override specific blocks in child templates.
Create src/views/base.dtl:
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>{% block title %}Blog{% endblock %}</title>
</head>
<body>
<nav>
{% if username %}
<span>{{ username }}</span> | <a href="/logout">Logout</a>
{% else %}
<a href="/login">Login</a>
{% endif %}
</nav>
<main>
{% block content %}{% endblock %}
</main>
</body>
</html>
Child templates use {% extends "base.dtl" %} and fill in the blocks they need. Anything outside a {% block %} tag in the child is ignored.
Creating a login template
Create src/views/login.dtl:
{% extends "base.dtl" %}
{% block title %}Login{% endblock %}
{% block content %}
<div>
{% if error %}<p style="color:red">{{ error }}</p>{% endif %}
<form action="/login" method="post">
<input type="hidden" name="_csrf_token" value="{{ csrf_token }}" />
<label for="username">Username:</label>
<input type="text" id="username" name="username"><br>
<label for="password">Password:</label>
<input type="password" id="password" name="password"><br>
<input type="submit" value="Submit">
</form>
</div>
{% endblock %}
This form POSTs to /login with username and password fields. The URL-encoded body will be decoded by nova_request_plugin (which we configured in the Plugins chapter).
The hidden _csrf_token field is required because we enabled nova_csrf_plugin. Nova automatically injects the csrf_token variable into every template — you just need to include it in the form. Without it, the POST request would be rejected with a 403 error.
Adding a controller function
Our generated controller is in src/controllers/blog_main_controller.erl:
-module(blog_main_controller).
-export([
index/1,
login/1
]).
index(_Req) ->
{ok, [{message, "Hello world!"}]}.
login(_Req) ->
{ok, [], #{view => login}}.
The return tuple {ok, [], #{view => login}} tells Nova:
- ok — render a template
- [] — no template variables
- #{view => login} — use the login template (matches login.dtl)
How template resolution works
When a controller returns {ok, Variables} (without a view option), Nova looks for a template named after the controller module. For blog_main_controller:index/1, it looks for blog_main.dtl.
When you specify #{view => login}, Nova uses login.dtl instead.
Template options
The full return tuple is {ok, Variables, Options} where Options is a map that supports three keys:
| Option | Default | Description |
|---|---|---|
view | derived from module name | Which template to render |
headers | #{<<"content-type">> => <<"text/html">>} | Response headers |
status_code | 200 | HTTP status code |
Some examples:
%% Render login.dtl with default 200 status
{ok, [], #{view => login}}.
%% Render with a 422 status (useful for form validation errors)
{ok, [{error, <<"Invalid input">>}], #{view => login, status_code => 422}}.
%% Return plain text instead of HTML
{ok, [{data, Body}], #{headers => #{<<"content-type">> => <<"text/plain">>}}}.
{view, Variables} and {view, Variables, Options} are aliases for {ok, ...} — they behave identically.
With templates in place, let's build complete pages in the next chapter: Building Pages.
Building Pages
In the previous chapter we learned ErlyDTL template syntax. Now let's build complete pages — layouts with navigation, forms with error handling, and reusable partials.
A proper base layout
Expand the base layout with navigation, flash messages, and a footer:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>{% block title %}Blog{% endblock %} — Nova Blog</title>
<link rel="stylesheet" href="/assets/css/style.css">
{% block head %}{% endblock %}
</head>
<body>
<header>
<nav>
<a href="/">Home</a>
{% if auth_data %}
<a href="/posts/new">New Post</a>
<span>{{ auth_data.username }}</span>
<a href="/logout">Logout</a>
{% else %}
<a href="/login">Login</a>
<a href="/register">Register</a>
{% endif %}
</nav>
</header>
<main>
{% if flash_info %}
<div class="flash flash-info">{{ flash_info }}</div>
{% endif %}
{% if flash_error %}
<div class="flash flash-error">{{ flash_error }}</div>
{% endif %}
{% block content %}{% endblock %}
</main>
<footer>
<p>Built with Nova</p>
</footer>
</body>
</html>
Forms with validation errors
A post creation form that displays changeset errors:
{% extends "base.dtl" %}
{% block title %}New Post{% endblock %}
{% block content %}
<h1>New Post</h1>
<form action="/posts" method="post">
<input type="hidden" name="_csrf_token" value="{{ csrf_token }}" />
<div class="field">
<label for="title">Title</label>
<input type="text" id="title" name="title" value="{{ form_title|default:"" }}">
{% if errors.title %}
<span class="error">{{ errors.title }}</span>
{% endif %}
</div>
<div class="field">
<label for="body">Body</label>
<textarea id="body" name="body">{{ form_body|default:"" }}</textarea>
{% if errors.body %}
<span class="error">{{ errors.body }}</span>
{% endif %}
</div>
<div class="field">
<label for="status">Status</label>
<select id="status" name="status">
<option value="draft" {% if form_status == "draft" %}selected{% endif %}>Draft</option>
<option value="published" {% if form_status == "published" %}selected{% endif %}>Published</option>
</select>
</div>
<button type="submit">Create Post</button>
</form>
{% endblock %}
The controller re-renders the form with errors and the submitted values:
create(#{params := Params, auth_data := #{id := UserId}} = _Req) ->
Params1 = Params#{<<"user_id">> => UserId},
CS = post:changeset(#{}, Params1),
case blog_repo:insert(CS) of
{ok, Post} ->
{redirect, "/posts/" ++ integer_to_list(maps:get(id, Post))};
{error, #kura_changeset{} = CS1} ->
Errors = changeset_errors_to_json(CS1),
{ok, [{errors, Errors},
{form_title, maps:get(<<"title">>, Params, <<>>)},
{form_body, maps:get(<<"body">>, Params, <<>>)},
{form_status, maps:get(<<"status">>, Params, <<"draft">>)}],
#{view => new_post, status_code => 422}}
end.
Template includes (partials)
Extract reusable fragments with {% include %}:
src/views/_post_card.dtl:
<article class="post-card">
<h2><a href="/posts/{{ post.id }}">{{ post.title }}</a></h2>
<p>by {{ post.author.username }} — {{ post.inserted_at }}</p>
{% if post.tags %}
<div class="tags">
{% for tag in post.tags %}
<span class="tag">{{ tag.name }}</span>
{% endfor %}
</div>
{% endif %}
</article>
Use it in a listing page:
{% extends "base.dtl" %}
{% block content %}
<h1>Posts</h1>
{% for post in posts %}
{% include "_post_card.dtl" %}
{% endfor %}
{% endblock %}
Flash messages
Flash messages show one-time notifications (e.g. "Post created successfully"). Store them in the session and clear after display.
A controller sets a flash before redirecting:
create(#{params := Params} = Req) ->
case blog_repo:insert(post:changeset(#{}, Params)) of
{ok, Post} ->
set_flash(Req, flash_info, <<"Post created!">>),
{redirect, "/posts/" ++ integer_to_list(maps:get(id, Post))};
{error, CS} ->
%% re-render with errors (no flash needed)
...
end.
The helpers that store and retrieve flash values from the session:
%% Setting a flash message
set_flash(Req, Key, Message) ->
nova_session:set(Req, Key, Message).
%% Reading and clearing flash messages
get_flash(Req, Key) ->
case nova_session:get(Req, Key) of
{ok, Message} ->
nova_session:delete(Req, Key),
Message;
{error, _} ->
undefined
end.
The destination controller reads the flash and passes it to the template. The base layout (shown at the top of this chapter) renders flash_info and flash_error if present.
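For example, a show action on the destination page might call the get_flash/2 helper above and pass the result through to the template. This is a sketch; the view name show_post is illustrative:

```erlang
show(#{bindings := #{<<"id">> := Id}} = Req) ->
    {ok, Post} = blog_repo:get(post, binary_to_integer(Id)),
    %% Read-and-clear the one-time message; undefined when none was set
    Flash = get_flash(Req, flash_info),
    {ok, [{post, Post},
          {flash_info, Flash}],
     #{view => show_post}}.
```

When no flash was set, flash_info is undefined and the base layout's {% if flash_info %} block simply renders nothing.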
We now have a complete HTML frontend. Next, let's build JSON APIs with code generators.
JSON API with Generators
In the previous chapter we built a posts controller by hand. The rebar3_nova plugin includes generators that scaffold controllers, JSON schemas, and test suites so you can skip the boilerplate.
Generate a resource
The nova gen_resource command creates a controller, a JSON schema, and prints route definitions. Like gen_controller, it also accepts --actions to limit which actions are scaffolded:
rebar3 nova gen_resource --name posts
===> Writing src/controllers/blog_posts_controller.erl
===> Writing priv/schemas/post.json
Add these routes to your router:
{<<"/posts">>, {blog_posts_controller, list}, #{methods => [get]}}
{<<"/posts/:id">>, {blog_posts_controller, show}, #{methods => [get]}}
{<<"/posts">>, {blog_posts_controller, create}, #{methods => [post]}}
{<<"/posts/:id">>, {blog_posts_controller, update}, #{methods => [put]}}
{<<"/posts/:id">>, {blog_posts_controller, delete}, #{methods => [delete]}}
The generated controller
-module(blog_posts_controller).
-export([
list/1,
show/1,
create/1,
update/1,
delete/1
]).
list(_Req) ->
{json, #{<<"message">> => <<"TODO">>}}.
show(_Req) ->
{json, #{<<"message">> => <<"TODO">>}}.
create(_Req) ->
{status, 201, #{}, #{<<"message">> => <<"TODO">>}}.
update(_Req) ->
{json, #{<<"message">> => <<"TODO">>}}.
delete(_Req) ->
{status, 204}.
Every action returns a valid Nova response tuple so you can compile and run immediately.
The generated JSON schema
priv/schemas/post.json:
{
"$schema": "http://json-schema.org/draft-07/schema#",
"type": "object",
"properties": {
"id": { "type": "integer" },
"name": { "type": "string" }
},
"required": ["id", "name"]
}
Edit this to match your actual data model:
{
"$schema": "http://json-schema.org/draft-07/schema#",
"type": "object",
"properties": {
"id": { "type": "integer", "description": "Unique identifier" },
"title": { "type": "string", "description": "Post title" },
"body": { "type": "string", "description": "Post body" },
"status": { "type": "string", "enum": ["draft", "published", "archived"] },
"user_id": { "type": "integer", "description": "Author ID" }
},
"required": ["title", "body"]
}
This schema is picked up by the OpenAPI generator to produce API documentation automatically.
Filling in Kura calls
Replace the TODO stubs with actual Kura repo calls. Since we already wrote a full posts controller in the CRUD chapter, here is the pattern — generate, then fill in:
-module(blog_posts_controller).
-include_lib("kura/include/kura.hrl").
-export([
list/1,
show/1,
create/1,
update/1,
delete/1
]).
list(_Req) ->
Q = kura_query:from(post),
Q1 = kura_query:order_by(Q, [{inserted_at, desc}]),
{ok, Posts} = blog_repo:all(Q1),
{json, #{posts => [post_to_json(P) || P <- Posts]}}.
show(#{bindings := #{<<"id">> := Id}}) ->
case blog_repo:get(post, binary_to_integer(Id)) of
{ok, Post} ->
{json, post_to_json(Post)};
{error, not_found} ->
{status, 404, #{}, #{error => <<"post not found">>}}
end.
create(#{json := Params}) ->
CS = post:changeset(#{}, Params),
case blog_repo:insert(CS) of
{ok, Post} ->
{json, 201, #{}, post_to_json(Post)};
{error, #kura_changeset{} = CS1} ->
{json, 422, #{}, #{errors => changeset_errors_to_json(CS1)}}
end;
create(_Req) ->
{status, 422, #{}, #{error => <<"request body required">>}}.
update(#{bindings := #{<<"id">> := Id}, json := Params}) ->
case blog_repo:get(post, binary_to_integer(Id)) of
{ok, Post} ->
CS = post:changeset(Post, Params),
case blog_repo:update(CS) of
{ok, Updated} ->
{json, post_to_json(Updated)};
{error, #kura_changeset{} = CS1} ->
{json, 422, #{}, #{errors => changeset_errors_to_json(CS1)}}
end;
{error, not_found} ->
{status, 404, #{}, #{error => <<"post not found">>}}
end.
delete(#{bindings := #{<<"id">> := Id}}) ->
case blog_repo:get(post, binary_to_integer(Id)) of
{ok, Post} ->
CS = kura_changeset:cast(post, Post, #{}, []),
{ok, _} = blog_repo:delete(CS),
{status, 204};
{error, not_found} ->
{status, 404, #{}, #{error => <<"post not found">>}}
end.
%% Helpers
post_to_json(#{id := Id, title := Title, body := Body, status := Status,
user_id := UserId}) ->
#{id => Id, title => Title, body => Body,
status => atom_to_binary(Status), user_id => UserId}.
changeset_errors_to_json(#kura_changeset{errors = Errors}) ->
lists:foldl(fun({Field, Msg}, Acc) ->
Key = atom_to_binary(Field),
Existing = maps:get(Key, Acc, []),
Acc#{Key => Existing ++ [Msg]}
end, #{}, Errors).
Generate a test suite
The nova gen_test command scaffolds a Common Test suite:
rebar3 nova gen_test --name posts
===> Writing test/blog_posts_controller_SUITE.erl
The generated suite has test cases for each CRUD action that make HTTP requests against your running application:
-module(blog_posts_controller_SUITE).
-include_lib("common_test/include/ct.hrl").
-export([all/0, init_per_suite/1, end_per_suite/1]).
-export([test_list/1, test_show/1, test_create/1, test_update/1, test_delete/1]).
all() ->
[test_list, test_show, test_create, test_update, test_delete].
init_per_suite(Config) ->
application:ensure_all_started(inets),
application:ensure_all_started(blog),
Config.
end_per_suite(_Config) ->
application:stop(blog),
ok.
test_list(_Config) ->
{ok, {{_, 200, _}, _, _}} =
httpc:request("http://localhost:8080/posts").
test_show(_Config) ->
{ok, {{_, 200, _}, _, _}} =
httpc:request("http://localhost:8080/posts/1").
test_create(_Config) ->
{ok, {{_, 201, _}, _, _}} =
httpc:request(post, {"http://localhost:8080/posts", [],
"application/json", "{}"}, [], []).
test_update(_Config) ->
{ok, {{_, 200, _}, _, _}} =
httpc:request(put, {"http://localhost:8080/posts/1", [],
"application/json", "{}"}, [], []).
test_delete(_Config) ->
{ok, {{_, 204, _}, _, _}} =
httpc:request(delete, {"http://localhost:8080/posts/1", []}, [], []).
Update the request bodies and assertions to match your actual API. We will cover testing in detail in the Testing chapter.
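For example, test_create could send a valid post body and check the created resource. This sketch assumes thoas (Nova's default JSON library) is on the test path and that the endpoint accepts the schema used throughout the book:

```erlang
test_create(_Config) ->
    Body = thoas:encode(#{<<"title">> => <<"Hello">>,
                          <<"body">> => <<"First post">>,
                          <<"status">> => <<"draft">>}),
    {ok, {{_, 201, _}, _, RespBody}} =
        httpc:request(post, {"http://localhost:8080/posts", [],
                             "application/json", Body}, [], []),
    %% httpc returns the body as a string by default
    {ok, #{<<"title">> := <<"Hello">>}} =
        thoas:decode(list_to_binary(RespBody)).
```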
Other generators
Generate a controller with specific actions:
rebar3 nova gen_controller --name comments --actions list,create
===> Writing src/controllers/blog_comments_controller.erl
Typical workflow
Adding a new resource to your API:
# 1. Define the Kura schema
vi src/schemas/comment.erl
# 2. Compile to generate the migration
rebar3 compile
# 3. Generate the resource (controller + schema + route hints)
rebar3 nova gen_resource --name comments
# 4. Copy the printed routes into your router
# 5. Fill in the Kura repo calls in the controller
# 6. Generate a test suite
rebar3 nova gen_test --name comments
# 7. Run the tests
rebar3 ct
Generate, fill in the Kura calls, test. Three steps to a working API.
Our posts API works with flat data. Next, let's generate API documentation and inspect our application with Nova's built-in tools.
OpenAPI, Inspection & Audit
The rebar3_nova plugin includes tools for generating API documentation, inspecting your application's configuration, and auditing security. This chapter covers all three.
OpenAPI documentation
Prerequisites
For the OpenAPI generator to produce schema definitions, you need JSON schema files in priv/schemas/. If you used nova gen_resource (see JSON API with Generators) these were created for you. Otherwise create them by hand:
mkdir -p priv/schemas
priv/schemas/post.json:
{
"$schema": "http://json-schema.org/draft-07/schema#",
"type": "object",
"properties": {
"id": { "type": "integer", "description": "Unique identifier" },
"title": { "type": "string", "description": "Post title" },
"body": { "type": "string", "description": "Post body" },
"status": { "type": "string", "enum": ["draft", "published", "archived"] }
},
"required": ["title", "body"]
}
Generating the spec
Run the OpenAPI generator:
rebar3 nova openapi
===> Generated openapi.json
===> Generated swagger.html
This reads your compiled routes and JSON schemas, then produces two files:
- openapi.json — the OpenAPI 3.0.3 specification
- swagger.html — a standalone Swagger UI page

Customize the output:
rebar3 nova openapi \
--output priv/assets/openapi.json \
--title "Blog API" \
--api-version 1.0.0
| Flag | Default | Description |
|---|---|---|
| --output | openapi.json | Output file path |
| --title | app name | API title in the spec |
| --api-version | 0.1.0 | API version string |
What gets generated
The generator inspects every route registered with Nova. For each route it creates a path entry with the correct HTTP method, operation ID, path parameters, and response schema. It skips static file handlers and error controllers.
A snippet from a generated spec:
{
"openapi": "3.0.3",
"info": {
"title": "Blog API",
"version": "1.0.0"
},
"paths": {
"/api/posts": {
"get": {
"operationId": "blog_posts_controller.list",
"responses": {
"200": {
"description": "OK",
"content": {
"application/json": {
"schema": { "$ref": "#/components/schemas/post" }
}
}
}
}
},
"post": {
"operationId": "blog_posts_controller.create",
"requestBody": {
"content": {
"application/json": {
"schema": { "$ref": "#/components/schemas/post" }
}
}
},
"responses": {
"201": { "description": "Created" }
}
}
}
}
}
Swagger UI
The generated swagger.html loads the Swagger UI from a CDN and points it at your openapi.json. If you place both files in priv/assets/, you can serve them through Nova by adding a static route:
{"/docs/[...]", "priv/assets", #{}}
Then navigate to http://localhost:8080/docs/swagger.html to browse your API interactively.
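For reference, the static route can sit in its own route group in the router. This is a sketch following the route shapes used earlier in the book:

```erlang
routes(_Environment) ->
    [
     #{prefix => "",
       security => false,
       routes => [
           %% Serves priv/assets under /docs, so the generated files are
           %% reachable at /docs/swagger.html and /docs/openapi.json
           {"/docs/[...]", "priv/assets", #{}}
       ]}
    ].
```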
Auto-generating on release
The nova release command automatically regenerates the OpenAPI spec before building a release:
rebar3 nova release
===> Generated priv/assets/openapi.json
===> Generated priv/assets/swagger.html
===> Release successfully assembled: _build/prod/rel/blog
This means your deployed application always has up-to-date API documentation bundled in.
Inspection tools
View configuration
The nova config command displays all Nova configuration values with their defaults:
rebar3 nova config
=== Nova Configuration ===
bootstrap_application blog
environment dev
cowboy_configuration #{port => 8080}
plugins [{pre_request,nova_request_plugin,
#{decode_json_body => true,
read_urlencoded_body => true}}]
json_lib thoas (default)
use_stacktrace true
dispatch_backend persistent_term (default)
Keys showing (default) are using the built-in default rather than an explicit setting.
| Key | Default | Description |
|---|---|---|
| bootstrap_application | (required) | Main application to bootstrap |
| environment | dev | Current environment |
| cowboy_configuration | #{port => 8080} | Cowboy listener settings |
| plugins | [] | Global middleware plugins |
| json_lib | thoas | JSON encoding library |
| use_stacktrace | false | Include stacktraces in error responses |
| dispatch_backend | persistent_term | Backend for route dispatch storage |
Inspect middleware chains
The nova middleware command shows the global and per-route-group plugin chains:
rebar3 nova middleware
=== Global Plugins ===
pre_request: nova_request_plugin #{decode_json_body => true,
read_urlencoded_body => true}
=== Route Groups (blog_router) ===
Group: prefix= security=false
Plugins:
(inherits global)
Routes:
GET /login -> blog_main_controller:login
GET /heartbeat -> (inline fun)
Group: prefix=/api security=false
Plugins:
(inherits global)
Routes:
GET /posts -> blog_posts_controller:list
POST /posts -> blog_posts_controller:create
GET /posts/:id -> blog_posts_controller:show
PUT /posts/:id -> blog_posts_controller:update
DELETE /posts/:id -> blog_posts_controller:delete
Listing routes
The nova routes command displays the compiled routing tree:
rebar3 nova routes
Host: '_'
├─ /api
│ ├─ GET /posts (blog, blog_posts_controller:list/1)
│ ├─ GET /posts/:id (blog, blog_posts_controller:show/1)
│ ├─ POST /posts (blog, blog_posts_controller:create/1)
│ ├─ PUT /posts/:id (blog, blog_posts_controller:update/1)
│ └─ DELETE /posts/:id (blog, blog_posts_controller:delete/1)
├─ GET /login (blog, blog_main_controller:login/1)
└─ GET /heartbeat
Security audit
The nova audit command scans your routes and flags potential security issues:
rebar3 nova audit
=== Security Audit ===
WARNINGS:
POST /api/posts (blog_posts_controller) has no security
PUT /api/posts/:id (blog_posts_controller) has no security
DELETE /api/posts/:id (blog_posts_controller) has no security
INFO:
GET /login (blog_main_controller) has no security
GET /heartbeat has no security
GET /api/posts (blog_posts_controller) has no security
Summary: 3 warning(s), 3 info(s)
The audit classifies findings into two levels:
- WARNINGS — mutation methods (POST, PUT, DELETE, PATCH) without security, wildcard method handlers
- INFO — GET routes without security (common for public endpoints but worth reviewing)
Run rebar3 nova audit before deploying to make sure you haven't left endpoints unprotected by mistake.
To fix the warnings, add a security callback to the route group:
#{prefix => "/api",
security => fun blog_auth:validate_token/1,
routes => [
{"/posts", fun blog_posts_controller:list/1, #{methods => [get]}},
{"/posts/:id", fun blog_posts_controller:show/1, #{methods => [get]}},
{"/posts", fun blog_posts_controller:create/1, #{methods => [post]}},
{"/posts/:id", fun blog_posts_controller:update/1, #{methods => [put]}},
{"/posts/:id", fun blog_posts_controller:delete/1, #{methods => [delete]}}
]}
Command summary
| Command | Purpose |
|---|---|
| rebar3 nova openapi | Generate OpenAPI 3.0.3 spec + Swagger UI |
| rebar3 nova config | Show Nova configuration with defaults |
| rebar3 nova middleware | Show global and per-group plugin chains |
| rebar3 nova audit | Find routes missing security callbacks |
| rebar3 nova routes | Display the compiled routing tree |
| rebar3 nova release | Build release with auto-generated OpenAPI |
Use config to verify settings, middleware to trace request processing, audit to check security coverage, and routes to see the endpoint map.
Next, let's learn about error handling — custom error pages, JSON error responses, and fallback controllers.
Error Handling
When something goes wrong, you want to show a useful error page instead of a cryptic response. Let's look at how Nova handles errors and how to create custom error pages.
Nova's default error handling
Nova comes with default handlers for 404 (not found) and 500 (server error) responses. In development mode, 500 errors show crash details. In production they return a bare status code.
Status code routes
Nova lets you register custom handlers for specific HTTP status codes directly in your router. Use a status code integer instead of a path:
routes(_Environment) ->
[
#{routes => [
{404, fun blog_error_controller:not_found/1, #{}},
{500, fun blog_error_controller:server_error/1, #{}}
]},
#{prefix => "",
security => false,
routes => [
{"/", fun blog_main_controller:index/1, #{methods => [get]}},
{"/heartbeat", fun(_) -> {status, 200} end, #{methods => [get]}}
]
}
].
Your status code handlers override Nova's defaults because your routes are compiled after Nova's built-in routes.
Creating an error controller
Create src/controllers/blog_error_controller.erl:
-module(blog_error_controller).
-export([
not_found/1,
server_error/1
]).
not_found(_Req) ->
{ok, [{title, <<"404 - Not Found">>},
{message, <<"The page you are looking for does not exist.">>}],
#{view => error_page, status_code => 404}}.
server_error(_Req) ->
{ok, [{title, <<"500 - Server Error">>},
{message, <<"Something went wrong. Please try again later.">>}],
#{view => error_page, status_code => 500}}.
The status_code option in the return map sets the HTTP status code on the response.
Error view template
Create src/views/error_page.dtl:
<html>
<head><title>{{ title }}</title></head>
<body>
<h1>{{ title }}</h1>
<p>{{ message }}</p>
<a href="/">Go back home</a>
</body>
</html>
JSON error responses
For APIs, return JSON instead of HTML. Check the Accept header to decide; match on a substring, since clients typically send a list of media types rather than exactly application/json:
not_found(Req) ->
Accept = cowboy_req:header(<<"accept">>, Req, <<>>),
case binary:match(Accept, <<"application/json">>) of
{_, _} ->
{json, 404, #{}, #{error => <<"not_found">>,
message => <<"Resource not found">>}};
nomatch ->
{ok, [{title, <<"404">>}, {message, <<"Page not found">>}],
#{view => error_page, status_code => 404}}
end.
Rendering changeset errors as JSON
When using Kura, changeset validation errors are structured data. Use the changeset_errors_to_json helper from the Changesets chapter to convert errors into a JSON-friendly map.
Use it in your controllers:
create(#{json := Params}) ->
CS = post:changeset(#{}, Params),
case blog_repo:insert(CS) of
{ok, Post} ->
{json, 201, #{}, post_to_json(Post)};
{error, #kura_changeset{} = CS1} ->
{json, 422, #{}, #{errors => changeset_errors_to_json(CS1)}}
end.
This returns errors like {"errors": {"title": ["can't be blank"], "email": ["has already been taken"]}}.
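To make the shape concrete, here is the mapping the changeset_errors_to_json/1 helper performs for two hypothetical errors:

```erlang
%% Input changeset (hypothetical errors list):
CS = #kura_changeset{errors = [{title, <<"can't be blank">>},
                               {email, <<"has already been taken">>}]},
%% The fold groups messages by field name:
#{<<"title">> := [<<"can't be blank">>],
  <<"email">> := [<<"has already been taken">>]} =
    changeset_errors_to_json(CS).
```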
Handling controller crashes
When a controller crashes, Nova catches the exception and triggers the 500 handler. The request map passed to your error controller will contain crash_info:
server_error(#{crash_info := CrashInfo} = _Req) ->
logger:error("Controller crash: ~p", [CrashInfo]),
{ok, [{title, <<"500">>},
{message, <<"Internal server error">>}],
#{view => error_page, status_code => 500}};
server_error(_Req) ->
{ok, [{title, <<"500">>},
{message, <<"Internal server error">>}],
#{view => error_page, status_code => 500}}.
More status codes
Register handlers for any HTTP status code:
#{routes => [
{400, fun blog_error_controller:bad_request/1, #{}},
{401, fun blog_error_controller:unauthorized/1, #{}},
{403, fun blog_error_controller:forbidden/1, #{}},
{404, fun blog_error_controller:not_found/1, #{}},
{500, fun blog_error_controller:server_error/1, #{}}
]}
bad_request(_Req) ->
{json, 400, #{}, #{error => <<"bad_request">>}}.
unauthorized(_Req) ->
{json, 401, #{}, #{error => <<"unauthorized">>}}.
forbidden(_Req) ->
{json, 403, #{}, #{error => <<"forbidden">>}}.
Error flow in the pipeline
Here is how errors flow through Nova:
- Route not found — triggers the 404 handler
- Security function returns false — triggers the 401 handler
- Controller crashes — Nova catches the exception, triggers the 500 handler
- Plugin returns {error, Reason} — triggers the 500 handler
- Controller returns {status, Code} — if a handler is registered for that code, it is used
For each case, Nova looks up your registered status code handler. If none is registered, it falls back to its own default.
Fallback controllers
If a controller returns an unrecognized value, Nova can delegate to a fallback controller:
-module(blog_posts_controller).
-fallback_controller(blog_error_controller).
-export([index/1]).
index(_Req) ->
case do_something() of
{ok, Data} -> {json, Data};
unexpected_value -> unexpected_value %% Goes to fallback
end.
The fallback module needs resolve/2:
resolve(_Req, InvalidReturn) ->
logger:warning("Unexpected controller return: ~p", [InvalidReturn]),
{status, 500, #{}, #{error => <<"internal server error">>}}.
Disabling error page rendering
To skip Nova's error page rendering entirely:
{nova, [
{render_error_pages, false}
]}
With error handling in place, our application is more robust. Next, let's explore real-time web interfaces with Arizona Fundamentals.
Arizona Fundamentals
So far our blog renders HTML on the server and sends complete pages to the browser. For real-time interactivity — updating a comment section without a page reload, live form validation, instant notifications — we need something more. Arizona is a real-time web framework for Erlang, inspired by Phoenix LiveView, that brings server-rendered interactivity to Nova applications.
Arizona requires OTP 28+. Check your Erlang version with erl -eval 'io:format("~s~n", [erlang:system_info(otp_release)]), halt().' -noshell.
What Arizona does
Arizona keeps a persistent WebSocket connection between the browser and the server. When state changes on the server, Arizona:
- Computes which parts of the HTML changed (differential rendering)
- Sends only the changed parts over the WebSocket
- The client patches the DOM using morphdom
No full page reloads. No client-side framework. The state lives on the server in an Erlang process.
How it works under the hood
Arizona uses compile-time template optimization via Erlang parse transforms. When you write a template:
render(Bindings) ->
arizona_template:from_html(~"""
<h1>Hello, {arizona_template:get_binding(name, Bindings)}!</h1>
<p>You have {integer_to_list(arizona_template:get_binding(count, Bindings))} messages.</p>
""").
At compile time, Arizona:
- Parses the template into static and dynamic segments
- Tracks which variables each segment depends on
- Generates code that only re-renders segments whose dependencies changed
This means when count changes but name doesn't, only the second <p> is re-rendered and sent to the client.
Templates use OTP 28 sigil strings (~"..." or ~"""...""" for multi-line). Dynamic expressions are wrapped in {...} inside the template. All Arizona modules must include -compile({parse_transform, arizona_parse_transform}). for the template system to work.
Adding Arizona to your project
Add the dependency to rebar.config:
{deps, [
nova,
{kura, "~> 1.0"},
{arizona_core, {git, "https://github.com/Taure/arizona_core.git", {branch, "main"}}}
]}.
Add arizona_core to your application dependencies in src/blog.app.src:
{applications,
[kernel,
stdlib,
nova,
kura,
arizona_core
]},
Template syntax
Arizona supports three template syntaxes. The HTML sigil string syntax is the most common:
%% Embedded Erlang expressions with {...}
arizona_template:from_html(~"""
<div class="post">
<h2>{arizona_template:get_binding(title, Bindings)}</h2>
<p>{arizona_template:get_binding(body, Bindings)}</p>
</div>
""")
Expressions inside {...} are evaluated at render time and tracked for differential updates. Use arizona_template:get_binding/2 to access bindings with automatic dependency tracking.
The connection lifecycle
- Browser requests a page — Nova renders the initial HTML and sends a full page
- Arizona's JavaScript client opens a WebSocket connection
- The server spawns an arizona_live GenServer for this connection
- User interactions trigger events sent over the WebSocket
- The server processes events, updates state, and pushes DOM diffs back
- The client patches the DOM
Each connected user has their own server-side process holding their state — true server-rendered interactivity.
Next, let's build our first live view in Live Views.
Live Views
A live view is a server-side process that renders HTML and responds to user events. It's the core building block of Arizona.
Creating a live view
A live view implements the arizona_view behaviour with two required callbacks: mount/2 and render/1.
-module(blog_counter_live).
-compile({parse_transform, arizona_parse_transform}).
-behaviour(arizona_view).
-export([mount/2, render/1, handle_event/3]).
mount(_Params, _Req) ->
arizona_view:new(?MODULE, #{
id => ~"counter",
count => 0
}, none).
render(Bindings) ->
arizona_template:from_html(~"""
<div>
<h1>Count: {integer_to_list(arizona_template:get_binding(count, Bindings))}</h1>
<button az-click="increment">+1</button>
<button az-click="decrement">-1</button>
</div>
""").
handle_event(<<"increment">>, _Params, View) ->
State = arizona_view:get_state(View),
Count = arizona_stateful:get_binding(count, State),
NewState = arizona_stateful:put_binding(count, Count + 1, State),
{[], arizona_view:update_state(NewState, View)};
handle_event(<<"decrement">>, _Params, View) ->
State = arizona_view:get_state(View),
Count = arizona_stateful:get_binding(count, State),
NewState = arizona_stateful:put_binding(count, Count - 1, State),
{[], arizona_view:update_state(NewState, View)}.
mount/2
Called when the live view is first loaded. Receives the mount argument and an Arizona request, and returns a new view created with arizona_view:new/3. The third argument is a layout module (none for no layout).
render/1
Called whenever state changes. Receives the current bindings as a map and returns an Arizona template. Arizona diffs the output against the previous render and only sends changes to the client. Use arizona_template:get_binding/2 to access bindings — this enables Arizona's dependency tracking for differential updates.
handle_event/3
Called when the user triggers an event (click, form submit, key press). Receives the event name, event parameters, and current view. Returns {Actions, UpdatedView} where Actions is a list of action tuples (empty list for no actions).
State is managed through the arizona_stateful API:
- arizona_view:get_state(View) — get the stateful state from the view
- arizona_stateful:get_binding(Key, State) — read a binding
- arizona_stateful:put_binding(Key, Value, State) — update a binding
- arizona_view:update_state(State, View) — put the updated state back into the view
Event bindings
Arizona uses az- attributes to bind DOM events to server-side handlers:
| Attribute | Triggers on |
|---|---|
| az-click | Click |
| az-submit | Form submission |
| az-change | Input change |
| az-keydown | Key press |
| az-keyup | Key release |
| az-focus | Element focus |
| az-blur | Element blur |
<button az-click="delete" az-value-id="42">Delete</button>
The az-value-* attributes send additional data with the event. In this case, handle_event receives #{<<"id">> => <<"42">>} as the params.
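A matching handler clause can pattern-match that value out of the params. This sketch converts the binary id and delegates to a hypothetical delete_post/1 helper:

```erlang
handle_event(<<"delete">>, #{<<"id">> := Id}, View) ->
    %% az-value-id arrives as a binary, e.g. #{<<"id">> => <<"42">>}
    ok = delete_post(binary_to_integer(Id)),  %% hypothetical helper
    {[], View}.
```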
Routing a live view
Add the live view to your Nova router:
#{prefix => "",
security => false,
routes => [
{"/counter", blog_counter_live, #{protocol => live_view}}
]}
The protocol => live_view option tells Nova to handle this route with Arizona's live view protocol.
A blog-relevant example: live post editor
-module(blog_post_editor_live).
-compile({parse_transform, arizona_parse_transform}).
-behaviour(arizona_view).
-export([mount/2, render/1, handle_event/3]).
mount(#{<<"id">> := PostId}, _Req) ->
{ok, Post} = blog_repo:get(post, binary_to_integer(PostId)),
arizona_view:new(?MODULE, #{
id => ~"post_editor",
post => Post,
editing => false,
saved => false
}, none).
render(Bindings) ->
case arizona_template:get_binding(editing, Bindings) of
false ->
Post = arizona_template:get_binding(post, Bindings),
arizona_template:from_html(~"""
<article>
<h1>{maps:get(title, Post)}</h1>
<div>{maps:get(body, Post)}</div>
<button az-click="edit">Edit</button>
</article>
""");
true ->
Post = arizona_template:get_binding(post, Bindings),
arizona_template:from_html(~"""
<form az-submit="save">
<input type="text" name="title"
value="{maps:get(title, Post)}" />
<textarea name="body">{maps:get(body, Post)}</textarea>
<button type="submit">Save</button>
<button type="button" az-click="cancel">Cancel</button>
</form>
""")
end.
handle_event(<<"edit">>, _Params, View) ->
State = arizona_view:get_state(View),
NewState = arizona_stateful:put_binding(editing, true, State),
{[], arizona_view:update_state(NewState, View)};
handle_event(<<"cancel">>, _Params, View) ->
State = arizona_view:get_state(View),
NewState = arizona_stateful:put_binding(editing, false, State),
{[], arizona_view:update_state(NewState, View)};
handle_event(<<"save">>, Params, View) ->
State = arizona_view:get_state(View),
Post = arizona_stateful:get_binding(post, State),
CS = post:changeset(Post, Params),
case blog_repo:update(CS) of
{ok, Updated} ->
S1 = arizona_stateful:put_binding(post, Updated, State),
S2 = arizona_stateful:put_binding(editing, false, S1),
S3 = arizona_stateful:put_binding(saved, true, S2),
{[], arizona_view:update_state(S3, View)};
{error, _CS} ->
{[], View}
end.
The form submits over the WebSocket — no HTTP round trip, no page reload. The state updates and Arizona re-renders just the changed parts.
Next, let's build reusable UI pieces with Components.
Components
Live views render entire pages. Components extract reusable pieces — a comment form, a notification badge, a tag selector. Arizona has two types: stateful components (with their own state and event handlers) and stateless components (pure render functions).
Stateless components
A stateless component is a function that takes bindings and returns a template. It has no state and no event handling — just rendering.
-module(blog_components).
-compile({parse_transform, arizona_parse_transform}).
-export([post_card/1, tag_badge/1, user_avatar/1]).
post_card(Bindings) ->
Post = maps:get(post, Bindings),
arizona_template:from_html(~"""
<article class="post-card">
<h2>{maps:get(title, Post)}</h2>
<p class="meta">by {maps:get(username, maps:get(author, Post))}</p>
<p>{binary:part(maps:get(body, Post), 0, min(200, byte_size(maps:get(body, Post))))}...</p>
</article>
""").
tag_badge(Bindings) ->
Tag = maps:get(tag, Bindings),
arizona_template:from_html(~"""
<span class="tag">{maps:get(name, Tag)}</span>
""").
user_avatar(Bindings) ->
User = maps:get(user, Bindings),
arizona_template:from_html(~"""
<div class="avatar">
<span>{binary:part(maps:get(username, User), 0, 1)}</span>
</div>
""").
Use stateless components in a live view with arizona_template:render_stateless/3:
render(Bindings) ->
arizona_template:from_html(~"""
<div class="post-list">
{arizona_template:render_list(
arizona_template:get_binding(posts, Bindings),
fun(P) ->
arizona_template:render_stateless(blog_components, post_card, #{post => P})
end)}
</div>
""").
Stateful components
A stateful component has its own state, handles its own events, and re-renders independently of its parent. Each stateful component must have a unique id binding.
-module(blog_comment_form).
-compile({parse_transform, arizona_parse_transform}).
-behaviour(arizona_stateful).
-export([mount/1, render/1, handle_event/3]).
mount(Bindings) ->
arizona_stateful:new(?MODULE, #{
id => maps:get(id, Bindings),
post_id => maps:get(post_id, Bindings),
body => <<>>,
error => undefined
}).
render(Bindings) ->
arizona_template:from_html(~"""
<div id="comment-form">
{case arizona_template:get_binding(error, Bindings) of
undefined -> ~"";
Err -> arizona_template:from_html(~"<p class='error'>{Err}</p>")
end}
<form az-submit="submit_comment">
<textarea name="body" az-change="validate"
placeholder="Write a comment...">{arizona_template:get_binding(body, Bindings)}</textarea>
<button type="submit">Post Comment</button>
</form>
</div>
""").
handle_event(<<"validate">>, #{<<"body">> := Body}, State) ->
Error = case byte_size(Body) of
0 -> <<"Comment cannot be empty">>;
_ -> undefined
end,
S1 = arizona_stateful:put_binding(body, Body, State),
S2 = arizona_stateful:put_binding(error, Error, S1),
{[], S2};
handle_event(<<"submit_comment">>, #{<<"body">> := Body}, State) ->
PostId = arizona_stateful:get_binding(post_id, State),
CS = comment:changeset(#{}, #{<<"body">> => Body, <<"post_id">> => PostId}),
case blog_repo:insert(CS) of
{ok, _Comment} ->
S1 = arizona_stateful:put_binding(body, <<>>, State),
S2 = arizona_stateful:put_binding(error, undefined, S1),
{[], S2};
{error, _} ->
S1 = arizona_stateful:put_binding(error, <<"Failed to post comment">>, State),
{[], S1}
end.
Embed a stateful component in a live view with arizona_template:render_stateful/2:
render(Bindings) ->
Post = arizona_template:get_binding(post, Bindings),
arizona_template:from_html(~"""
<article>
<h1>{maps:get(title, Post)}</h1>
{arizona_template:render_stateful(blog_comment_form, #{
id => ~"comment-form",
post_id => maps:get(id, Post)
})}
</article>
""").
The id is required — Arizona uses it to track the component instance across re-renders.
Slots
Slots let components accept nested content from their parent, enabling flexible composition:
-module(blog_card).
-compile({parse_transform, arizona_parse_transform}).
-export([render/1]).
render(Bindings) ->
arizona_template:from_html(~"""
<div class="card">
<div class="card-header">
<h3>{maps:get(title, Bindings)}</h3>
</div>
<div class="card-body">
{arizona_template:render_slot(maps:get(inner_content, Bindings))}
</div>
</div>
""").
When to use which
| Use case | Component type |
|---|---|
| Display-only UI pieces (badges, cards, avatars) | Stateless |
| Interactive forms, toggles, dropdowns | Stateful |
| UI that needs its own event handling | Stateful |
| Layout wrappers, formatting helpers | Stateless |
Stateless components re-render when their parent re-renders. Stateful components re-render independently — only when their own state changes.
Next, let's handle user interactions in depth with Events & Interactivity.
Events & Interactivity
Arizona's event system connects user interactions in the browser to server-side Erlang functions. Every click, form submission, and key press travels over the WebSocket to your handle_event/3 callback.
Event handling
The handle_event/3 callback receives three arguments:
handle_event(EventName, Params, View) ->
{Actions, UpdatedView}.
State is managed through the arizona_stateful and arizona_view APIs — get the state from the view, update bindings, and put it back.
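As a minimal sketch of that round-trip (the open binding is hypothetical), a toggle handler looks like this:

```erlang
%% Minimal sketch of the get/update/put state round-trip.
%% The `open` binding is a hypothetical example.
handle_event(<<"toggle">>, _Params, View) ->
    State = arizona_view:get_state(View),
    Open = arizona_stateful:get_binding(open, State),
    NewState = arizona_stateful:put_binding(open, not Open, State),
    {[], arizona_view:update_state(NewState, View)}.
```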
Return values
The return is always {Actions, View} where Actions is a list:
| Action | Effect |
|---|---|
| [] | No actions — just update state and re-render |
| [{redirect, Path}] | Navigate to a new page |
| [{dispatch, Event, Payload}] | Dispatch an event to another component |
Form handling
Forms are the most common interactive pattern:
render(Bindings) ->
CS = arizona_template:get_binding(changeset, Bindings),
Errors = arizona_template:get_binding(errors, Bindings),
arizona_template:from_html(~"""
<form az-submit="save" az-change="validate">
<input type="text" name="title"
value="{maps:get(title, kura_changeset:apply_changes(CS))}" />
{render_error(Errors, title)}
<textarea name="body">{maps:get(body, kura_changeset:apply_changes(CS))}</textarea>
{render_error(Errors, body)}
<button type="submit">Save</button>
</form>
""").
handle_event(<<"validate">>, Params, View) ->
State = arizona_view:get_state(View),
CS = post:changeset(#{}, Params),
Errors = changeset_errors_to_json(CS),
S1 = arizona_stateful:put_binding(changeset, CS, State),
S2 = arizona_stateful:put_binding(errors, Errors, S1),
{[], arizona_view:update_state(S2, View)};
handle_event(<<"save">>, Params, View) ->
CS = post:changeset(#{}, Params),
case blog_repo:insert(CS) of
{ok, Post} ->
{[{redirect, "/posts/" ++ integer_to_list(maps:get(id, Post))}], View};
{error, CS1} ->
State = arizona_view:get_state(View),
S1 = arizona_stateful:put_binding(changeset, CS1, State),
S2 = arizona_stateful:put_binding(errors, changeset_errors_to_json(CS1), S1),
{[], arizona_view:update_state(S2, View)}
end.
The render_error/2 helper formats a field's error for display:
render_error(Errors, Field) ->
case maps:get(atom_to_binary(Field), Errors, []) of
[] -> <<>>;
[Msg | _] -> arizona_template:from_html(~"<span class=\"error\">{Msg}</span>")
end.
The az-change attribute triggers validation on every input change — giving users instant feedback without a form submission.
Passing values with events
Use az-value-* attributes to send data with events:
<button az-click="delete" az-value-id="42" az-value-type="post">Delete</button>
In handle_event:
handle_event(<<"delete">>, #{<<"id">> := Id, <<"type">> := Type}, View) ->
...
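For illustration, one possible body — a sketch only, since blog_repo:delete/2 and the posts binding are assumptions not shown elsewhere in this chapter:

```erlang
%% Sketch — blog_repo:delete/2 and the `posts` binding are assumed.
handle_event(<<"delete">>, #{<<"id">> := IdBin, <<"type">> := <<"post">>}, View) ->
    Id = binary_to_integer(IdBin),
    ok = blog_repo:delete(post, Id),
    State = arizona_view:get_state(View),
    Posts = arizona_stateful:get_binding(posts, State),
    Remaining = [P || P <- Posts, maps:get(id, P) =/= Id],
    NewState = arizona_stateful:put_binding(posts, Remaining, State),
    {[], arizona_view:update_state(NewState, View)}.
```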
Key events
<input type="text" az-keydown="search" az-debounce="300" />
The az-debounce attribute delays the event by the specified milliseconds — useful for search-as-you-type to avoid flooding the server.
handle_event(<<"search">>, #{<<"value">> := Query}, View) ->
Q = kura_query:from(post),
Q1 = kura_query:where(Q, {title, ilike, <<"%", Query/binary, "%">>}),
{ok, Results} = blog_repo:all(Q1),
State = arizona_view:get_state(View),
NewState = arizona_stateful:put_binding(results, Results, State),
{[], arizona_view:update_state(NewState, View)}.
Client-side JavaScript hooks
Arizona exposes a JavaScript API for pushing events from custom JS code:
// Push an event to the live view
arizona.pushEvent("my_event", {key: "value"});
// Push to a specific component by ID
arizona.pushEventTo("#comment-form", "submit", {body: "Hello"});
// Call an event and get a reply
const result = await arizona.callEvent("get_data", {id: 42});
// Call on a specific component
const result = await arizona.callEventFrom("#search", "search", {q: "nova"});
On the server:
handle_event(<<"get_data">>, #{<<"id">> := Id}, View) ->
{ok, Post} = blog_repo:get(post, binary_to_integer(Id)),
{[{dispatch, <<"get_data_reply">>, #{title => maps:get(title, Post)}}], View}.
Actions
Actions let you trigger side effects alongside state updates:
handle_event(<<"publish">>, _Params, View) ->
State = arizona_view:get_state(View),
Post = arizona_stateful:get_binding(post, State),
CS = post:changeset(Post, #{<<"status">> => <<"published">>}),
{ok, Updated} = blog_repo:update(CS),
NewState = arizona_stateful:put_binding(post, Updated, State),
Actions = [
{dispatch, <<"post_published">>, #{id => maps:get(id, Updated)}},
{redirect, "/posts/" ++ integer_to_list(maps:get(id, Updated))}
],
{Actions, arizona_view:update_state(NewState, View)}.
Next: Live Navigation — navigating between live views without full page reloads.
Live Navigation
Traditional page navigation triggers a full HTTP request-response cycle. Arizona supports live navigation — moving between live views over the existing WebSocket connection, preserving the connection state and avoiding full page reloads.
Live redirects
A live redirect navigates to a new live view, replacing the current one:
handle_event(<<"go_to_post">>, #{<<"id">> := Id}, View) ->
{[{redirect, "/posts/" ++ binary_to_list(Id)}], View}.
In templates, use az-live-redirect:
<a href="/posts/42" az-live-redirect>View Post</a>
When the user clicks this link:
- The browser updates the URL (pushState)
- The WebSocket sends a navigation event
- The server mounts the new live view
- Arizona sends the new HTML over the WebSocket
- The client patches the DOM
No HTTP request. No page flash.
Live patches
A live patch updates the URL and re-triggers mount/2 on the same live view. Useful for filtering, pagination, and search:
<a href="/posts?page=2" az-live-patch>Page 2</a>
<a href="/posts?status=published" az-live-patch>Published</a>
mount(Params, _Req) ->
Page = binary_to_integer(maps:get(<<"page">>, Params, <<"1">>)),
Status = maps:get(<<"status">>, Params, <<"all">>),
Q = build_query(Status, Page),
{ok, Posts} = blog_repo:all(Q),
arizona_view:new(?MODULE, #{
id => ~"post_list",
posts => Posts,
page => Page,
status => Status
}, none).
The same live view, different URL, different state. The WebSocket connection stays alive.
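The build_query/2 helper above is left to the reader. One possible sketch — kura_query:limit/2, kura_query:offset/2, and the eq operator are assumptions, so check them against your Kura version:

```erlang
%% Sketch of build_query/2. limit/2, offset/2, and `eq` are assumed names.
-define(PAGE_SIZE, 10).

build_query(<<"all">>, Page) ->
    paginate(kura_query:from(post), Page);
build_query(Status, Page) ->
    Q = kura_query:where(kura_query:from(post), {status, eq, Status}),
    paginate(Q, Page).

paginate(Q, Page) ->
    kura_query:offset(kura_query:limit(Q, ?PAGE_SIZE), (Page - 1) * ?PAGE_SIZE).
```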
URL-driven state
Live patches make the URL the source of truth for view state. This means:
- Back/forward buttons work
- URLs are shareable and bookmarkable
- Browser history is preserved
handle_event(<<"filter">>, #{<<"status">> := Status}, View) ->
Path = "/posts?status=" ++ binary_to_list(Status),
{[{patch, Path}], View}.
The {patch, Path} action updates the URL and re-mounts with the new params.
Regular navigation
For navigating to non-live-view pages (traditional Nova controllers), use regular links:
<a href="/about">About</a>
This triggers a normal HTTP navigation — a full page load. Use live navigation only between live views.
With Arizona covered, let's look at the underlying WebSocket infrastructure that powers both raw WebSocket handlers and Arizona's live connections.
WebSockets
HTTP request-response works well for most operations, but sometimes you need real-time, bidirectional communication. Nova has built-in WebSocket support through the nova_websocket behaviour. We will use it to build a live comments handler for our blog.
Creating a WebSocket handler
A WebSocket handler implements three callbacks: init/1, websocket_handle/2, and websocket_info/2.
Create src/controllers/blog_ws_handler.erl:
-module(blog_ws_handler).
-behaviour(nova_websocket).
-export([
init/1,
websocket_handle/2,
websocket_info/2
]).
init(State) ->
{ok, State}.
websocket_handle({text, Msg}, State) ->
{reply, {text, <<"Echo: ", Msg/binary>>}, State};
websocket_handle(_Frame, State) ->
{ok, State}.
websocket_info(_Info, State) ->
{ok, State}.
The callbacks:
- init/1 — called when the WebSocket connection is established. Return {ok, State} to accept.
- websocket_handle/2 — called when a message arrives from the client. Return {reply, Frame, State} to send a response, {ok, State} to do nothing, or {stop, State} to close.
- websocket_info/2 — called when the handler process receives an Erlang message (not a WebSocket frame). Useful for receiving pub/sub notifications from other processes.
Adding the route
WebSocket routes use the module name as an atom (not a fun reference) and set protocol => ws:
{"/ws", blog_ws_handler, #{protocol => ws}}
Add it to your public routes:
#{prefix => "",
security => false,
routes => [
{"/login", fun blog_main_controller:login/1, #{methods => [get]}},
{"/heartbeat", fun(_) -> {status, 200} end, #{methods => [get]}},
{"/ws", blog_ws_handler, #{protocol => ws}}
]
}
Testing the WebSocket
Start the node with rebar3 nova serve and test from a browser console:
let ws = new WebSocket("ws://localhost:8080/ws");
ws.onmessage = (e) => console.log(e.data);
ws.onopen = () => ws.send("Hello Nova!");
// Should log: "Echo: Hello Nova!"
A live comments handler
Let's build something more practical — a handler that broadcasts new comments to all connected clients using nova_pubsub.
Create src/controllers/blog_comments_ws_handler.erl:
-module(blog_comments_ws_handler).
-behaviour(nova_websocket).
-export([
init/1,
websocket_handle/2,
websocket_info/2
]).
init(State) ->
nova_pubsub:join(comments),
{ok, State}.
websocket_handle({text, Msg}, State) ->
nova_pubsub:broadcast(comments, "new_comment", Msg),
{ok, State};
websocket_handle(_Frame, State) ->
{ok, State}.
websocket_info({nova_pubsub, comments, _Sender, "new_comment", Msg}, State) ->
{reply, {text, Msg}, State};
websocket_info(_Info, State) ->
{ok, State}.
In init/1 we join the comments channel. When a client sends a message, we broadcast it to all channel members. When a pub/sub message arrives via websocket_info/2, we forward it to the connected client. We will explore pub/sub in depth in the Pub/Sub chapter.
With WebSockets in place, let's build a real-time comment feed using Pub/Sub.
Pub/Sub and Real-Time Feed
In the WebSockets chapter we used nova_pubsub to broadcast comments. Now let's dive deeper into Nova's pub/sub system and build a real-time feed for our blog — live notifications when posts are published and comments are added.
How nova_pubsub works
Nova's pub/sub is built on OTP's pg module (process groups). It starts automatically with Nova — no configuration needed. Any Erlang process can join channels, and messages are delivered to all members.
%% Join a channel
nova_pubsub:join(channel_name).
%% Leave a channel
nova_pubsub:leave(channel_name).
%% Broadcast to all members on all nodes
nova_pubsub:broadcast(channel_name, Topic, Payload).
%% Broadcast to members on the local node only
nova_pubsub:local_broadcast(channel_name, Topic, Payload).
%% Get all members of a channel
nova_pubsub:get_members(channel_name).
%% Get members on the local node
nova_pubsub:get_local_members(channel_name).
Channels are atoms. Topics can be lists or binaries. Payloads can be anything.
Message format
When a process receives a pub/sub message, it arrives as:
{nova_pubsub, Channel, SenderPid, Topic, Payload}
In a gen_server, handle this in handle_info/2. In a WebSocket handler, use websocket_info/2.
Building the real-time feed
Notification WebSocket handler
Create src/controllers/blog_feed_handler.erl:
-module(blog_feed_handler).
-behaviour(nova_websocket).
-export([
init/1,
websocket_handle/2,
websocket_info/2
]).
init(State) ->
nova_pubsub:join(posts),
nova_pubsub:join(comments),
{ok, State}.
websocket_handle({text, <<"ping">>}, State) ->
{reply, {text, <<"pong">>}, State};
websocket_handle(_Frame, State) ->
{ok, State}.
websocket_info({nova_pubsub, Channel, _Sender, Topic, Payload}, State) ->
Msg = thoas:encode(#{
channel => Channel,
event => list_to_binary(Topic),
data => Payload
}),
{reply, {text, Msg}, State};
websocket_info(_Info, State) ->
{ok, State}.
On connect, the handler joins both the posts and comments channels. Any pub/sub message is encoded as JSON and forwarded to the client.
Broadcasting from controllers
Update the posts controller to broadcast on changes:
create(#{json := Params}) ->
CS = post:changeset(#{}, Params),
case blog_repo:insert(CS) of
{ok, Post} ->
nova_pubsub:broadcast(posts, "post_created", post_to_json(Post)),
{json, 201, #{}, post_to_json(Post)};
{error, #kura_changeset{} = CS1} ->
{json, 422, #{}, #{errors => changeset_errors_to_json(CS1)}}
end.
Do the same for updates and deletes:
%% After a successful update:
nova_pubsub:broadcast(posts, "post_updated", post_to_json(Updated)),
%% After a successful delete:
nova_pubsub:broadcast(posts, "post_deleted", #{id => binary_to_integer(Id)}),
And for comments:
%% After creating a comment:
nova_pubsub:broadcast(comments, "comment_created", comment_to_json(Comment)),
The comment_to_json/1 helper follows the same pattern as post_to_json/1:
comment_to_json(#{id := Id, body := Body, post_id := PostId, inserted_at := At}) ->
#{id => Id, body => Body, post_id => PostId, inserted_at => At}.
Adding the route
{"/feed", blog_feed_handler, #{protocol => ws}}
Client-side JavaScript
const ws = new WebSocket("ws://localhost:8080/feed");
ws.onmessage = (event) => {
const msg = JSON.parse(event.data);
console.log(`[${msg.channel}] ${msg.event}:`, msg.data);
switch (msg.event) {
case "post_created":
// Add the new post to the feed
break;
case "post_updated":
// Update the post in the feed
break;
case "post_deleted":
// Remove the post from the feed
break;
case "comment_created":
// Append the new comment
break;
}
};
// Keep-alive
setInterval(() => ws.send("ping"), 30000);
Per-post comment feeds
For a live comment section on a specific post, use dynamic channel names:
-module(blog_post_comments_handler).
-behaviour(nova_websocket).
-export([init/1, websocket_handle/2, websocket_info/2]).
init(#{req := #{bindings := #{<<"post_id">> := PostId}}} = State) ->
Channel = list_to_atom("post_comments_" ++ binary_to_list(PostId)),
nova_pubsub:join(Channel),
{ok, State#{channel => Channel}};
init(State) ->
{ok, State}.
websocket_handle(_Frame, State) ->
{ok, State}.
websocket_info({nova_pubsub, _Channel, _Sender, _Topic, Payload}, State) ->
{reply, {text, thoas:encode(Payload)}, State};
websocket_info(_Info, State) ->
{ok, State}.
Route:
{"/posts/:post_id/comments/ws", blog_post_comments_handler, #{protocol => ws}}
When creating a comment, broadcast to the post-specific channel. Note that list_to_atom creates a new atom per post ID, and atoms are never garbage collected; for truly unbounded IDs, consider a single comments channel with the post ID in the payload instead:
Channel = list_to_atom("post_comments_" ++ integer_to_list(PostId)),
nova_pubsub:broadcast(Channel, "new_comment", comment_to_json(Comment)).
Using pub/sub in gen_servers
Any Erlang process can join a channel. This is useful for background workers like search indexing:
-module(blog_search_indexer).
-behaviour(gen_server).
-export([start_link/0]).
-export([init/1, handle_info/2, handle_cast/2, handle_call/3]).
start_link() ->
gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).
init([]) ->
nova_pubsub:join(posts),
{ok, #{}}.
handle_info({nova_pubsub, posts, _Sender, "post_created", Post}, State) ->
logger:info("Indexing new post: ~p", [maps:get(title, Post)]),
%% Add to search index
{noreply, State};
handle_info({nova_pubsub, posts, _Sender, "post_deleted", #{id := Id}}, State) ->
logger:info("Removing post ~p from index", [Id]),
%% Remove from search index
{noreply, State};
handle_info(_Info, State) ->
{noreply, State}.
handle_cast(_Msg, State) ->
{noreply, State}.
handle_call(_Req, _From, State) ->
{reply, ok, State}.
Add it to your supervisor to start automatically.
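Starting it amounts to one child spec. A sketch, assuming a conventional blog_sup supervisor module:

```erlang
%% In blog_sup:init/1 (sketch) — add the indexer to the child list.
init([]) ->
    Children = [
        #{id => blog_search_indexer,
          start => {blog_search_indexer, start_link, []},
          restart => permanent,
          type => worker}
    ],
    {ok, {#{strategy => one_for_one, intensity => 5, period => 10}, Children}}.
```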
Distributed pub/sub
nova_pubsub works across Erlang nodes. If you have multiple instances connected in a cluster, broadcast/3 delivers to all members on all nodes.
For local-only messaging (e.g., clearing a local cache):
nova_pubsub:local_broadcast(posts, "cache_invalidated", #{id => PostId}).
Organizing channels and topics
%% Different channels for different domains
nova_pubsub:join(posts).
nova_pubsub:join(comments).
nova_pubsub:join(users).
nova_pubsub:join(system).
%% Topics within channels for filtering
nova_pubsub:broadcast(posts, "created", Post).
nova_pubsub:broadcast(posts, "published", Post).
nova_pubsub:broadcast(comments, "created", Comment).
nova_pubsub:broadcast(users, "logged_in", #{username => User}).
nova_pubsub:broadcast(system, "deploy", #{version => <<"1.2.0">>}).
Processes can join multiple channels and pattern match on channel and topic in their handlers.
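A sketch of that pattern — one worker subscribed to several channels, dispatching on channel and topic (notify_subscribers/1 is a hypothetical helper):

```erlang
%% Sketch: multi-channel subscriber. notify_subscribers/1 is hypothetical.
init([]) ->
    nova_pubsub:join(posts),
    nova_pubsub:join(system),
    {ok, #{}}.

handle_info({nova_pubsub, posts, _From, "published", Post}, State) ->
    notify_subscribers(Post),
    {noreply, State};
handle_info({nova_pubsub, system, _From, "deploy", #{version := V}}, State) ->
    logger:notice("Deployed version ~s", [V]),
    {noreply, State};
handle_info(_Other, State) ->
    {noreply, State}.
```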
Next, let's build a complete live feature combining WebSockets and pub/sub.
Building a Live Feature
Let's bring everything together — Arizona live views, Nova PubSub, and Kura — to build a real-time comment section for our blog. When anyone posts a comment, all viewers see it instantly.
The live comment section
-module(blog_post_live).
-compile({parse_transform, arizona_parse_transform}).
-behaviour(arizona_view).
-export([mount/2, render/1, handle_event/3, handle_info/2]).
mount(#{<<"id">> := PostId}, _Req) ->
Id = binary_to_integer(PostId),
{ok, Post} = blog_repo:get(post, Id),
Post1 = blog_repo:preload(post, Post, [{comments, [author]}]),
%% Subscribe to real-time comment updates
Channel = list_to_atom("comments_" ++ integer_to_list(Id)),
nova_pubsub:join(Channel),
arizona_view:new(?MODULE, #{
id => list_to_binary("post_live_" ++ integer_to_list(Id)),
post => Post1,
comments => maps:get(comments, Post1, []),
new_comment => <<>>,
channel => Channel
}, none).
render(Bindings) ->
Post = arizona_template:get_binding(post, Bindings),
Comments = arizona_template:get_binding(comments, Bindings),
arizona_template:from_html(~"""
<article>
<h1>{maps:get(title, Post)}</h1>
<div class="body">{maps:get(body, Post)}</div>
</article>
<section class="comments">
<h2>Comments ({integer_to_list(length(Comments))})</h2>
{arizona_template:render_list(Comments, fun(C) ->
render_comment(C)
end)}
<form az-submit="post_comment">
<textarea name="body" placeholder="Write a comment..."
az-change="update_comment">{arizona_template:get_binding(new_comment, Bindings)}</textarea>
<button type="submit">Post Comment</button>
</form>
</section>
""").
handle_event(<<"update_comment">>, #{<<"body">> := Body}, View) ->
State = arizona_view:get_state(View),
NewState = arizona_stateful:put_binding(new_comment, Body, State),
{[], arizona_view:update_state(NewState, View)};
handle_event(<<"post_comment">>, #{<<"body">> := Body}, View) ->
State = arizona_view:get_state(View),
Post = arizona_stateful:get_binding(post, State),
Channel = arizona_stateful:get_binding(channel, State),
PostId = maps:get(id, Post),
CS = comment:changeset(#{}, #{<<"body">> => Body,
<<"post_id">> => PostId,
<<"user_id">> => 1}),
case blog_repo:insert(CS) of
{ok, Comment} ->
Comment1 = blog_repo:preload(comment, Comment, [author]),
%% Broadcast to all viewers
nova_pubsub:broadcast(Channel, "new_comment", Comment1),
NewState = arizona_stateful:put_binding(new_comment, <<>>, State),
{[], arizona_view:update_state(NewState, View)};
{error, _} ->
{[], View}
end.
%% Receive broadcasts from PubSub
handle_info({nova_pubsub, _Channel, _Sender, "new_comment", Comment}, View) ->
State = arizona_view:get_state(View),
Comments = arizona_stateful:get_binding(comments, State),
NewState = arizona_stateful:put_binding(comments, Comments ++ [Comment], State),
{[], arizona_view:update_state(NewState, View)};
%% Ignore any other messages the view process may receive
handle_info(_Info, View) ->
{[], View}.
%% Helpers
render_comment(Comment) ->
arizona_template:from_html(~"""
<div class="comment">
<strong>{maps:get(username, maps:get(author, Comment))}</strong>
<p>{maps:get(body, Comment)}</p>
</div>
""").
How it works
- When a user visits /posts/42, Arizona mounts blog_post_live with the post ID
- The mount function loads the post with comments and subscribes to PubSub
- Arizona renders the HTML and sends it to the browser
- When someone submits a comment:
  - The comment is saved to the database via Kura
  - The comment is broadcast via Nova PubSub
- All subscribed live views receive the broadcast in handle_info/2
- Each live view updates its state with the new comment
- Arizona diffs the HTML and pushes only the new comment to each client
Broadcasting from controllers
You can also broadcast from traditional Nova controllers. If comments are also created via the JSON API:
%% In blog_comments_controller.erl
create(#{json := Params}) ->
CS = comment:changeset(#{}, Params),
case blog_repo:insert(CS) of
{ok, Comment} ->
PostId = maps:get(post_id, Comment),
Comment1 = blog_repo:preload(comment, Comment, [author]),
Channel = list_to_atom("comments_" ++ integer_to_list(PostId)),
nova_pubsub:broadcast(Channel, "new_comment", Comment1),
{json, 201, #{}, comment_to_json(Comment1)};
{error, CS1} ->
{json, 422, #{}, #{errors => changeset_errors_to_json(CS1)}}
end.
comment_to_json(#{id := Id, body := Body, post_id := PostId, inserted_at := At}) ->
#{id => Id, body => Body, post_id => PostId, inserted_at => At}.
Both live views and WebSocket handlers receive the broadcast — any process that called nova_pubsub:join(Channel) gets the message.
Optimistic updates
For a snappier feel, update the UI immediately and reconcile later:
handle_event(<<"post_comment">>, #{<<"body">> := Body}, View) ->
State = arizona_view:get_state(View),
Post = arizona_stateful:get_binding(post, State),
Comments = arizona_stateful:get_binding(comments, State),
PostId = maps:get(id, Post),
%% Optimistic: show the comment immediately
TempComment = #{body => Body, author => #{username => <<"you">>},
id => temp, post_id => PostId},
S1 = arizona_stateful:put_binding(comments, Comments ++ [TempComment], State),
S2 = arizona_stateful:put_binding(new_comment, <<>>, S1),
%% Persist (synchronous here; a real app would reconcile the temp comment on error)
CS = comment:changeset(#{}, #{<<"body">> => Body,
<<"post_id">> => PostId,
<<"user_id">> => 1}),
_ = blog_repo:insert(CS),
{[], arizona_view:update_state(S2, View)}.
With our live feature complete, let's add email notifications. Next: Sending Email.
Sending Email
Hikyaku is a composable email library for Erlang with pluggable adapters. It handles building and delivering emails without tying you to a specific provider.
Adding Hikyaku
Add the dependency to rebar.config:
{deps, [
nova,
{kura, "~> 1.0"},
{hikyaku, "~> 0.1"}
]}.
Add hikyaku to your application dependencies in src/blog.app.src:
{applications,
[kernel,
stdlib,
nova,
kura,
hikyaku
]},
Creating a mailer
Hikyaku uses a behaviour-based pattern. Define a mailer module that configures the delivery adapter:
-module(blog_mailer).
-behaviour(hikyaku_mailer).
-export([config/0]).
config() ->
#{adapter => hikyaku_adapter_logger}.
The hikyaku_adapter_logger prints emails to the console — perfect for development. We'll switch to a real adapter for production.
Available adapters
| Adapter | Service | Config keys |
|---|---|---|
| hikyaku_adapter_smtp | Any SMTP server | relay, port, username, password, tls |
| hikyaku_adapter_sendgrid | SendGrid v3 API | api_key |
| hikyaku_adapter_mailgun | Mailgun | api_key, domain |
| hikyaku_adapter_ses | Amazon SES v2 | access_key, secret_key, region |
| hikyaku_adapter_logger | Console output | level |
| hikyaku_adapter_test | Test assertions | pid |
Building an email
Hikyaku uses a builder API — each function takes an email record and returns a new one:
E0 = hikyaku_email:new(),
E1 = hikyaku_email:from(E0, {<<"Blog">>, <<"noreply@myblog.com">>}),
E2 = hikyaku_email:to(E1, {<<"Alice">>, <<"alice@example.com">>}),
E3 = hikyaku_email:subject(E2, <<"Welcome to the Blog!">>),
E4 = hikyaku_email:text_body(E3, <<"Thanks for signing up, Alice.">>),
E5 = hikyaku_email:html_body(E4, <<"<h1>Welcome!</h1><p>Thanks for signing up.</p>">>),
{ok, _} = hikyaku_mailer:deliver(blog_mailer, E5).
Builder functions
| Function | Purpose |
|---|---|
| hikyaku_email:new/0 | Create a new email |
| hikyaku_email:from/2 | Set the sender ({Name, Address} or Address) |
| hikyaku_email:to/2 | Add a recipient |
| hikyaku_email:cc/2 | Add a CC recipient |
| hikyaku_email:bcc/2 | Add a BCC recipient |
| hikyaku_email:reply_to/2 | Set the reply-to address |
| hikyaku_email:subject/2 | Set the subject line |
| hikyaku_email:text_body/2 | Set the plain text body |
| hikyaku_email:html_body/2 | Set the HTML body |
| hikyaku_email:header/3 | Add a custom header |
| hikyaku_email:attachment/2 | Add an attachment |
Creating email helper functions
Organize your emails in a dedicated module:
-module(blog_emails).
-export([welcome/1, comment_notification/2]).
welcome(#{email := Email, username := Username}) ->
E0 = hikyaku_email:new(),
E1 = hikyaku_email:from(E0, {<<"Nova Blog">>, <<"noreply@myblog.com">>}),
E2 = hikyaku_email:to(E1, Email),
E3 = hikyaku_email:subject(E2, <<"Welcome to Nova Blog!">>),
E4 = hikyaku_email:text_body(E3,
<<"Hi ", Username/binary, ",\n\n",
"Thanks for joining Nova Blog.\n\n",
"— The Blog Team">>),
E5 = hikyaku_email:html_body(E4,
<<"<h1>Welcome, ", Username/binary, "!</h1>",
"<p>Thanks for joining Nova Blog.</p>">>),
hikyaku_mailer:deliver(blog_mailer, E5).
comment_notification(Post, Comment) ->
AuthorEmail = maps:get(email, maps:get(author, Post)),
CommentAuthor = maps:get(username, maps:get(author, Comment)),
PostTitle = maps:get(title, Post),
E0 = hikyaku_email:new(),
E1 = hikyaku_email:from(E0, {<<"Nova Blog">>, <<"noreply@myblog.com">>}),
E2 = hikyaku_email:to(E1, AuthorEmail),
E3 = hikyaku_email:subject(E2,
<<CommentAuthor/binary, " commented on \"", PostTitle/binary, "\"">>),
E4 = hikyaku_email:text_body(E3,
<<CommentAuthor/binary, " left a comment on your post \"",
PostTitle/binary, "\":\n\n",
(maps:get(body, Comment))/binary>>),
hikyaku_mailer:deliver(blog_mailer, E4).
Attachments
Attachment = hikyaku_attachment:from_data(CsvData, <<"export.csv">>),
E0 = hikyaku_email:new(),
E1 = hikyaku_email:from(E0, <<"noreply@myblog.com">>),
E2 = hikyaku_email:to(E1, <<"alice@example.com">>),
E3 = hikyaku_email:subject(E2, <<"Your export is ready">>),
E4 = hikyaku_email:text_body(E3, <<"See attached.">>),
E5 = hikyaku_email:attachment(E4, Attachment),
{ok, _} = hikyaku_mailer:deliver(blog_mailer, E5).
For inline images (e.g. in HTML emails):
LogoAttachment = hikyaku_attachment:from_data(LogoData, <<"logo.png">>),
InlineAttachment = hikyaku_attachment:inline(LogoAttachment, <<"logo">>),
E0 = hikyaku_email:new(),
E1 = hikyaku_email:html_body(E0, <<"<h1>Hello</h1><img src=\"cid:logo\">">>),
E2 = hikyaku_email:attachment(E1, InlineAttachment),
hikyaku_mailer:deliver(blog_mailer, E2).
Production adapter configuration
SendGrid
-module(blog_mailer).
-behaviour(hikyaku_mailer).
-export([config/0]).
config() ->
#{adapter => hikyaku_adapter_sendgrid,
api_key => application:get_env(blog, sendgrid_api_key, <<>>)}.
Amazon SES
config() ->
#{adapter => hikyaku_adapter_ses,
access_key => application:get_env(blog, aws_access_key, <<>>),
secret_key => application:get_env(blog, aws_secret_key, <<>>),
region => <<"us-east-1">>}.
SMTP
config() ->
#{adapter => hikyaku_adapter_smtp,
relay => <<"smtp.example.com">>,
port => 587,
username => application:get_env(blog, smtp_user, <<>>),
password => application:get_env(blog, smtp_pass, <<>>),
tls => always}.
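The application:get_env/3 calls above read from the application environment. A matching sys.config fragment might look like this — the keys mirror the configs above, and the values are placeholders:

```erlang
%% sys.config (sketch) — placeholder values, read by the adapter configs above.
[{blog, [
    {sendgrid_api_key, <<"SG.replace-me">>},
    {smtp_user, <<"mailer@example.com">>},
    {smtp_pass, <<"replace-me">>}
]}].
```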
Next, let's build Transactional Email flows — registration confirmation, password reset, and notifications.
Transactional Email
In the previous chapter we set up Hikyaku and built email helpers. Now let's wire emails into real application flows — registration confirmation, password reset, and comment notifications.
Registration confirmation
When a user registers, send a confirmation email with a time-limited token:
-module(blog_accounts).
-export([register_user/1, confirm_user/1]).
register_user(Params) ->
CS = user:registration_changeset(#{}, Params),
case blog_repo:insert(CS) of
{ok, User} ->
Token = generate_token(maps:get(id, User), <<"confirm">>, 24),
blog_emails:confirmation(User, Token),
{ok, User};
{error, CS1} ->
{error, CS1}
end.
confirm_user(Token) ->
case verify_token(Token, <<"confirm">>) of
{ok, UserId} ->
{ok, User} = blog_repo:get(user, UserId),
CS = kura_changeset:cast(user, User,
#{<<"confirmed_at">> => calendar:universal_time()},
[confirmed_at]),
blog_repo:update(CS);
{error, _} ->
{error, invalid_token}
end.
The email helper:
-module(blog_emails).
-export([welcome/1, confirmation/2, password_reset/2, comment_notification/2]).
confirmation(#{email := Email, username := Username}, Token) ->
ConfirmUrl = <<"https://myblog.com/confirm?token=", Token/binary>>,
E0 = hikyaku_email:new(),
E1 = hikyaku_email:from(E0, {<<"Nova Blog">>, <<"noreply@myblog.com">>}),
E2 = hikyaku_email:to(E1, Email),
E3 = hikyaku_email:subject(E2, <<"Confirm your email">>),
E4 = hikyaku_email:text_body(E3,
<<"Hi ", Username/binary, ",\n\n",
"Click the link below to confirm your email:\n\n",
ConfirmUrl/binary, "\n\n",
"This link expires in 24 hours.">>),
E5 = hikyaku_email:html_body(E4,
<<"<h1>Confirm your email</h1>",
"<p>Hi ", Username/binary, ",</p>",
"<p><a href=\"", ConfirmUrl/binary, "\">Click here to confirm</a></p>",
"<p>This link expires in 24 hours.</p>">>),
hikyaku_mailer:deliver(blog_mailer, E5).
Password reset
request_password_reset(Email) ->
case blog_repo:get_by(user, [{email, Email}]) of
{ok, User} ->
Token = generate_token(maps:get(id, User), <<"reset">>, 1),
blog_emails:password_reset(User, Token),
ok;
{error, not_found} ->
%% Don't reveal whether the email exists
ok
end.
reset_password(Token, NewPassword) ->
case verify_token(Token, <<"reset">>) of
{ok, UserId} ->
{ok, User} = blog_repo:get(user, UserId),
CS = user:password_changeset(User, #{<<"password">> => NewPassword}),
blog_repo:update(CS);
{error, _} ->
{error, invalid_token}
end.
password_reset(#{email := Email, username := Username}, Token) ->
ResetUrl = <<"https://myblog.com/reset-password?token=", Token/binary>>,
E0 = hikyaku_email:new(),
E1 = hikyaku_email:from(E0, {<<"Nova Blog">>, <<"noreply@myblog.com">>}),
E2 = hikyaku_email:to(E1, Email),
E3 = hikyaku_email:subject(E2, <<"Reset your password">>),
E4 = hikyaku_email:text_body(E3,
<<"Hi ", Username/binary, ",\n\n",
"Click below to reset your password:\n\n",
ResetUrl/binary, "\n\n",
"This link expires in 1 hour.\n",
"If you didn't request this, ignore this email.">>),
hikyaku_mailer:deliver(blog_mailer, E4).
Comment notifications
Notify post authors when someone comments, triggered from PubSub:
-module(blog_notification_worker).
-behaviour(gen_server).
-export([start_link/0, init/1, handle_info/2, handle_cast/2, handle_call/3]).
start_link() ->
gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).
init([]) ->
nova_pubsub:join(comments),
{ok, #{}}.
handle_info({nova_pubsub, comments, _Sender, "comment_created", Comment}, State) ->
PostId = maps:get(post_id, Comment),
{ok, Post} = blog_repo:get(post, PostId),
Post1 = blog_repo:preload(post, Post, [author]),
Comment1 = blog_repo:preload(comment, Comment, [author]),
%% Don't notify if the author commented on their own post
PostAuthorId = maps:get(id, maps:get(author, Post1)),
CommentAuthorId = maps:get(user_id, Comment1),
case PostAuthorId =/= CommentAuthorId of
true -> blog_emails:comment_notification(Post1, Comment1);
false -> ok
end,
{noreply, State};
handle_info(_Info, State) ->
{noreply, State}.
handle_cast(_Msg, State) -> {noreply, State}.
handle_call(_Req, _From, State) -> {reply, ok, State}.
Add this worker to your supervision tree so it starts automatically with the application.
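A minimal child spec for the worker might look like this, assuming the blog_sup supervisor generated with the project:

```erlang
%% In src/blog_sup.erl — register the notification worker so it starts
%% with the application. The supervisor flags shown are illustrative.
init([]) ->
    SupFlags = #{strategy => one_for_one, intensity => 5, period => 10},
    Children = [
        #{id => blog_notification_worker,
          start => {blog_notification_worker, start_link, []},
          restart => permanent,
          shutdown => 5000,
          type => worker}
    ],
    {ok, {SupFlags, Children}}.
```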
Testing email delivery
Use the test adapter to capture emails in tests. Configure blog_mailer to use the test adapter in your test environment, with self() as the receiving process:
%% In test config or test setup
%% Override blog_mailer to use the test adapter:
config() ->
#{adapter => hikyaku_adapter_test,
pid => self()}.
%% In your test
test_registration_sends_email(_Config) ->
{ok, _User} = blog_accounts:register_user(#{
<<"username">> => <<"testuser">>,
<<"email">> => <<"test@example.com">>,
<<"password">> => <<"password123">>
}),
receive
{hikyaku_email, Email} ->
<<"Confirm your email">> = hikyaku_email:get_subject(Email),
ok
after 1000 ->
ct:fail("No email received")
end.
Token generation helpers
generate_token(UserId, Purpose, ExpiryHours) ->
Payload = #{user_id => UserId, purpose => Purpose,
expires_at => erlang:system_time(second) + ExpiryHours * 3600},
base64:encode(term_to_binary(Payload)).
verify_token(Token, ExpectedPurpose) ->
try
Payload = binary_to_term(base64:decode(Token)),
#{user_id := UserId, purpose := Purpose, expires_at := ExpiresAt} = Payload,
Now = erlang:system_time(second),
case Purpose =:= ExpectedPurpose andalso ExpiresAt > Now of
true -> {ok, UserId};
false -> {error, expired}
end
catch _:_ ->
{error, invalid}
end.
This is a simplified token implementation for illustration. In production, use cryptographically signed tokens (e.g. HMAC-SHA256) and store token hashes in the database for revocation.
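As a sketch of that approach, here is an HMAC-signed variant of the token helpers. The module name blog_token and the secret_key_base config key are made up for illustration, crypto:hash_equals/2 requires OTP 25+, and database-backed revocation is still omitted:

```erlang
%% Sketch: HMAC-SHA256-signed tokens. Not a Nova API — names are illustrative.
-module(blog_token).
-export([sign/3, verify/2]).

sign(UserId, Purpose, ExpiryHours) ->
    Payload = term_to_binary(#{user_id => UserId, purpose => Purpose,
                               expires_at => erlang:system_time(second)
                                             + ExpiryHours * 3600}),
    Mac = crypto:mac(hmac, sha256, secret(), Payload),
    %% HMAC-SHA256 output is 32 bytes; prepend it to the payload
    base64:encode(<<Mac/binary, Payload/binary>>).

verify(Token, ExpectedPurpose) ->
    try
        <<Mac:32/binary, Payload/binary>> = base64:decode(Token),
        %% Constant-time comparison; crashes into the catch on mismatch
        true = crypto:hash_equals(crypto:mac(hmac, sha256, secret(), Payload), Mac),
        #{user_id := UserId, purpose := Purpose, expires_at := ExpiresAt} =
            binary_to_term(Payload, [safe]),
        case Purpose =:= ExpectedPurpose andalso
             ExpiresAt > erlang:system_time(second) of
            true -> {ok, UserId};
            false -> {error, expired}
        end
    catch _:_ ->
        {error, invalid}
    end.

secret() ->
    %% Assumes a secret is configured under the blog app environment
    {ok, Secret} = application:get_env(blog, secret_key_base),
    Secret.
```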
With email integrated, let's ensure everything works with proper Unit Testing.
Unit Testing
Nova controllers are regular Erlang functions — they take a request map and return a tuple. Changesets are pure functions — data in, data out. This makes unit testing straightforward with EUnit.
Adding nova_test
Add nova_test as a test dependency in rebar.config:
{profiles, [
{test, [
{deps, [
{nova_test, "0.1.0"}
]}
]}
]}.
Testing changesets
Changesets are pure — no database, no side effects. Test them directly:
-module(post_changeset_tests).
-include_lib("kura/include/kura.hrl").
-include_lib("eunit/include/eunit.hrl").
valid_changeset_test() ->
CS = post:changeset(#{}, #{<<"title">> => <<"Good Title">>,
<<"body">> => <<"Some content">>}),
?assert(CS#kura_changeset.valid).
missing_title_test() ->
CS = post:changeset(#{}, #{<<"body">> => <<"Some content">>}),
?assertNot(CS#kura_changeset.valid),
?assertMatch([{title, _} | _], CS#kura_changeset.errors).
title_too_short_test() ->
CS = post:changeset(#{}, #{<<"title">> => <<"Hi">>,
<<"body">> => <<"Content">>}),
?assertNot(CS#kura_changeset.valid),
?assertMatch([{title, _}], CS#kura_changeset.errors).
invalid_status_test() ->
CS = post:changeset(#{}, #{<<"title">> => <<"Good Title">>,
<<"body">> => <<"Content">>,
<<"status">> => <<"invalid">>}),
?assertNot(CS#kura_changeset.valid).
valid_email_format_test() ->
CS = user:changeset(#{}, #{<<"username">> => <<"alice">>,
<<"email">> => <<"alice@example.com">>,
<<"password_hash">> => <<"hashed">>}),
?assert(CS#kura_changeset.valid).
invalid_email_format_test() ->
CS = user:changeset(#{}, #{<<"username">> => <<"alice">>,
<<"email">> => <<"not-an-email">>,
<<"password_hash">> => <<"hashed">>}),
?assertNot(CS#kura_changeset.valid).
Testing controllers
The controller tests below call blog_repo functions, which need a running database. They are closer to integration tests. For true unit tests, you could mock the repo — but in practice, testing against a real database (as shown in Integration Testing) catches more bugs. These examples show how to use nova_test_req to build request maps.
The nova_test_req module builds well-formed request maps so you don't have to construct them by hand:
-module(blog_posts_controller_tests).
-include_lib("nova_test/include/nova_test.hrl").
show_existing_post_test() ->
Req = nova_test_req:new(get, "/api/posts/1"),
Req1 = nova_test_req:with_bindings(#{<<"id">> => <<"1">>}, Req),
Result = blog_posts_controller:show(Req1),
?assertMatch({json, #{id := 1, title := _}}, Result).
show_missing_post_test() ->
Req = nova_test_req:new(get, "/api/posts/999999"),
Req1 = nova_test_req:with_bindings(#{<<"id">> => <<"999999">>}, Req),
Result = blog_posts_controller:show(Req1),
?assertMatch({status, 404, _, _}, Result).
create_post_test() ->
Req = nova_test_req:new(post, "/api/posts"),
Req1 = nova_test_req:with_json(#{<<"title">> => <<"Test Post">>,
<<"body">> => <<"Test body">>,
<<"user_id">> => 1}, Req),
Result = blog_posts_controller:create(Req1),
?assertMatch({json, 201, _, #{id := _}}, Result).
create_invalid_post_test() ->
Req = nova_test_req:new(post, "/api/posts"),
Req1 = nova_test_req:with_json(#{}, Req),
Result = blog_posts_controller:create(Req1),
?assertMatch({json, 422, _, #{errors := _}}, Result).
Request builder functions
| Function | Purpose |
|---|---|
| `nova_test_req:new/2` | Create a request with method and path |
| `nova_test_req:with_bindings/2` | Set path bindings (e.g. `#{<<"id">> => <<"1">>}`) |
| `nova_test_req:with_json/2` | Set a JSON body (auto-encodes, sets content-type) |
| `nova_test_req:with_header/3` | Add a request header |
| `nova_test_req:with_query/2` | Set query string parameters |
| `nova_test_req:with_body/2` | Set a raw body |
| `nova_test_req:with_auth_data/2` | Set auth data (for testing authenticated controllers) |
| `nova_test_req:with_peer/2` | Set the client peer address |
Testing security modules
-module(blog_auth_tests).
-include_lib("nova_test/include/nova_test.hrl").
valid_login_test() ->
Req = nova_test_req:new(post, "/login"),
Req1 = Req#{params => #{<<"username">> => <<"admin">>,
<<"password">> => <<"password">>}},
?assertMatch({true, #{authed := true, username := <<"admin">>}},
blog_auth:username_password(Req1)).
invalid_password_test() ->
Req = nova_test_req:new(post, "/login"),
Req1 = Req#{params => #{<<"username">> => <<"admin">>,
<<"password">> => <<"wrong">>}},
?assertEqual(false, blog_auth:username_password(Req1)).
missing_params_test() ->
Req = nova_test_req:new(post, "/login"),
?assertEqual(false, blog_auth:username_password(Req)).
Running EUnit tests
rebar3 eunit
Next: Integration Testing — testing the full application with HTTP requests.
Integration Testing
Unit tests verify individual functions. Integration tests verify that the full application works end-to-end — HTTP requests go through routing, plugins, security, controllers, and the database.
Setup
Integration tests use Common Test with nova_test helpers that manage application lifecycle and provide an HTTP client.
Database
Tests need a running PostgreSQL. Use the same docker-compose.yml from the Database Setup chapter:
docker compose up -d
Your test configuration should point at the test database:
%% test sys.config
{blog, [
{database, <<"blog_test">>}
]}
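How this file gets picked up depends on your setup; one common approach is pointing Common Test at it via ct_opts in rebar.config (the file path here is an assumption, adjust to where you keep your test config):

```erlang
%% rebar.config — make rebar3 ct boot the node with the test configuration.
{ct_opts, [
    {sys_config, ["config/test_sys.config"]}
]}.
```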
Writing integration tests
Create test/blog_api_SUITE.erl:
-module(blog_api_SUITE).
-include_lib("common_test/include/ct.hrl").
-include_lib("nova_test/include/nova_test.hrl").
-export([
all/0,
init_per_suite/1,
end_per_suite/1,
test_list_posts/1,
test_create_post/1,
test_create_invalid_post/1,
test_get_post/1,
test_update_post/1,
test_delete_post/1,
test_get_post_not_found/1
]).
all() ->
[test_list_posts,
test_create_post,
test_create_invalid_post,
test_get_post,
test_update_post,
test_delete_post,
test_get_post_not_found].
init_per_suite(Config) ->
nova_test:start(blog, Config).
end_per_suite(Config) ->
nova_test:stop(Config).
test_list_posts(Config) ->
{ok, Resp} = nova_test:get("/api/posts", Config),
?assertStatus(200, Resp),
?assertJson(#{<<"posts">> := _}, Resp).
test_create_post(Config) ->
{ok, Resp} = nova_test:post("/api/posts",
#{json => #{<<"title">> => <<"Test Post">>,
<<"body">> => <<"Test body">>,
<<"user_id">> => 1}},
Config),
?assertStatus(201, Resp),
?assertJson(#{<<"title">> := <<"Test Post">>}, Resp).
test_create_invalid_post(Config) ->
{ok, Resp} = nova_test:post("/api/posts",
#{json => #{<<"title">> => <<"Hi">>}},
Config),
?assertStatus(422, Resp),
?assertJson(#{<<"errors">> := _}, Resp).
test_get_post(Config) ->
%% Create a post first
{ok, CreateResp} = nova_test:post("/api/posts",
#{json => #{<<"title">> => <<"Get Test">>,
<<"body">> => <<"Body">>,
<<"user_id">> => 1}},
Config),
?assertStatus(201, CreateResp),
#{<<"id">> := Id} = nova_test:json(CreateResp),
%% Fetch it
{ok, Resp} = nova_test:get("/api/posts/" ++ integer_to_list(Id), Config),
?assertStatus(200, Resp),
?assertJson(#{<<"title">> := <<"Get Test">>}, Resp).
test_update_post(Config) ->
%% Create a post first
{ok, CreateResp} = nova_test:post("/api/posts",
#{json => #{<<"title">> => <<"Before Update">>,
<<"body">> => <<"Body">>,
<<"user_id">> => 1}},
Config),
#{<<"id">> := Id} = nova_test:json(CreateResp),
%% Update it
{ok, Resp} = nova_test:put("/api/posts/" ++ integer_to_list(Id),
#{json => #{<<"title">> => <<"After Update">>}},
Config),
?assertStatus(200, Resp),
?assertJson(#{<<"title">> := <<"After Update">>}, Resp).
test_delete_post(Config) ->
%% Create a post first
{ok, CreateResp} = nova_test:post("/api/posts",
#{json => #{<<"title">> => <<"To Delete">>,
<<"body">> => <<"Body">>,
<<"user_id">> => 1}},
Config),
#{<<"id">> := Id} = nova_test:json(CreateResp),
%% Delete it
{ok, Resp} = nova_test:delete("/api/posts/" ++ integer_to_list(Id), Config),
?assertStatus(204, Resp).
test_get_post_not_found(Config) ->
{ok, Resp} = nova_test:get("/api/posts/999999", Config),
?assertStatus(404, Resp).
Assertion macros
| Macro | Purpose |
|---|---|
| `?assertStatus(Code, Resp)` | Assert the HTTP status code |
| `?assertJson(Pattern, Resp)` | Pattern-match the decoded JSON body |
| `?assertBody(Expected, Resp)` | Assert the raw response body |
| `?assertHeader(Name, Expected, Resp)` | Assert a response header value |
Running integration tests
rebar3 ct
Running both
rebar3 do eunit, ct
Test structure
test/
├── blog_posts_controller_tests.erl %% EUnit — controller unit tests
├── post_changeset_tests.erl %% EUnit — changeset validation
├── blog_auth_tests.erl %% EUnit — security functions
└── blog_api_SUITE.erl %% Common Test — integration tests
- Use EUnit for fast unit tests of individual functions and changesets
- Use Common Test for integration tests that need the full application running
- Run both with `rebar3 do eunit, ct`
Next: Testing Real-Time — testing WebSocket handlers and live views.
Testing Real-Time
WebSocket handlers and live views need different testing approaches than HTTP endpoints. This chapter covers strategies for testing both.
Testing WebSocket handlers
WebSocket handlers are Erlang modules with callbacks. Test them by calling the callbacks directly:
-module(blog_ws_handler_tests).
-include_lib("eunit/include/eunit.hrl").
init_test() ->
{ok, State} = blog_ws_handler:init(#{}),
?assertMatch(#{}, State).
echo_test() ->
{reply, {text, Reply}, _State} =
blog_ws_handler:websocket_handle({text, <<"hello">>}, #{}),
?assertEqual(<<"Echo: hello">>, Reply).
ignore_binary_frames_test() ->
{ok, _State} =
blog_ws_handler:websocket_handle({binary, <<1,2,3>>}, #{}),
ok.
Testing PubSub integration
For handlers that use PubSub, verify that messages are forwarded:
-module(blog_feed_handler_tests).
-include_lib("eunit/include/eunit.hrl").
pubsub_message_forwarded_test() ->
State = #{},
Msg = {nova_pubsub, posts, self(), "post_created", #{id => 1, title => <<"Test">>}},
{reply, {text, Json}, _State} =
blog_feed_handler:websocket_info(Msg, State),
Decoded = thoas:decode(Json),
?assertMatch({ok, #{<<"channel">> := <<"posts">>}}, Decoded).
Integration testing WebSockets
For end-to-end WebSocket tests, use gun (an Erlang HTTP/WebSocket client):
-module(blog_ws_SUITE).
-include_lib("common_test/include/ct.hrl").
-export([all/0, init_per_suite/1, end_per_suite/1,
test_ws_echo/1, test_ws_feed/1]).
all() -> [test_ws_echo, test_ws_feed].
init_per_suite(Config) ->
application:ensure_all_started(gun),
nova_test:start(blog, Config).
end_per_suite(Config) ->
nova_test:stop(Config).
test_ws_echo(Config) ->
{ok, ConnPid} = gun:open("localhost", 8080),
{ok, _} = gun:await_up(ConnPid),
StreamRef = gun:ws_upgrade(ConnPid, "/ws"),
receive {gun_upgrade, ConnPid, StreamRef, _, _} -> ok end,
gun:ws_send(ConnPid, StreamRef, {text, <<"hello">>}),
receive
{gun_ws, ConnPid, StreamRef, {text, Reply}} ->
<<"Echo: hello">> = Reply
after 1000 ->
ct:fail("No WebSocket response")
end,
gun:close(ConnPid).
test_ws_feed(_Config) ->
{ok, ConnPid} = gun:open("localhost", 8080),
{ok, _} = gun:await_up(ConnPid),
StreamRef = gun:ws_upgrade(ConnPid, "/feed"),
receive {gun_upgrade, ConnPid, StreamRef, _, _} -> ok end,
%% Trigger a broadcast
nova_pubsub:broadcast(posts, "post_created", #{id => 99, title => <<"Test">>}),
receive
{gun_ws, ConnPid, StreamRef, {text, Json}} ->
{ok, Decoded} = thoas:decode(Json),
#{<<"event">> := <<"post_created">>} = Decoded
after 2000 ->
ct:fail("No feed message received")
end,
gun:close(ConnPid).
Testing Arizona live views
Arizona live views use opaque state types (arizona_view and arizona_stateful), so unit testing callbacks directly requires constructing views with arizona_view:new/3. Test the mount and event callbacks:
-module(blog_counter_live_tests).
-include_lib("eunit/include/eunit.hrl").
mount_test() ->
View = blog_counter_live:mount(#{}, undefined),
State = arizona_view:get_state(View),
?assertEqual(0, arizona_stateful:get_binding(count, State)).
increment_test() ->
View = blog_counter_live:mount(#{}, undefined),
{_Actions, View1} =
blog_counter_live:handle_event(<<"increment">>, #{}, View),
State = arizona_view:get_state(View1),
?assertEqual(1, arizona_stateful:get_binding(count, State)).
decrement_test() ->
View = blog_counter_live:mount(#{}, undefined),
{_Actions, View1} =
blog_counter_live:handle_event(<<"decrement">>, #{}, View),
State = arizona_view:get_state(View1),
?assertEqual(-1, arizona_stateful:get_binding(count, State)).
Testing live view rendering
Verify that render/1 produces expected content:
render_shows_count_test() ->
Html = blog_counter_live:render(#{count => 42}),
%% Check that the rendered output contains the count
?assertNotEqual(nomatch, binary:match(iolist_to_binary(Html), <<"42">>)).
Testing PubSub in live views
handle_info_new_comment_test() ->
Comment = #{id => 1, body => <<"Nice!">>, author => #{username => <<"bob">>}},
%% Build a view with empty comments
View = arizona_view:new(blog_post_live, #{
id => <<"test">>, comments => [], post => #{}, new_comment => <<>>,
channel => test_channel
}, none),
Msg = {nova_pubsub, comments_1, self(), "new_comment", Comment},
{_Actions, NewView} = blog_post_live:handle_info(Msg, View),
State = arizona_view:get_state(NewView),
?assertEqual([Comment], arizona_stateful:get_binding(comments, State)).
Test structure
test/
├── post_changeset_tests.erl %% EUnit — changeset validation
├── blog_posts_controller_tests.erl %% EUnit — controller unit tests
├── blog_auth_tests.erl %% EUnit — security functions
├── blog_ws_handler_tests.erl %% EUnit — WebSocket handler unit tests
├── blog_counter_live_tests.erl %% EUnit — live view unit tests
├── blog_api_SUITE.erl %% CT — API integration tests
└── blog_ws_SUITE.erl %% CT — WebSocket integration tests
With testing covered, let's prepare for production. Next: Configuration.
Configuration
Nova uses standard OTP application configuration via sys.config. This chapter covers organizing configuration across environments, using environment variables, and the key settings you'll need.
Configuration files
The generated project includes two config files:
- `config/dev_sys.config.src` — development settings (used by `rebar3 shell` and `rebar3 nova serve`)
- `config/prod_sys.config.src` — production settings (used when building releases)
The `.src` suffix marks the file as a template — `${VAR}` references are replaced with environment variable values when the node starts, so secrets stay out of the built release.
Development config
[
{kernel, [
{logger_level, debug},
{logger, [
{handler, default, logger_std_h,
#{formatter => {flatlog, #{
map_depth => 3,
term_depth => 50,
colored => true,
template => [colored_start, "[\033[1m", level, "\033[0m",
colored_start, "] ", msg, "\n", colored_end]
}}}}
]}
]},
{nova, [
{use_stacktrace, true},
{environment, dev},
{cowboy_configuration, #{port => 8080}},
{dev_mode, true},
{bootstrap_application, blog},
{plugins, [
{pre_request, nova_request_plugin, #{
read_urlencoded_body => true,
decode_json_body => true,
parse_qs => true
}}
]}
]},
{blog, [
{database, <<"blog_dev">>}
]}
].
Production config
[
{kernel, [
{logger_level, info},
{logger, [
{handler, default, logger_std_h,
#{config => #{file => "log/erlang.log"},
formatter => {flatlog, #{
map_depth => 3,
term_depth => 50,
colored => false,
template => ["[", level, "] ", msg, "\n"]
}}}}
]}
]},
{nova, [
{use_stacktrace, false},
{environment, prod},
{cowboy_configuration, #{port => 8080}},
{dev_mode, false},
{bootstrap_application, blog},
{plugins, [
{pre_request, nova_correlation_plugin, #{
request_correlation_header => <<"x-correlation-id">>,
logger_metadata_key => correlation_id
}},
{pre_request, nova_request_plugin, #{
decode_json_body => true,
read_urlencoded_body => true,
parse_qs => true
}},
{pre_request, nova_csrf_plugin, #{
excluded_paths => [<<"/api/">>]
}}
]}
]},
{blog, [
{database, <<"${DB_NAME}">>},
{db_host, <<"${DB_HOST}">>},
{db_user, <<"${DB_USER}">>},
{db_password, <<"${DB_PASSWORD}">>},
{sendgrid_api_key, <<"${SENDGRID_API_KEY}">>}
]}
].
Key differences from development:
- Logger level is `info` instead of `debug`
- `use_stacktrace` is `false` — don't leak stack traces to users
- Correlation plugin is enabled for request tracing
- CSRF plugin is enabled
- Secrets use `${VAR}` environment variable substitution
- `sendgrid_api_key` is stored under the `blog` app — the `blog_mailer` module reads it via `application:get_env/3` (see Sending Email)
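As a sketch, blog_mailer could expose a small helper that reads the injected key at runtime (the function name is illustrative):

```erlang
%% Inside blog_mailer — read the key injected via ${SENDGRID_API_KEY}.
%% Failing fast on missing config surfaces misconfiguration at first use.
api_key() ->
    case application:get_env(blog, sendgrid_api_key) of
        {ok, Key} -> Key;
        undefined -> error({missing_config, sendgrid_api_key})
    end.
```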
Nova configuration reference
| Key | Default | Description |
|---|---|---|
| `bootstrap_application` | (required) | Main application to bootstrap |
| `environment` | `dev` | Current environment (`dev` or `prod`) |
| `cowboy_configuration` | `#{port => 8080}` | Cowboy listener settings |
| `plugins` | `[]` | Global middleware plugins |
| `json_lib` | `thoas` | JSON encoding library |
| `use_stacktrace` | `false` | Include stacktraces in error responses |
| `use_sessions` | `true` | Enable session management |
| `session_manager` | `nova_session_ets` | Session backend module |
| `dev_mode` | `false` | Enable development features |
| `render_error_pages` | `true` | Use custom error page controllers |
| `dispatch_backend` | `persistent_term` | Route dispatch storage backend |
Environment-based routing
The routes/1 function receives the environment atom:
routes(prod) -> prod_routes();
routes(dev) -> prod_routes() ++ dev_routes().
This lets you add development-only routes (debug tools, test endpoints) without them leaking into production.
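A sketch of what that split might look like (blog_debug_controller and its route are hypothetical):

```erlang
%% Production routes are always mounted; dev_routes/0 adds extras that
%% exist only when the environment is dev.
prod_routes() ->
    [#{prefix => "/api",
       routes => [
           {"/posts", fun blog_posts_controller:list/1, #{methods => [get]}}
       ]}].

dev_routes() ->
    [#{prefix => "/_dev",
       routes => [
           %% Made-up debug endpoint, never reachable in prod
           {"/routes", fun blog_debug_controller:dump_routes/1, #{methods => [get]}}
       ]}].
```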
VM arguments
config/vm.args.src controls Erlang VM settings:
-name blog@${HOSTNAME}
-setcookie ${RELEASE_COOKIE}
+K true
+A30
+sbwt very_long
+swt very_low
- `-name` — full node name (needed for clustering)
- `-setcookie` — cluster security cookie
- `+K` — enable kernel poll (a no-op on OTP 21+, where kernel poll is always on)
- `+A` — async thread pool size
- `+sbwt` / `+swt` — scheduler busy-wait tuning
Next: Observability — tracing, metrics, and logging in production.
OpenTelemetry
When your Nova application is in production, you need visibility into what it is doing. OpenTelemetry is the industry standard for collecting traces and metrics. The opentelemetry_nova library gives you automatic instrumentation — every HTTP request gets a trace span and metrics are recorded without manual instrumentation code.
What you get
Once configured, opentelemetry_nova provides:
Distributed traces — Every incoming request creates a span with attributes like method, path, status code, controller, and action. If the caller sends a W3C traceparent header, the span is linked to the upstream trace.
HTTP metrics — Four metrics recorded for every request:
| Metric | Type | Description |
|---|---|---|
| `http.server.request.duration` | Histogram | Request duration in seconds |
| `http.server.active_requests` | Gauge | Number of in-flight requests |
| `http.server.request.body.size` | Histogram | Request body size in bytes |
| `http.server.response.body.size` | Histogram | Response body size in bytes |
Adding the dependency
Add opentelemetry_nova and the OpenTelemetry SDK to rebar.config:
{deps, [
nova,
{kura, "~> 1.0"},
{opentelemetry, "~> 1.5"},
{opentelemetry_experimental, "~> 0.5"},
{opentelemetry_exporter, "~> 1.8"},
opentelemetry_nova
]}.
Configuring the stream handler
opentelemetry_nova uses a Cowboy stream handler to intercept requests. Add otel_nova_stream_h to the Nova cowboy configuration:
{nova, [
{cowboy_configuration, #{
port => 8080,
stream_handlers => [otel_nova_stream_h, cowboy_stream_h]
}}
]}
The order matters — otel_nova_stream_h must come before cowboy_stream_h to wrap the full request lifecycle.
Setting up tracing
Configure the SDK to export traces via OTLP HTTP:
{opentelemetry, [
{span_processor, batch},
{traces_exporter, {opentelemetry_exporter, #{
protocol => http_protobuf,
endpoints => [#{host => "localhost", port => 4318, path => "/v1/traces"}]
}}}
]},
{opentelemetry_exporter, [
{otlp_protocol, http_protobuf},
{otlp_endpoint, "http://localhost:4318"}
]}
This sends traces to any OTLP-compatible backend — Grafana Tempo, Jaeger, or any OpenTelemetry Collector.
Setting up Prometheus metrics
Configure a metric reader with the Prometheus exporter:
{opentelemetry_experimental, [
{readers, [
#{module => otel_metric_reader,
config => #{
export_interval_ms => 5000,
exporter => {otel_nova_prom_exporter, #{}}
}}
]}
]}
In your application's start/2, initialize metrics and start the Prometheus HTTP server:
start(_StartType, _StartArgs) ->
opentelemetry_nova:setup(#{prometheus => #{port => 9464}}),
blog_sup:start_link().
This starts a Prometheus endpoint at http://localhost:9464/metrics. Point your Prometheus server or Grafana Agent at it.
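For reference, a minimal Prometheus scrape configuration for this endpoint could look like the following (the job name and interval are arbitrary choices):

```yaml
# prometheus.yml — scrape the metrics endpoint exposed by opentelemetry_nova
scrape_configs:
  - job_name: "blog"
    scrape_interval: 15s
    static_configs:
      - targets: ["localhost:9464"]
```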
If you only want metrics without the Prometheus HTTP server (e.g., pushing via OTLP instead), call opentelemetry_nova:setup() with no arguments.
Span enrichment with the Nova plugin
The stream handler creates spans with basic HTTP attributes. To also get the controller and action on each span, add the otel_nova_plugin as a pre-request plugin:
routes(_Environment) ->
[#{
plugins => [{pre_request, otel_nova_plugin, #{}}],
routes => [
{"/posts", fun blog_posts_controller:list/1, #{methods => [get]}},
{"/posts/:id", fun blog_posts_controller:show/1, #{methods => [get]}}
]
}].
Spans get enriched with nova.app, nova.controller, and nova.action attributes, and the span name becomes GET blog_posts_controller:list instead of just HTTP GET.
Kura query telemetry
Kura has its own telemetry for database queries. Enable it in sys.config:
{kura, [{log, true}]}
This logs every query with its SQL, parameters, duration, and row count. For custom handling, pass an {M, F} tuple:
{kura, [{log, {blog_telemetry, log_query}}]}
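A sketch of such a handler is shown below. The shape of the event Kura passes to the `{M, F}` callback is an assumption here (a single map with the fields listed above, duration in microseconds); check Kura's documentation for the real contract:

```erlang
%% Sketch of a custom query logger. The log_query/1 arity and the event
%% map shape are assumptions, not a documented Kura contract.
-module(blog_telemetry).
-include_lib("kernel/include/logger.hrl").
-export([log_query/1]).

log_query(#{query := SQL, duration := DurationUs}) ->
    %% Flag anything slower than 100 ms (assuming microsecond durations)
    case DurationUs > 100000 of
        true  -> ?LOG_WARNING("slow query (~p us): ~s", [DurationUs, SQL]);
        false -> ?LOG_DEBUG("query (~p us): ~s", [DurationUs, SQL])
    end,
    ok;
log_query(_Event) ->
    ok.
```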
Combined with OpenTelemetry HTTP spans, you get end-to-end visibility from the HTTP request through the database query and back.
Full sys.config example
[
{nova, [
{cowboy_configuration, #{
port => 8080,
stream_handlers => [otel_nova_stream_h, cowboy_stream_h]
}}
]},
{kura, [{log, true}]},
{opentelemetry, [
{span_processor, batch},
{traces_exporter, {opentelemetry_exporter, #{
protocol => http_protobuf,
endpoints => [#{host => "localhost", port => 4318, path => "/v1/traces"}]
}}}
]},
{opentelemetry_experimental, [
{readers, [
#{module => otel_metric_reader,
config => #{
export_interval_ms => 5000,
exporter => {otel_nova_prom_exporter, #{}}
}}
]}
]},
{opentelemetry_exporter, [
{otlp_protocol, http_protobuf},
{otlp_endpoint, "http://localhost:4318"}
]}
].
Verifying it works
Make some requests:
curl http://localhost:8080/api/posts
curl -X POST -H "Content-Type: application/json" \
-d '{"title":"Test","body":"Hello"}' http://localhost:8080/api/posts
Check the Prometheus endpoint:
curl http://localhost:9464/metrics
You should see output like:
# HELP http_server_request_duration_seconds Duration of HTTP server requests
# TYPE http_server_request_duration_seconds histogram
http_server_request_duration_seconds_bucket{method="GET",...,le="0.005"} 1
...
For traces, check your configured backend (Tempo, Jaeger, etc.).
How it works under the hood
The otel_nova_stream_h stream handler sits in Cowboy's stream pipeline. When a request arrives it:
- Extracts trace context from the `traceparent` header
- Creates a server span named `HTTP <method>`
- Sets request attributes (method, path, scheme, host, port, peer address, user agent)
- Increments the active requests counter
When the request terminates it:
- Sets the response status code attribute
- Marks the span as error if status >= 500
- Ends the span
- Records duration, request body size, and response body size metrics
- Decrements the active requests counter
Running with a full observability stack
The nova_otel_demo repository has a complete example with Docker Compose including:
- OpenTelemetry Collector — receives traces and metrics via OTLP
- Grafana Tempo — stores and queries traces
- Grafana Mimir — stores Prometheus metrics
- Grafana — dashboards and trace exploration
Clone it and run docker-compose up from the docker/ directory.
Next, let's build custom plugins and set up CORS for our API.
Custom Plugins and CORS
In the Plugins chapter we saw how Nova's built-in plugins work. Now let's build custom plugins and set up CORS for our blog API.
The nova_plugin behaviour
All nova_plugin callbacks are optional at the behaviour level, but a plugin must export the callback for each phase it is registered in: a pre_request plugin must export pre_request/4, and a post_request plugin must export post_request/4.
Request callbacks
-callback pre_request(Req, Env, Options, State) ->
{ok, Req, State} | %% Continue to the next plugin
{break, Req, State} | %% Skip remaining plugins, go to controller
{stop, Req, State} | %% Stop entirely, plugin handles the response
{error, Reason}. %% Trigger a 500 error
-callback post_request(Req, Env, Options, State) ->
{ok, Req, State} |
{break, Req, State} |
{stop, Req, State} |
{error, Reason}.
-callback plugin_info() ->
#{title := binary(), version := binary(), url := binary(),
authors := [binary()], description := binary(),
options => [{atom(), binary()}]}.
Lifecycle callbacks: init/0 and stop/1
Two optional callbacks manage global, long-lived state that persists across requests:
-callback init() -> State :: any().
-callback stop(State :: any()) -> ok.
init/0 is called once when the plugin is loaded. The state it returns is passed as the State argument to every pre_request/4 and post_request/4 call. stop/1 is called when the application shuts down and receives the current state for cleanup.
This is useful when a plugin needs a long-lived resource — an ETS table, a connection pool reference, or a background process:
-module(blog_stats_plugin).
-behaviour(nova_plugin).
-export([init/0,
stop/1,
pre_request/4,
post_request/4,
plugin_info/0]).
init() ->
Tab = ets:new(request_stats, [public, set]),
ets:insert(Tab, {total_requests, 0}),
#{table => Tab}.
stop(#{table := Tab}) ->
ets:delete(Tab),
ok.
pre_request(Req, _Env, _Options, #{table := Tab} = State) ->
ets:update_counter(Tab, total_requests, 1),
{ok, Req, State}.
post_request(Req, _Env, _Options, State) ->
{ok, Req, State}.
plugin_info() ->
#{title => <<"blog_stats_plugin">>,
version => <<"1.0.0">>,
url => <<"https://github.com/novaframework/nova">>,
authors => [<<"Blog">>],
description => <<"Tracks total request count in ETS">>}.
Without init/0, the plugin state starts as undefined. Without stop/1, no cleanup runs on shutdown.
Example: Request logger
A plugin that logs every request with method, path, and response time.
Create src/plugins/blog_logger_plugin.erl:
-module(blog_logger_plugin).
-behaviour(nova_plugin).
-include_lib("kernel/include/logger.hrl").
-export([pre_request/4,
post_request/4,
plugin_info/0]).
pre_request(Req, _Env, _Options, State) ->
StartTime = erlang:monotonic_time(millisecond),
{ok, Req#{start_time => StartTime}, State}.
post_request(Req, _Env, _Options, State) ->
StartTime = maps:get(start_time, Req, 0),
Duration = erlang:monotonic_time(millisecond) - StartTime,
Method = cowboy_req:method(Req),
Path = cowboy_req:path(Req),
?LOG_INFO("~s ~s completed in ~pms", [Method, Path, Duration]),
{ok, Req, State}.
plugin_info() ->
#{title => <<"blog_logger_plugin">>,
version => <<"1.0.0">>,
url => <<"https://github.com/novaframework/nova">>,
authors => [<<"Blog">>],
description => <<"Logs request method, path and duration">>}.
Register it as both pre-request and post-request in sys.config:
{plugins, [
{pre_request, nova_request_plugin, #{decode_json_body => true,
read_urlencoded_body => true}},
{pre_request, blog_logger_plugin, #{}},
{post_request, blog_logger_plugin, #{}}
]}
Output:
[info] GET /api/posts completed in 3ms
[info] POST /api/posts completed in 12ms
Example: Rate limiter
A plugin that limits requests per IP address using ETS:
-module(blog_rate_limit_plugin).
-behaviour(nova_plugin).
-export([pre_request/4,
post_request/4,
plugin_info/0]).
pre_request(Req, _Env, Options, State) ->
MaxRequests = maps:get(max_requests, Options, 100),
WindowMs = maps:get(window_ms, Options, 60000),
{IP, _Port} = cowboy_req:peer(Req),
Key = {rate_limit, IP},
Now = erlang:monotonic_time(millisecond),
case ets:lookup(blog_rate_limits, Key) of
[{Key, Count, WindowStart}] when Now - WindowStart < WindowMs ->
if Count >= MaxRequests ->
Reply = cowboy_req:reply(429,
#{<<"content-type">> => <<"application/json">>},
<<"{\"error\":\"too many requests\"}">>,
Req),
{stop, Reply, State};
true ->
ets:update_element(blog_rate_limits, Key, {2, Count + 1}),
{ok, Req, State}
end;
_ ->
ets:insert(blog_rate_limits, {Key, 1, Now}),
{ok, Req, State}
end.
post_request(Req, _Env, _Options, State) ->
{ok, Req, State}.
plugin_info() ->
#{title => <<"blog_rate_limit_plugin">>,
version => <<"1.0.0">>,
url => <<"https://github.com/novaframework/nova">>,
authors => [<<"Blog">>],
description => <<"Simple IP-based rate limiting">>,
options => [{max_requests, <<"Max requests per window">>},
{window_ms, <<"Window duration in milliseconds">>}]}.
Create the ETS table on application start in src/blog_app.erl:
start(_StartType, _StartArgs) ->
ets:new(blog_rate_limits, [named_table, public, set]),
blog_sup:start_link().
When the limit is exceeded, the plugin returns {stop, Reply, State} — a 429 response is sent and the controller is never called.
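One detail the rate limiter leaves out is cleanup: entries for IPs that stop sending requests stay in the ETS table forever. A periodic sweep could remove expired windows (the module name is illustrative; schedule it with e.g. `timer:apply_interval/4` from blog_app):

```erlang
%% Sketch: delete rate-limit entries whose window has expired.
-module(blog_rate_limit_cleanup).
-export([sweep/1]).

sweep(WindowMs) ->
    Cutoff = erlang:monotonic_time(millisecond) - WindowMs,
    %% Match objects {Key, Count, WindowStart} where WindowStart < Cutoff
    ets:select_delete(blog_rate_limits,
                      [{{'$1', '$2', '$3'}, [{'<', '$3', Cutoff}], [true]}]).
```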
CORS
If your API is consumed by a frontend on a different domain, the browser blocks requests unless your server sends the right CORS (Cross-Origin Resource Sharing) headers. Nova includes a CORS plugin.
Using nova_cors_plugin
Add it to your plugin configuration:
{plugins, [
{pre_request, nova_cors_plugin, #{allow_origins => <<"*">>}},
{pre_request, nova_request_plugin, #{decode_json_body => true}}
]}
Using <<"*">> allows requests from any origin. For production, restrict this to your frontend's domain:
{pre_request, nova_cors_plugin, #{allow_origins => <<"https://myblog.com">>}}
The plugin adds CORS headers to every response and handles preflight OPTIONS requests automatically.
Per-route CORS
Apply CORS only to API routes:
routes(_Environment) ->
[
%% API routes with CORS
#{prefix => "/api",
plugins => [
{pre_request, nova_cors_plugin, #{allow_origins => <<"https://myblog.com">>}},
{pre_request, nova_request_plugin, #{decode_json_body => true}}
],
routes => [
{"/posts", fun blog_posts_controller:list/1, #{methods => [get]}},
{"/posts", fun blog_posts_controller:create/1, #{methods => [post]}},
{"/posts/:id", fun blog_posts_controller:show/1, #{methods => [get]}},
{"/posts/:id", fun blog_posts_controller:update/1, #{methods => [put]}},
{"/posts/:id", fun blog_posts_controller:delete/1, #{methods => [delete]}}
]
},
%% HTML routes without CORS
#{prefix => "",
plugins => [
{pre_request, nova_request_plugin, #{read_urlencoded_body => true}}
],
routes => [
{"/login", fun blog_main_controller:login/1, #{methods => [get, post]}}
]
}
].
When plugins is set on a route group, it overrides the global plugin configuration for those routes.
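A consequence of this override behavior is that a route group can opt out of all global plugins by supplying an empty list. A minimal sketch (the `/internal` prefix and `blog_health_controller` are illustrative):

```erlang
%% This group runs with no pre_request plugins at all,
%% because its plugins list replaces the global configuration.
#{prefix => "/internal",
  plugins => [],
  routes => [
      {"/ping", fun blog_health_controller:ping/1, #{methods => [get]}}
  ]}
```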
Custom CORS plugin
The built-in plugin hardcodes Allow-Headers and Allow-Methods to *. For more control:
-module(blog_cors_plugin).
-behaviour(nova_plugin).
-export([pre_request/4,
post_request/4,
plugin_info/0]).
pre_request(Req, _Env, Options, State) ->
Origins = maps:get(allow_origins, Options, <<"*">>),
Methods = maps:get(allow_methods, Options, <<"GET, POST, PUT, DELETE, OPTIONS">>),
Headers = maps:get(allow_headers, Options, <<"Content-Type, Authorization">>),
MaxAge = maps:get(max_age, Options, <<"86400">>),
Req1 = cowboy_req:set_resp_header(<<"access-control-allow-origin">>, Origins, Req),
Req2 = cowboy_req:set_resp_header(<<"access-control-allow-methods">>, Methods, Req1),
Req3 = cowboy_req:set_resp_header(<<"access-control-allow-headers">>, Headers, Req2),
Req4 = cowboy_req:set_resp_header(<<"access-control-max-age">>, MaxAge, Req3),
Req5 = case maps:get(allow_credentials, Options, false) of
true ->
cowboy_req:set_resp_header(
<<"access-control-allow-credentials">>, <<"true">>, Req4);
false ->
Req4
end,
case cowboy_req:method(Req5) of
<<"OPTIONS">> ->
Reply = cowboy_req:reply(204, Req5),
{stop, Reply, State};
_ ->
{ok, Req5, State}
end.
post_request(Req, _Env, _Options, State) ->
{ok, Req, State}.
plugin_info() ->
#{title => <<"blog_cors_plugin">>,
version => <<"1.0.0">>,
url => <<"https://github.com/novaframework/nova">>,
authors => [<<"Blog">>],
description => <<"Configurable CORS plugin">>,
options => [{allow_origins, <<"Allowed origins">>},
{allow_methods, <<"Allowed HTTP methods">>},
{allow_headers, <<"Allowed headers">>},
{max_age, <<"Preflight cache duration">>},
{allow_credentials, <<"Allow credentials">>}]}.
Configure with all options:
{pre_request, blog_cors_plugin, #{
allow_origins => <<"https://myblog.com">>,
allow_methods => <<"GET, POST, PUT, DELETE">>,
allow_headers => <<"Content-Type, Authorization, X-Request-ID">>,
max_age => <<"3600">>,
allow_credentials => true
}}
Testing CORS
Verify headers with curl:
# Check preflight response
curl -v -X OPTIONS localhost:8080/api/posts \
-H "Origin: https://myblog.com" \
-H "Access-Control-Request-Method: POST"
# Check actual response headers
curl -v localhost:8080/api/posts \
-H "Origin: https://myblog.com"
You should see the Access-Control-Allow-Origin header in the response.
Plugin return values
| Return | Effect |
|---|---|
| `{ok, Req, State}` | Continue to the next plugin or controller |
| `{break, Req, State}` | Skip remaining plugins in this phase, go to controller |
| `{stop, Req, State}` | Stop everything — plugin must have already sent a response |
| `{error, Reason}` | Trigger a 500 error page |
Next, let's cover security best practices before preparing for deployment.
Security
This chapter describes how common web vulnerabilities can occur in a Nova application and the secure coding practices to prevent them. Nova provides built-in security plugins, but they must be enabled and configured correctly.
For additional Erlang-specific guidance, see the ERLEF Secure Coding and Deployment Hardening Guidelines.
Remote code execution
Remote code execution (RCE) is the most severe class of vulnerability — it gives an attacker full access to your production server.
Unsafe functions
Never pass untrusted input to any of the following:
%% Code evaluation
erl_eval:exprs(UserInput, Bindings)
erl_eval:expr(UserInput, Bindings)
%% OS command execution
os:cmd(UserInput)
%% Deserialization
erlang:binary_to_term(UserInput)
OS commands
os:cmd/1 passes its argument through the system shell, making it trivial to inject arbitrary commands.
%% VULNERABLE — shell injection
os:cmd("convert " ++ UserFilename ++ " output.png")
%% SAFE — use open_port with explicit argv (no shell)
open_port({spawn_executable, "/usr/bin/convert"},
[{args, [UserFilename, "output.png"]}, exit_status])
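To actually capture the command's output, receive the data and exit-status messages from the port. A minimal sketch, with error handling kept deliberately simple:

```erlang
%% Run an executable with explicit arguments (no shell involved)
%% and collect its stdout plus exit status.
run_cmd(Executable, Args) ->
    Port = open_port({spawn_executable, Executable},
                     [{args, Args}, exit_status, binary, stderr_to_stdout]),
    collect(Port, <<>>).

collect(Port, Acc) ->
    receive
        {Port, {data, Data}} ->
            collect(Port, <<Acc/binary, Data/binary>>);
        {Port, {exit_status, Status}} ->
            {Status, Acc}
    after 30000 ->
        %% Give up after 30s; close the port so the OS process is reaped.
        port_close(Port),
        {error, timeout}
    end.
```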
Binary deserialization
binary_to_term/2 with the [safe] option only prevents creation of new atoms. It does not prevent construction of executable terms — an attacker can craft a binary that triggers arbitrary function calls when deserialized.
%% DANGEROUS — even with [safe], executable terms can be created
erlang:binary_to_term(UserInput, [safe])
If you need to exchange structured data with clients, use JSON. Nova uses thoas by default.
Atom exhaustion
Atoms are never garbage collected. Converting untrusted input to atoms will eventually crash the VM.
%% VULNERABLE
binary_to_atom(UserInput, utf8)
list_to_atom(UserInput)
%% SAFE — only succeeds if the atom already exists
binary_to_existing_atom(UserInput, utf8)
list_to_existing_atom(UserInput)
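Note that `binary_to_existing_atom/2` throws `badarg` when the atom does not already exist, so in request-handling code you will usually wrap it to get a tagged result. A small helper sketch:

```erlang
%% Returns {ok, Atom} when the atom already exists in the VM,
%% error otherwise — untrusted input can never create new atoms.
to_existing_atom(Bin) when is_binary(Bin) ->
    try
        {ok, binary_to_existing_atom(Bin, utf8)}
    catch
        error:badarg -> error
    end.
```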
SQL injection
SQL injection enables an attacker to read, modify, or delete arbitrary data in your database — and in some cases execute system commands.
Parameterized queries with Kura
Kura uses parameterized queries by default, which prevents injection:
%% SAFE — parameterized automatically
Q = kura_query:from(fruit),
Q1 = kura_query:where(Q, [{quantity, '>=', MinQ}, {secret, false}]),
{ok, Fruits} = my_repo:all(Q1).
%% Generated: SELECT * FROM "fruits" WHERE "quantity" >= $1 AND "secret" = $2
Raw SQL
When using pgo directly, always pass parameters as a list:
%% VULNERABLE — direct interpolation
pgo:query("SELECT * FROM fruits WHERE quantity >= " ++ MinQ)
%% SAFE — parameterized
pgo:query("SELECT * FROM fruits WHERE quantity >= $1", [MinQ])
Mass assignment
Kura changesets require explicit field whitelisting via cast/3. Including sensitive fields like is_admin exposes privilege escalation:
%% VULNERABLE — user can escalate to admin
registration_changeset(User, Params) ->
kura_changeset:cast(User, Params, [name, email, password, is_admin]).
%% SAFE — is_admin cannot be set from user input
registration_changeset(User, Params) ->
kura_changeset:cast(User, Params, [name, email, password]).
Server-side request forgery (SSRF)
SSRF occurs when your application makes HTTP requests using URLs derived from untrusted input. An attacker can route requests to internal services — cloud metadata endpoints (AWS 169.254.169.254), databases, caches, or unpatched microservices.
%% VULNERABLE — user controls the destination
handle(#{json := #{<<"url">> := Url}} = Req) ->
{ok, _Status, _Headers, Body} = hackney:get(Url),
{json, 200, #{}, #{<<"result">> => Body}, Req}.
SSRF has been the root cause of major data breaches, including the 2019 Capital One breach where an attacker exploited SSRF to access AWS metadata credentials.
Mitigations:
- Avoid making HTTP requests based on user input whenever possible.
- Validate URLs against an allowlist of permitted hosts and schemes.
- Block requests to private IP ranges (`10.0.0.0/8`, `172.16.0.0/12`, `192.168.0.0/16`, `169.254.0.0/16`, `127.0.0.0/8`).
allowed_hosts() -> [<<"api.example.com">>, <<"cdn.example.com">>].
validate_url(Url) ->
#{host := Host, scheme := Scheme} = uri_string:parse(Url),
case {Scheme, lists:member(Host, allowed_hosts())} of
{<<"https">>, true} -> ok;
_ -> {error, forbidden}
end.
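The allowlist above operates on hostnames. As defence in depth you can also resolve the host and reject private ranges before making the request. A rough sketch (IPv4 only; note that DNS rebinding can still race a check like this, so resolve once and connect to the resolved address where possible):

```erlang
%% Resolve the host and reject addresses in private/link-local ranges.
safe_host(Host) when is_binary(Host) ->
    case inet:getaddr(binary_to_list(Host), inet) of
        {ok, Addr} ->
            case is_private(Addr) of
                true  -> {error, forbidden};
                false -> ok
            end;
        {error, _} ->
            {error, forbidden}
    end.

is_private({10, _, _, _}) -> true;
is_private({172, B, _, _}) when B >= 16, B =< 31 -> true;
is_private({192, 168, _, _}) -> true;
is_private({169, 254, _, _}) -> true;
is_private({127, _, _, _}) -> true;
is_private(_) -> false.
```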
Cross-site scripting (XSS)
XSS allows an attacker to execute arbitrary JavaScript in a victim's browser, stealing session cookies, credentials, or performing actions on their behalf.
Default protection
Nova uses ErlyDTL for HTML templates. ErlyDTL auto-escapes all variables by default — `<script>alert(1)</script>` is rendered as the harmless text `&lt;script&gt;alert(1)&lt;/script&gt;` rather than as executable markup.
Dangerous patterns
%% VULNERABLE — raw HTML response with user input
handle(#{parsed_qs := #{<<"name">> := Name}} = Req) ->
Html = <<"<html><body>Hello, ", Name/binary, "</body></html>">>,
{ok, Req2} = cowboy_req:reply(200,
#{<<"content-type">> => <<"text/html">>}, Html, Req),
{ok, Req2}.
%% VULNERABLE — user-controlled content-type
handle(#{parsed_qs := #{<<"type">> := ContentType}} = Req) ->
{ok, Req2} = cowboy_req:reply(200,
#{<<"content-type">> => ContentType}, Body, Req),
{ok, Req2}.
The first example bypasses ErlyDTL's escaping by constructing HTML directly. The second lets an attacker set text/html as the content type for data that shouldn't be rendered as HTML.
Safe patterns
%% SAFE — ErlyDTL template (auto-escaping)
handle(#{parsed_qs := #{<<"name">> := Name}} = Req) ->
{ok, [{name, Name}], Req}.
%% Template: <body>Hello, {{ name }}</body>
%% SAFE — JSON response (no HTML interpretation)
handle(#{parsed_qs := #{<<"name">> := Name}} = Req) ->
{json, 200, #{}, #{<<"greeting">> => <<"Hello, ", Name/binary>>}, Req}.
Content Security Policy
CSP tells browsers which sources of scripts, styles, and other resources are permitted. Enable it via nova_secure_headers_plugin:
{pre_request, nova_secure_headers_plugin, #{
csp => <<"default-src 'self'; script-src 'self'; style-src 'self'">>
}}
File upload XSS
If your application serves user-uploaded files, an attacker can upload an HTML file containing <script> tags. If served with text/html, the script executes in the context of your domain.
Mitigations:
- Validate file MIME types against an allowlist.
- Serve uploads from a separate domain or with `Content-Disposition: attachment`.
- Never let users control the `Content-Type` response header.
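Forcing downloads instead of inline rendering means setting the header when serving the file. A sketch using `cowboy_req` (the handler shape is illustrative, and `Filename` must already be sanitized — quotes or newlines in it would allow header injection):

```erlang
%% Serve an uploaded file as a download: the browser saves it
%% instead of rendering it, so embedded HTML never executes.
serve_upload(Filename, Bytes, Req) ->
    Req1 = cowboy_req:set_resp_header(
               <<"content-disposition">>,
               <<"attachment; filename=\"", Filename/binary, "\"">>, Req),
    cowboy_req:reply(200,
        #{<<"content-type">> => <<"application/octet-stream">>},
        Bytes, Req1).
```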
Cross-site request forgery (CSRF)
CSRF tricks a user's browser into making state-changing requests to your application using their existing session. For example, a malicious site could include a form that POSTs to your /transfer endpoint — the browser automatically attaches the victim's session cookie.
Nova provides nova_csrf_plugin, which implements the Synchronizer Token Pattern.
Enabling CSRF protection
{plugins, [
{pre_request, nova_request_plugin, #{read_urlencoded_body => true}},
{pre_request, nova_csrf_plugin, #{}}
]}
nova_csrf_plugin must run after nova_request_plugin so that form parameters are parsed before CSRF validation.
How it works
- For safe methods (GET, HEAD, OPTIONS), the plugin generates a token stored in the session.
- For unsafe methods (POST, PUT, PATCH, DELETE), it validates the submitted token against the session.
- Tokens are compared using `crypto:hash_equals/2` — constant-time comparison prevents timing attacks.
- The token is automatically available in ErlyDTL templates as `csrf_token`.
Including the token in forms
<form method="post" action="/update">
<input type="hidden" name="_csrf_token" value="{{ csrf_token }}" />
<button type="submit">Update</button>
</form>
Including the token in AJAX requests
const token = document.querySelector('meta[name="csrf-token"]').content;
fetch('/api/update', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'x-csrf-token': token
},
body: JSON.stringify(data)
});
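The fetch example reads the token from a meta tag, so the layout template must render one — for instance:

```html
<!-- In the layout <head>; csrf_token is the same template variable
     used in the hidden form field -->
<meta name="csrf-token" content="{{ csrf_token }}" />
```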
GET requests must not change state
Never allow state-changing operations via GET. Query parameters cannot be protected by CSRF tokens in the same way as POST bodies.
%% VULNERABLE — state change via GET
{"/users/update_bio", fun user_controller:update_bio/1, #{methods => [get]}}
%% Attacker: <img src="https://yourapp.com/users/update_bio?bio=Hacked" />
%% SAFE — POST only
{"/users/update_bio", fun user_controller:update_bio/1, #{methods => [post]}}
Excluding API routes
For API endpoints that use token-based authentication (not cookies), exclude them from CSRF validation:
{pre_request, nova_csrf_plugin, #{
excluded_paths => [<<"/api/">>]
}}
CORS misconfiguration
Cross-Origin Resource Sharing (CORS) controls which domains can make requests to your API. An overly permissive policy allows malicious sites to read sensitive data from authenticated users.
This is covered in detail in Custom Plugins & CORS. The key rule is: never use wildcard origins in production if your endpoints use cookie-based authentication.
%% DANGEROUS — any site can read authenticated responses
{pre_request, nova_cors_plugin, #{allow_origins => <<"*">>}}
%% SAFE — explicit allowlist
{pre_request, nova_cors_plugin, #{
allow_origins => <<"https://app.example.com">>
}}
Broken access control
Broken access control means an attacker can perform actions they shouldn't — viewing other users' data, escalating privileges, or bypassing authentication.
Derive authorization from the session
Never trust client-supplied identifiers. Always use the server-side session or auth_data from the security handler:
%% VULNERABLE — trusts client-supplied user ID
update_profile(#{json := #{<<"user_id">> := UserId, <<"bio">> := Bio}} = Req) ->
{ok, _} = my_repo:update(user, UserId, #{bio => Bio}),
{json, 200, #{}, #{<<"status">> => <<"ok">>}, Req}.
%% SAFE — user ID from authenticated session
update_profile(#{auth_data := #{id := UserId}, json := #{<<"bio">> := Bio}} = Req) ->
{ok, _} = my_repo:update(user, UserId, #{bio => Bio}),
{json, 200, #{}, #{<<"status">> => <<"ok">>}, Req}.
Resource ownership
Check ownership in the controller, as shown in the Authorization chapter:
update(#{bindings := #{<<"id">> := Id}, json := Params,
auth_data := #{id := UserId}}) ->
case my_repo:get(post, binary_to_integer(Id)) of
{ok, #{user_id := UserId} = Post} ->
CS = post:changeset(Post, Params),
case my_repo:update(CS) of
{ok, Updated} -> {json, post_to_json(Updated)};
{error, CS1} -> {json, 422, #{}, #{errors => changeset_errors(CS1)}}
end;
{ok, _Post} ->
{status, 403};
{error, not_found} ->
{status, 404}
end.
Session security
Nova's session system is covered in the Sessions chapter. Here are the security-critical aspects.
Secure cookie defaults
Nova sets these cookie attributes by default:
| Attribute | Default | Purpose |
|---|---|---|
| `http_only` | `true` | Prevents JavaScript access (mitigates XSS) |
| `secure` | `true` | Cookie only sent over HTTPS |
| `same_site` | `lax` | Browser-level CSRF mitigation |
| `path` | `/` | Cookie applies to all paths |
Do not weaken these defaults unless you have a specific reason and understand the implications.
Session fixation
After a user authenticates, rotate the session ID to prevent session fixation attacks — where an attacker sets a known session ID before the victim logs in:
login(#{json := #{<<"email">> := Email, <<"password">> := Pass}} = Req) ->
case blog_accounts:authenticate(Email, Pass) of
{ok, User} ->
ok = nova_session:rotate(Req),
ok = nova_session:set(Req, <<"user_id">>, maps:get(id, User)),
{json, 200, #{}, #{<<"status">> => <<"ok">>}, Req};
error ->
{json, 401, #{}, #{<<"error">> => <<"invalid credentials">>}, Req}
end.
Session expiration
Configure appropriate timeouts:
{nova, [
{session_max_age, 86400}, %% 24 hours absolute maximum
{session_idle_timeout, 3600} %% 1 hour idle timeout
]}
Expired and idle sessions are automatically cleaned up every 60 seconds.
Sensitive data
Avoid storing sensitive data (passwords, API keys, credit card numbers) in sessions. If a process handles secrets, mark it:
process_flag(sensitive, true)
This prevents the process state from appearing in crash dumps.
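For example, a gen_server that holds an API key could set the flag in its `init/1` callback (a sketch; the state shape is illustrative):

```erlang
%% Once the sensitive flag is set, the process state and messages
%% are hidden from crash dumps, tracing and process_info.
init([ApiKey]) ->
    process_flag(sensitive, true),
    {ok, #{api_key => ApiKey}}.
```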
Rate limiting
Rate limiting protects against brute-force attacks and abuse. Nova provides nova_rate_limit_plugin.
Basic configuration
{pre_request, nova_rate_limit_plugin, #{
max_requests => 100,
window_ms => 60000 %% 100 requests per minute
}}
Targeted rate limiting for sensitive endpoints
Apply stricter limits to authentication endpoints:
{pre_request, nova_rate_limit_plugin, #{
max_requests => 5,
window_ms => 300000, %% 5 attempts per 5 minutes
paths => [<<"/login">>, <<"/api/auth">>]
}}
Custom key function
By default, rate limiting is per client IP. For API token-based limiting:
{pre_request, nova_rate_limit_plugin, #{
max_requests => 1000,
window_ms => 3600000,
key_fun => fun(Req) ->
case cowboy_req:header(<<"authorization">>, Req) of
undefined -> cowboy_req:peer(Req);
Token -> Token
end
end
}}
When a client exceeds the limit, Nova returns 429 Too Many Requests with a Retry-After header.
HTTPS and transport security
Force HTTPS
Use nova_force_ssl_plugin to redirect all HTTP traffic to HTTPS:
{pre_request, nova_force_ssl_plugin, #{
excluded_paths => [<<"/.well-known/">>, <<"/health">>]
}}
HSTS
HTTP Strict Transport Security tells browsers to always use HTTPS for your domain. Enable it via nova_secure_headers_plugin:
{pre_request, nova_secure_headers_plugin, #{
hsts => true,
hsts_max_age => 31536000, %% 1 year
hsts_include_subdomains => true
}}
TLS configuration
Configure Cowboy with strong TLS settings:
{cowboy_configuration, #{
use_ssl => true,
ssl_port => 443,
ssl_options => #{
certfile => "/path/to/cert.pem",
keyfile => "/path/to/key.pem",
versions => ['tlsv1.3', 'tlsv1.2'],
honor_cipher_order => true
}
}}
Secure headers
Nova's nova_secure_headers_plugin sets defensive HTTP headers on every response:
| Header | Default | Protection |
|---|---|---|
| `x-frame-options` | `DENY` | Clickjacking |
| `x-content-type-options` | `nosniff` | MIME sniffing |
| `x-xss-protection` | `1; mode=block` | Reflected XSS (legacy browsers) |
| `referrer-policy` | `strict-origin-when-cross-origin` | Information leakage via Referer |
| `permissions-policy` | `geolocation=(), camera=(), microphone=()` | Browser feature restriction |
Enable with all protections:
{pre_request, nova_secure_headers_plugin, #{
hsts => true,
csp => <<"default-src 'self'; script-src 'self'; style-src 'self'; img-src 'self' data:; font-src 'self'">>
}}
Error handling and information leakage
In development, detailed error pages with stack traces help you debug. In production, they help attackers. See the Error Handling chapter for the full picture — the security essentials are:
%% Production — generic error pages, no stacktraces
{nova, [
{environment, prod},
{use_stacktrace, false},
{render_error_pages, true}
]}
Never log passwords, tokens, or PII:
%% GOOD — structured, no secrets
logger:info(#{msg => "user_login", user_id => UserId}).
%% BAD — leaks credentials
logger:info("Login: ~p with password ~p", [Email, Password]).
File serving
Nova's nova_file_controller blocks path traversal by rejecting .. and . segments in file paths. Configure static file routes securely:
{"/static/[...]", "priv/static", #{
list_dir => false, %% Never expose directory listings
index_files => ["index.html"]
}}
When accepting file uploads:
- Validate MIME types against an allowlist — never trust the client-supplied
Content-Type. - Limit file sizes via Cowboy's body reading options.
- Store uploads outside the web root to prevent direct execution.
- Generate random filenames to prevent path traversal via crafted filenames.
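Random server-side filenames can be generated from crypto-strength bytes, keeping only a validated extension. A sketch (the allowed-extension list is illustrative):

```erlang
%% Generate a random hex filename; the stored name never contains
%% any part of the client-supplied path.
random_filename(Ext) ->
    case lists:member(Ext, [<<"png">>, <<"jpg">>, <<"pdf">>]) of
        true ->
            Rand = binary:encode_hex(crypto:strong_rand_bytes(16)),
            {ok, <<Rand/binary, ".", Ext/binary>>};
        false ->
            {error, invalid_extension}
    end.
```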
WebSocket security
Nova WebSocket connections go through the same plugin and security handler chain as HTTP requests — so authentication works the same way.
Authenticate connections
#{prefix => "/ws",
security => fun blog_auth:session_auth/1,
routes => [
{"/chat", {blog_ws_handler, []}, #{protocol => ws}}
]}
Validate incoming messages
All incoming WebSocket messages are untrusted input. Validate and size-limit them:
websocket_handle({text, RawMsg}, State) ->
case thoas:decode(RawMsg) of
{ok, #{<<"type">> := <<"chat">>, <<"body">> := Body}}
when is_binary(Body), byte_size(Body) =< 4096 ->
handle_chat(Body, State);
_ ->
{reply, {text, <<"{\"error\":\"invalid message\"}">>}, State}
end.
Recommended plugin order
The order of plugins matters. Here is a recommended configuration for production:
{plugins, [
%% 1. Force HTTPS first
{pre_request, nova_force_ssl_plugin, #{
excluded_paths => [<<"/health">>]
}},
%% 2. Security headers on every response
{pre_request, nova_secure_headers_plugin, #{
hsts => true,
csp => <<"default-src 'self'">>
}},
%% 3. Rate limiting before expensive processing
{pre_request, nova_rate_limit_plugin, #{
max_requests => 100,
window_ms => 60000
}},
%% 4. Correlation ID for request tracing
{pre_request, nova_correlation_plugin, #{}},
%% 5. Parse request body
{pre_request, nova_request_plugin, #{
decode_json_body => true,
read_urlencoded_body => true,
parse_qs => true
}},
%% 6. CSRF validation (must be after request_plugin)
{pre_request, nova_csrf_plugin, #{
excluded_paths => [<<"/api/">>]
}},
%% 7. CORS for API routes
{pre_request, nova_cors_plugin, #{
allow_origins => <<"https://app.example.com">>
}}
]}
Production security checklist
Before deploying a Nova application:
- `environment` set to `prod`, `use_stacktrace` set to `false`
- HTTPS enforced via `nova_force_ssl_plugin`
- HSTS enabled via `nova_secure_headers_plugin`
- CSP configured for your application's needs
- CSRF protection enabled for all cookie-authenticated routes
- CORS origins explicitly allowlisted (no wildcards)
- Rate limiting on sensitive endpoints (login, registration, password reset)
- Session cookies: `http_only`, `secure`, `same_site` all set
- Session rotation on authentication (`nova_session:rotate/1`)
- Session timeouts configured (`session_max_age`, `session_idle_timeout`)
- File uploads validated (MIME type, size, filename)
- Directory listing disabled for static file serving
- No `os:cmd/1` — use `open_port/2` with explicit args
- No `binary_to_term/1` on untrusted input
- No `binary_to_atom/2` on untrusted input — use `binary_to_existing_atom/2`
- All database queries parameterized (Kura or explicit `$N` placeholders)
- Changeset `cast/3` fields explicitly whitelisted
- Authorization derived from server-side session, not client input
- WebSocket messages validated and size-limited
- Sensitive data not logged or stored in sessions
- Custom error pages configured (no stacktrace leakage)
- Erlang distribution secured or disabled if not needed
With security covered, let's look at deployment.
Deployment
In development we use rebar3 nova serve with hot-reloading and debug logging. For production we need a proper OTP release — a self-contained package with your application, all dependencies, and optionally the Erlang runtime.
Release basics
Rebar3 uses relx to build releases. The generated rebar.config includes a release configuration:
{relx, [{release, {blog, "0.1.0"},
[blog,
sasl]},
{dev_mode, true},
{include_erts, false},
{extended_start_script, true},
{sys_config_src, "config/dev_sys.config.src"},
{vm_args_src, "config/vm.args.src"}
]}.
This is the development release config — dev_mode symlinks to source, and ERTS is not included.
Production profile
Override settings for production using a rebar3 profile:
{profiles, [
{prod, [
{relx, [
{dev_mode, false},
{include_erts, true},
{sys_config_src, "config/prod_sys.config.src"}
]}
]}
]}.
Key differences:
- `dev_mode` is `false` — files are copied into the release
- `include_erts` is `true` — the Erlang runtime is bundled
- Uses `prod_sys.config.src` with production settings
Production configuration
config/prod_sys.config.src:
[
{kernel, [
{logger_level, info},
{logger, [
{handler, default, logger_std_h,
#{config => #{file => "log/erlang.log"},
formatter => {flatlog, #{
map_depth => 3,
term_depth => 50,
colored => false,
template => ["[", level, "] ", msg, "\n"]
}}}}
]}
]},
{nova, [
{use_stacktrace, false},
{environment, prod},
{cowboy_configuration, #{port => 8080}},
{dev_mode, false},
{bootstrap_application, blog},
{plugins, [
{pre_request, nova_request_plugin, #{
decode_json_body => true,
read_urlencoded_body => true
}}
]}
]},
{blog, [
{database, <<"${DB_NAME}">>},
{db_host, <<"${DB_HOST}">>},
{db_user, <<"${DB_USER}">>},
{db_password, <<"${DB_PASSWORD}">>}
]}
].
- Logger level is `info` instead of `debug`
- `use_stacktrace` is `false` — don't leak stack traces to users
- Environment variables use `${VAR}` syntax — rebar3 substitutes these at release build time
VM arguments
config/vm.args.src controls Erlang VM settings. For production:
-name blog@${HOSTNAME}
-setcookie ${RELEASE_COOKIE}
+K true
+A30
+sbwt very_long
+swt very_low
- `-name` instead of `-sname` for full node names (needed for clustering)
- `+sbwt` and `+swt` tune scheduler busy-wait for lower latency
Building and running
Build a production release:
rebar3 as prod release
If you have JSON schemas in priv/schemas/, you can use nova release instead. It automatically regenerates the OpenAPI spec before building:
rebar3 nova release
===> Generated priv/assets/openapi.json
===> Generated priv/assets/swagger.html
===> Release successfully assembled: _build/prod/rel/blog
This ensures your deployed application always ships with up-to-date API documentation. See OpenAPI, Inspection & Audit for details.
Start it:
_build/prod/rel/blog/bin/blog foreground
Or as a daemon:
_build/prod/rel/blog/bin/blog daemon
Other commands:
# Check if the node is running
_build/prod/rel/blog/bin/blog ping
# Attach a remote shell
_build/prod/rel/blog/bin/blog remote_console
# Stop the node
_build/prod/rel/blog/bin/blog stop
Building a tarball
For deployment to another machine:
rebar3 as prod tar
This creates blog-0.1.0.tar.gz. Since ERTS is included, the target server does not need Erlang installed:
# On the server
mkdir -p /opt/blog
tar -xzf blog-0.1.0.tar.gz -C /opt/blog
/opt/blog/bin/blog daemon
SSL/TLS
Configure HTTPS in Nova:
{nova, [
{cowboy_configuration, #{
use_ssl => true,
ssl_port => 8443,
ssl_options => #{
certfile => "/etc/letsencrypt/live/myblog.com/fullchain.pem",
keyfile => "/etc/letsencrypt/live/myblog.com/privkey.pem"
}
}}
]}
Alternatively, put a reverse proxy (Nginx, Caddy) in front and let it handle SSL termination. This is the more common approach.
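A minimal Nginx server block for this setup might look like the following sketch (domain, certificate paths, and the upstream port are placeholders matching the examples above):

```nginx
server {
    listen 443 ssl;
    server_name myblog.com;
    ssl_certificate     /etc/letsencrypt/live/myblog.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myblog.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        # Needed for Nova WebSocket routes
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```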
Systemd service
Run as a system service:
[Unit]
Description=Blog Application
After=network.target postgresql.service
[Service]
Type=forking
User=blog
Group=blog
WorkingDirectory=/opt/blog
ExecStart=/opt/blog/bin/blog daemon
ExecStop=/opt/blog/bin/blog stop
Restart=on-failure
RestartSec=5
Environment=DB_HOST=localhost
Environment=DB_NAME=blog_prod
Environment=DB_USER=blog
Environment=DB_PASSWORD=secret
Environment=RELEASE_COOKIE=my_secret_cookie
[Install]
WantedBy=multi-user.target
sudo systemctl daemon-reload
sudo systemctl enable blog
sudo systemctl start blog
Docker
A multi-stage Dockerfile:
FROM erlang:28 AS builder
WORKDIR /app
COPY . .
RUN rebar3 as prod tar
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y libssl3 libncurses6 && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY --from=builder /app/_build/prod/rel/blog/*.tar.gz .
RUN tar -xzf *.tar.gz && rm *.tar.gz
EXPOSE 8080
CMD ["/app/bin/blog", "foreground"]
Build and run:
docker build -t blog .
docker run -p 8080:8080 \
-e DB_HOST=host.docker.internal \
-e DB_NAME=blog_prod \
-e DB_USER=blog \
-e DB_PASSWORD=secret \
blog
For sub-applications like Nova Admin, add them to your release deps and nova_apps config. They are bundled automatically in the release. See Custom Plugins and CORS for plugin configuration that carries over to production.
Summary
Deploying a Nova application follows standard OTP release practices:
- Configure a production profile in `rebar.config`
- Set up production config with proper logging and secrets
- Build with `rebar3 as prod release` or `rebar3 as prod tar`
- Deploy using systemd, Docker, or any process manager
OTP releases are self-contained — once built, everything you need is in a single directory or archive.
That wraps up the main content. For quick reference, see the Erlang Essentials appendix and the Cheat Sheet.
Erlang Essentials
This appendix is not a full Erlang tutorial. It provides a quick reference for the Erlang concepts used in this book and links to comprehensive learning resources.
Learning resources
- Learn You Some Erlang for Great Good! — The best free online book for learning Erlang from scratch. Covers everything from syntax to OTP.
- Adopting Erlang — Practical guide for teams adopting Erlang, covering development setup, building, and running in production.
- Erlang/OTP Documentation — Official reference documentation.
Installing Erlang and Rebar3
We recommend mise for managing tool versions:
# Install mise (if not already installed)
curl https://mise.run | sh
# Install Erlang and rebar3
mise use erlang@28
mise use rebar@3.23
# Verify
erl -eval 'io:format("~s~n", [erlang:system_info(otp_release)]), halt().' -noshell
rebar3 version
Alternatively, use asdf:
asdf plugin add erlang
asdf plugin add rebar
asdf install erlang 28.0
asdf install rebar 3.23.0
Quick reference
Atoms
Atoms are constants. They start with a lowercase letter or are quoted with single quotes:
ok, error, true, false, undefined
'Content-Type', 'my-atom'
Binaries and strings
Erlang has two string types. Binaries (double quotes with <<>>) are what you will use most:
<<"hello">> %% binary string
"hello" %% list of integers (less common in Nova)
Tuples
Fixed-size containers, often used for tagged return values:
{ok, Value}
{error, not_found}
{json, #{users => []}}
Maps
Key-value data structures. Nova uses maps extensively for requests, responses, and configuration:
%% Creating
#{name => <<"Alice">>, age => 30}
%% Pattern matching
#{name := Name} = Map
%% Updating
Map#{age => 31}
Pattern matching
Erlang's most powerful feature. Used in function heads, case expressions, and assignments:
%% Function clause matching
handle(#{method := <<"GET">>} = Req) -> get_handler(Req);
handle(#{method := <<"POST">>} = Req) -> post_handler(Req).
%% Case expression
case blog_repo:get(post, Id) of
{ok, Post} -> handle_post(Post);
{error, not_found} -> not_found
end.
Lists and list comprehensions
[1, 2, 3]
[Head | Tail] = [1, 2, 3] %% Head = 1, Tail = [2, 3]
%% List comprehension
[X * 2 || X <- [1, 2, 3]] %% [2, 4, 6]
%% With maps
[row_to_map(R) || R <- Rows]
Modules and functions
-module(my_module).
-export([my_function/1]).
my_function(Arg) ->
%% function body
ok.
Anonymous functions (funs)
Used extensively in Nova for route handlers and security functions:
fun my_module:my_function/1 %% Reference to named function
fun(X) -> X + 1 end %% Anonymous function
fun(_) -> {status, 200} end %% Ignore argument
OTP in 5 minutes
Applications
An OTP application is a component with a defined start/stop lifecycle. Your Nova project is an application. It has:
- An `.app.src` file describing metadata and dependencies
- An `_app.erl` module implementing the `application` behaviour
- A `_sup.erl` module implementing the `supervisor` behaviour
Supervisors
Supervisors manage child processes and restart them if they crash. The generated blog_sup.erl is your application's supervisor.
gen_server
A generic server process. Used for stateful workers:
-module(my_server).
-behaviour(gen_server).
-export([start_link/0, init/1, handle_call/3, handle_cast/2, handle_info/2]).
start_link() ->
gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).
init([]) ->
{ok, #{}}. %% Initial state
handle_call(Request, _From, State) ->
{reply, ok, State}.
handle_cast(_Msg, State) ->
{noreply, State}.
handle_info(_Info, State) ->
{noreply, State}.
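Clients interact with it through the `gen_server` API — for example:

```erlang
%% Synchronous call: handled in handle_call/3, which replies ok.
{ok, _Pid} = my_server:start_link(),
ok = gen_server:call(my_server, get_state),
%% Asynchronous cast: handled in handle_cast/2, no reply.
gen_server:cast(my_server, refresh).
```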
Rebar3 basics
rebar3 compile # Compile the project
rebar3 shell # Start an interactive shell
rebar3 eunit # Run EUnit tests
rebar3 ct # Run Common Test suites
rebar3 as prod release # Build a production release
rebar3 as prod tar # Build a release tarball
rebar3 nova serve # Development server with hot-reload
rebar3 nova routes # List all registered routes
Cheat Sheet
Quick reference for Nova's APIs, return values, configuration, and Kura's database layer.
Controller return tuples
| Return | Description |
|---|---|
| `{ok, Variables}` | Render the default template with variables |
| `{ok, Variables, #{view => Name}}` | Render a specific template |
| `{ok, Variables, #{view => Name, status_code => Code}}` | Render template with custom status |
| `{json, Data}` | JSON response (status 200) |
| `{json, StatusCode, Headers, Body}` | JSON response with custom status and headers |
| `{status, StatusCode}` | Bare status code response |
| `{status, StatusCode, Headers, Body}` | Status with headers and body |
| `{redirect, Path}` | HTTP redirect |
| `{sendfile, StatusCode, Headers, {Offset, Length, Path}, MimeType}` | Send a file |
Route configuration
#{
prefix => "/api", %% Path prefix (string)
security => false | fun Module:Function/1, %% Security function
plugins => [{Phase, Module, Options}], %% Per-route plugins (optional)
routes => [
{Path, fun Module:Function/1, #{methods => [get, post, put, delete]}},
{Path, WebSocketModule, #{protocol => ws}}, %% WebSocket route
{StatusCode, fun Module:Function/1, #{}} %% Error handler
]
}
Path parameters
{"/users/:id", fun my_controller:show/1, #{methods => [get]}}
%% Access in controller:
show(#{bindings := #{<<"id">> := Id}}) -> ...
Security functions
%% Return {true, AuthData} to allow, false to deny
my_security(#{params := Params}) ->
case check_credentials(Params) of
ok -> {true, #{user => <<"alice">>}};
_ -> false
end.
%% AuthData is available in the controller as auth_data
index(#{auth_data := #{user := User}}) -> ...
Plugin callbacks
-behaviour(nova_plugin).
pre_request(Req, Env, Options, State) ->
{ok, Req, State} | %% Continue
{break, Req, State} | %% Skip remaining plugins
{stop, Req, State} | %% Stop — plugin sent response
{error, Reason}. %% 500 error
post_request(Req, Env, Options, State) ->
%% Same return values as pre_request.
{ok, Req, State}.
plugin_info() ->
#{title := binary(), version := binary(), url := binary(),
authors := [binary()], description := binary(),
options => [{atom(), binary()}]}.
Plugin configuration
%% Global (sys.config)
{plugins, [
{pre_request, Module, Options},
{post_request, Module, Options}
]}
%% Per-route (in router)
#{plugins => [{pre_request, Module, Options}],
routes => [...]}
Session API
nova_session:get(Req, <<"key">>) -> {ok, Value} | {error, not_found}
nova_session:set(Req, <<"key">>, Value) -> ok
nova_session:delete(Req) -> {ok, Req1}
nova_session:delete(Req, <<"key">>) -> {ok, Req1}
nova_session:generate_session_id() -> {ok, SessionId}
Cookie setup
Req1 = cowboy_req:set_resp_cookie(<<"session_id">>, SessionId, Req, #{
path => <<"/">>,
http_only => true,
secure => true,
max_age => 86400
}).
WebSocket callbacks
-behaviour(nova_websocket).
init(State) ->
{ok, State}. %% Accept connection
websocket_handle({text, Msg}, State) ->
{ok, State} | %% Do nothing
{reply, {text, Response}, State} | %% Send message
{stop, State}. %% Close connection
websocket_info(ErlangMsg, State) ->
%% Same return values as websocket_handle
WebSocket route
{"/ws", my_ws_handler, #{protocol => ws}}
Pub/Sub API
nova_pubsub:join(Channel)
nova_pubsub:leave(Channel)
nova_pubsub:broadcast(Channel, Topic, Payload)
nova_pubsub:local_broadcast(Channel, Topic, Payload)
nova_pubsub:get_members(Channel)
nova_pubsub:get_local_members(Channel)
%% Message format received by processes:
{nova_pubsub, Channel, SenderPid, Topic, Payload}
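Because broadcasts arrive as plain Erlang messages, a process that has joined a channel can pattern match on them in handle_info/2. A sketch (channel and topic names are illustrative):

```erlang
%% Sketch: a gen_server that called nova_pubsub:join(<<"comments">>)
%% receives broadcasts in handle_info/2.
handle_info({nova_pubsub, <<"comments">>, _SenderPid, <<"new_comment">>, Payload}, State) ->
    logger:info("new comment: ~p", [Payload]),
    {noreply, State};
handle_info(_Other, State) ->
    {noreply, State}.
```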
Nova request plugin options
{pre_request, nova_request_plugin, #{
decode_json_body => true, %% Decode JSON body into `json` key
read_urlencoded_body => true, %% Decode URL-encoded form data into `params` key
parse_qs => true %% Parse query string into `parsed_qs` key
}}
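With these options enabled, the decoded data appears under the corresponding keys of the request map. Roughly (the handler name is illustrative):

```erlang
%% Sketch: assumes decode_json_body and parse_qs are enabled as above.
create(#{json := Body, parsed_qs := Qs}) ->
    Title = maps:get(<<"title">>, Body, <<>>),   %% from the JSON body
    Page  = maps:get(<<"page">>, Qs, <<"1">>),   %% from the query string
    {json, #{title => Title, page => Page}}.
```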
Nova configuration (sys.config)
{nova, [
{environment, dev | prod},
{bootstrap_application, my_app},
{dev_mode, true | false},
{use_stacktrace, true | false},
{session_manager, nova_session_ets},
{render_error_pages, true | false},
{cowboy_configuration, #{
port => 8080,
use_ssl => false,
ssl_port => 8443,
ssl_options => #{certfile => "...", keyfile => "..."},
stream_handlers => [cowboy_stream_h]
}},
{plugins, [...]}
]}
Sub-applications
{my_app, [
{nova_apps, [
{nova_admin, #{prefix => "/admin"}},
{other_app, #{prefix => "/other"}}
]}
]}
Kura — Schema definition
-module(my_schema).
-behaviour(kura_schema).
-include_lib("kura/include/kura.hrl").
-export([table/0, fields/0, associations/0, embeds/0]).
table() -> <<"my_table">>.
fields() ->
[
#kura_field{name = id, type = id, primary_key = true, nullable = false},
#kura_field{name = name, type = string, nullable = false},
#kura_field{name = status, type = {enum, [active, inactive]}},
#kura_field{name = metadata, type = {embed, embeds_one, metadata_schema}},
#kura_field{name = inserted_at, type = utc_datetime},
#kura_field{name = updated_at, type = utc_datetime}
].
associations() ->
[
#kura_assoc{name = author, type = belongs_to, schema = user, foreign_key = author_id},
#kura_assoc{name = comments, type = has_many, schema = comment, foreign_key = post_id},
#kura_assoc{name = tags, type = many_to_many, schema = tag,
join_through = <<"posts_tags">>, join_keys = {post_id, tag_id}}
].
embeds() ->
[#kura_embed{name = metadata, type = embeds_one, schema = metadata_schema}].
Kura field types
| Type | PostgreSQL | Erlang |
|---|---|---|
id | BIGSERIAL | integer |
integer | INTEGER | integer |
float | DOUBLE PRECISION | float |
string | VARCHAR(255) | binary |
text | TEXT | binary |
boolean | BOOLEAN | boolean |
date | DATE | {Y, M, D} |
utc_datetime | TIMESTAMPTZ | {{Y,M,D},{H,Mi,S}} |
uuid | UUID | binary |
jsonb | JSONB | map/list |
{enum, [atoms]} | VARCHAR(255) | atom |
{array, Type} | Type[] | list |
{embed, embeds_one, Mod} | JSONB | map |
{embed, embeds_many, Mod} | JSONB | list of maps |
Kura — Changeset API
%% Create a changeset
CS = kura_changeset:cast(SchemaModule, ExistingData, Params, AllowedFields).
%% Validations
kura_changeset:validate_required(CS, [field1, field2])
kura_changeset:validate_format(CS, field, <<"regex">>)
kura_changeset:validate_length(CS, field, [{min, 3}, {max, 200}])
kura_changeset:validate_number(CS, field, [{greater_than, 0}])
kura_changeset:validate_inclusion(CS, field, [val1, val2, val3])
kura_changeset:validate_change(CS, field, fun(Val) -> ok | {error, Msg} end)
%% Constraint declarations
kura_changeset:unique_constraint(CS, field)
kura_changeset:foreign_key_constraint(CS, field)
kura_changeset:check_constraint(CS, ConstraintName, field, #{message => Msg})
%% Association/embed casting
kura_changeset:cast_assoc(CS, assoc_name)
kura_changeset:cast_assoc(CS, assoc_name, #{with => Fun})
kura_changeset:put_assoc(CS, assoc_name, Value)
kura_changeset:cast_embed(CS, embed_name)
%% Changeset helpers
kura_changeset:get_change(CS, field) -> Value | undefined
kura_changeset:get_field(CS, field) -> Value | undefined
kura_changeset:put_change(CS, field, Val) -> CS1
kura_changeset:add_error(CS, field, Msg) -> CS1
kura_changeset:apply_changes(CS) -> DataMap
kura_changeset:apply_action(CS, Action) -> {ok, Data} | {error, CS}
Schemaless changesets
Types = #{email => string, age => integer},
CS = kura_changeset:cast(Types, #{}, Params, [email, age]).
Kura — Query builder
Q = kura_query:from(schema_module),
%% Where conditions
Q1 = kura_query:where(Q, {field, value}), %% =
Q1 = kura_query:where(Q, {field, '>', value}), %% comparison
Q1 = kura_query:where(Q, {field, in, [val1, val2]}), %% IN
Q1 = kura_query:where(Q, {field, ilike, <<"%term%">>}), %% ILIKE
Q1 = kura_query:where(Q, {field, is_nil}), %% IS NULL
Q1 = kura_query:where(Q, {'or', [{f1, v1}, {f2, v2}]}), %% OR
%% Ordering, pagination
Q2 = kura_query:order_by(Q, [{field, asc}]),
Q3 = kura_query:limit(Q, 10),
Q4 = kura_query:offset(Q, 20),
%% Preloading associations
Q5 = kura_query:preload(Q, [author, {comments, [author]}]).
Kura — Repository API
%% Read
blog_repo:all(Query) -> {ok, [Map]}
blog_repo:get(Schema, Id) -> {ok, Map} | {error, not_found}
blog_repo:get_by(Schema, Clauses) -> {ok, Map} | {error, not_found}
blog_repo:one(Query) -> {ok, Map} | {error, not_found}
%% Write
blog_repo:insert(Changeset) -> {ok, Map} | {error, Changeset}
blog_repo:insert(Changeset, Opts) -> {ok, Map} | {error, Changeset}
blog_repo:update(Changeset) -> {ok, Map} | {error, Changeset}
blog_repo:delete(Changeset) -> {ok, Map} | {error, Changeset}
%% Bulk
blog_repo:insert_all(Schema, [Map]) -> {ok, Count}
blog_repo:update_all(Query, Updates) -> {ok, Count}
blog_repo:delete_all(Query) -> {ok, Count}
%% Preloading
blog_repo:preload(Schema, Records, Assocs) -> Records
%% Transactions
blog_repo:transaction(Fun) -> {ok, Result} | {error, Reason}
blog_repo:multi(Multi) -> {ok, Results} | {error, Step, Value, Completed}
Upsert options
blog_repo:insert(CS, #{on_conflict => {field, nothing}})
blog_repo:insert(CS, #{on_conflict => {field, replace_all}})
blog_repo:insert(CS, #{on_conflict => {field, {replace, [fields]}}})
Kura — Multi (transaction pipelines)
M = kura_multi:new(),
M1 = kura_multi:insert(M, step_name, Changeset),
M2 = kura_multi:update(M1, step_name, fun(Results) -> Changeset end),
M3 = kura_multi:delete(M2, step_name, Changeset),
M4 = kura_multi:run(M3, step_name, fun(Results) -> {ok, Value} end),
{ok, #{step1 := V1, step2 := V2}} = blog_repo:multi(M4).
Common rebar3 commands
| Command | Description |
|---|---|
rebar3 compile | Compile the project (also triggers kura migration generation) |
rebar3 shell | Start interactive shell |
rebar3 nova serve | Dev server with hot-reload |
rebar3 nova routes | List registered routes |
rebar3 eunit | Run EUnit tests |
rebar3 ct | Run Common Test suites |
rebar3 do eunit, ct | Run both |
rebar3 as prod release | Build production release |
rebar3 as prod tar | Build release tarball |
rebar3 dialyzer | Run type checker |
rebar3_nova commands
| Command | Description |
|---|---|
rebar3 nova gen_controller --name NAME | Generate a controller with stub actions |
rebar3 nova gen_resource --name NAME | Generate controller + JSON schema + route hints |
rebar3 nova gen_test --name NAME | Generate a Common Test suite |
rebar3 nova openapi | Generate OpenAPI 3.0.3 spec + Swagger UI |
rebar3 nova config | Show Nova configuration with defaults |
rebar3 nova middleware | Show global and per-group plugin chains |
rebar3 nova audit | Find routes missing security callbacks |
rebar3 nova release | Build release with auto-generated OpenAPI |
rebar3_kura commands
| Command | Description |
|---|---|
rebar3 kura setup --name REPO | Generate a repo module and migrations directory |
rebar3 kura compile | Diff schemas vs migrations and generate new migrations |
Generator options
# Controller with specific actions
rebar3 nova gen_controller --name products --actions list,show,create
# OpenAPI with custom output
rebar3 nova openapi --output priv/assets/openapi.json --title "My API" --api-version 1.0.0
# Kura setup with custom repo name
rebar3 kura setup --name my_repo
Arizona — Live Views
Live view callbacks
-compile({parse_transform, arizona_parse_transform}).
-behaviour(arizona_view).
mount(Params, Req) ->
arizona_view:new(?MODULE, #{
id => ~"my_view",
title => <<"Hello">>
}, none).
render(Bindings) ->
arizona_template:from_html(~"""
<h1>{arizona_template:get_binding(title, Bindings)}</h1>
<button az-click="my_event">Click</button>
""").
handle_event(EventName, Params, View) ->
State = arizona_view:get_state(View),
NewState = arizona_stateful:put_binding(key, value, State),
{Actions, arizona_view:update_state(NewState, View)}.
handle_info(ErlangMessage, View) ->
{Actions, UpdatedView}.
State management
%% Read state from view
State = arizona_view:get_state(View),
Value = arizona_stateful:get_binding(key, State),
%% Update state
NewState = arizona_stateful:put_binding(key, NewValue, State),
UpdatedView = arizona_view:update_state(NewState, View).
Event bindings
| Attribute | Triggers on |
|---|---|
az-click | Click |
az-submit | Form submission |
az-change | Input change |
az-keydown | Key press |
az-keyup | Key release |
az-focus | Element focus |
az-blur | Element blur |
az-value-* | Pass data with events |
az-debounce | Delay event (ms) |
Navigation
<a href="/path" az-live-redirect>Navigate (new view)</a>
<a href="/path?q=x" az-live-patch>Navigate (same view, new params)</a>
Actions
{[{redirect, "/path"}], View}
{[{patch, "/path?page=2"}], View}
{[{dispatch, EventName, Payload}], View}
Components
%% Stateless — pure function (no behaviour)
my_component(Bindings) ->
arizona_template:from_html(~"<h2>{maps:get(title, Bindings)}</h2>").
%% Render stateless in template
arizona_template:render_stateless(my_module, my_component, #{title => ~"Hi"})
%% Stateful — behaviour with mount/render/handle_event
-behaviour(arizona_stateful).
mount(Bindings) ->
arizona_stateful:new(?MODULE, #{id => maps:get(id, Bindings), ...}).
%% Render stateful in template (must have unique id)
arizona_template:render_stateful(my_component, #{id => ~"my-id", ...})
Client JS API
arizona.pushEvent("event", {key: "value"})
arizona.pushEventTo("#component-id", "event", {})
await arizona.callEvent("event", {})
await arizona.callEventFrom("#id", "event", {})
Hikyaku — Email
Mailer behaviour
-module(my_mailer).
-behaviour(hikyaku_mailer).
-export([config/0]).
config() ->
#{adapter => hikyaku_adapter_smtp,
relay => <<"smtp.example.com">>,
port => 587,
username => <<"user">>,
password => <<"pass">>,
tls => always}.
Building and sending
E0 = hikyaku_email:new(),
E1 = hikyaku_email:from(E0, {<<"Name">>, <<"addr@example.com">>}),
E2 = hikyaku_email:to(E1, <<"recipient@example.com">>),
E3 = hikyaku_email:cc(E2, <<"cc@example.com">>),
E4 = hikyaku_email:bcc(E3, <<"bcc@example.com">>),
E5 = hikyaku_email:reply_to(E4, <<"reply@example.com">>),
E6 = hikyaku_email:subject(E5, <<"Subject line">>),
E7 = hikyaku_email:text_body(E6, <<"Plain text body">>),
E8 = hikyaku_email:html_body(E7, <<"<h1>HTML body</h1>">>),
E9 = hikyaku_email:header(E8, <<"X-Custom">>, <<"value">>),
{ok, _} = hikyaku_mailer:deliver(my_mailer, E9).
Attachments
Att = hikyaku_attachment:from_data(Data, <<"file.pdf">>),
E1 = hikyaku_email:attachment(E0, Att).
%% Inline attachment with Content-ID
Att2 = hikyaku_attachment:from_data(ImgData, <<"logo.png">>),
Att3 = hikyaku_attachment:inline(Att2, <<"logo">>),
E2 = hikyaku_email:attachment(E1, Att3).
Adapters
| Adapter | Config keys |
|---|---|
hikyaku_adapter_smtp | relay, port, username, password, tls |
hikyaku_adapter_sendgrid | api_key |
hikyaku_adapter_mailgun | api_key, domain |
hikyaku_adapter_ses | access_key, secret_key, region |
hikyaku_adapter_logger | level |
hikyaku_adapter_test | pid |