I am going to make a small claim, then back it up. No URL shortener
currently ships a first-class Terraform provider. Bitly, TinyURL,
Rebrandly, Short.io, Dub.co — all five publish REST APIs, several
publish webhooks, none publish a terraform-provider-*. A community
provider for Bitly's v3 API exists on GitHub; it is unmaintained and
covers maybe a quarter of the API surface. That is the gap.
A few weeks ago we sat down to close it. The result is
terraform-provider-elido, which today exposes elido_link (a
resource), elido_workspace (a data source), and elido_custom_domain
(a data source for now; read on). What follows
is a tour of what shipped, the engineering choices behind it, and
the parts we deliberately did not ship in v0.1.0. The provider is
open source under the same license as the rest of Elido and lives at
tools/terraform-provider-elido/.
## Why short links belong in Terraform
The argument is short. If you are running marketing redirects, you already have other infrastructure pieces that converge on the same campaign:
- A Cloudflare DNS record pointing at a lander.
- An S3 bucket and a CloudFront distribution serving that lander.
- A Lambda or a Cloud Run service generating signed URLs.
- A campaign tag baked into Google Tag Manager or Segment.
All four of those, in 2026, are managed with Terraform. The short link that sits at the front of the funnel, the actual entry point a user clicks, lives in a Google Doc. That gap is where drift comes from. A lander gets deprecated and the redirect that points at it lives on, gathering 404s, until somebody pings marketing on Slack.
You can fix that gap two ways. You can write a glue script in
TypeScript that sits between your Terraform output and our REST API.
That works; we have customers doing exactly that. Or we can give you
a real Terraform provider, where the redirect is a resource block
beside your Cloudflare record, and terraform plan / terraform destroy know about it the same way they know about everything else.
We picked the second path. The first was already yours to build and maintain.
## What terraform-provider-elido does today
The minimal v0.1.0 surface, in HCL:
```hcl
terraform {
  required_providers {
    elido = {
      source  = "elidoapp/elido"
      version = "~> 0.1"
    }
  }
}

provider "elido" {
  # api_url defaults to https://api.elido.app
  # api_token reads ELIDO_API_TOKEN
}

data "elido_workspace" "main" {
  id = 42
}

data "elido_custom_domain" "links" {
  workspace_id = data.elido_workspace.main.id
  hostname     = "links.example.com"
}

resource "elido_link" "spring_campaign" {
  workspace_id    = data.elido_workspace.main.id
  domain_id       = data.elido_custom_domain.links.id
  slug            = "spring-2026"
  destination_url = "https://example.com/landing/spring"
  title           = "Spring 2026 email campaign"
  tags            = ["spring-2026", "email"]
  redirect_status = 301
}
```
terraform apply and you are done. Drift detection works on the
fields the API echoes back. Changing the slug mid-flight does not
force a replacement; the provider issues a PATCH against the same
numeric ID. (Renaming the Terraform resource label is a different
matter: that is an address change at the Terraform level, and a
moved block is the tool for it.) Changing workspace_id or
domain_id does force replacement, because at that point you are
talking about a different edge route. That is the common-sense
lifecycle, and it is what HashiCorp's plugin framework guides
nudge you towards.
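In plugin-framework terms, that replace/update split is expressed with plan modifiers on the schema. The fragment below is a sketch, not the provider's actual source: it assumes the framework's schema packages and trims the attribute set down to the two interesting fields. As a schema fragment it is not runnable on its own.

```go
package provider

import (
	"github.com/hashicorp/terraform-plugin-framework/resource/schema"
	"github.com/hashicorp/terraform-plugin-framework/resource/schema/int64planmodifier"
	"github.com/hashicorp/terraform-plugin-framework/resource/schema/planmodifier"
)

// linkLifecycleAttrs sketches the two lifecycle behaviours described
// above: workspace_id forces replacement, slug updates in place.
func linkLifecycleAttrs() map[string]schema.Attribute {
	return map[string]schema.Attribute{
		// A different workspace is a different edge route: replace.
		"workspace_id": schema.Int64Attribute{
			Required: true,
			PlanModifiers: []planmodifier.Int64{
				int64planmodifier.RequiresReplace(),
			},
		},
		// A new slug is an in-place PATCH: no modifier needed.
		"slug": schema.StringAttribute{
			Required: true,
		},
	}
}
```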
The bulk-rollout shape is the part that justifies the work for most teams:
```hcl
locals {
  channels = ["email", "twitter", "linkedin", "reddit", "hn"]
  regions  = ["us", "eu", "apac", "latam"]
}

resource "elido_link" "campaign_launch" {
  for_each = {
    for pair in setproduct(local.channels, local.regions) :
    "${pair[0]}-${pair[1]}" => pair
  }

  workspace_id    = data.elido_workspace.main.id
  domain_id       = data.elido_custom_domain.links.id
  slug            = "launch-${each.key}"
  destination_url = "https://example.com/launch?ch=${each.value[0]}&r=${each.value[1]}"
  tags            = ["launch-2026", each.value[0], each.value[1]]
}
```
Twenty links, one apply. Delete the block, twenty deletes, one apply. That is roughly the use case that turned up in three customer-call notes last quarter: marketing wants per-channel-per-region UTM links for a launch, engineering builds a Sheets-to-API script every time, the script falls out of date, the script's author leaves the company. Terraform's strength here is not novelty — it is that we have made the pattern boring.
The full guide with attribute reference and import examples is at
/docs/guides/terraform. The provider source
ships with examples/main.tf that is a more elaborate version of
the snippet above.
## How the provider is built
Roughly 600 lines of Go, of which ~200 are schema definitions. The shape:
```text
tools/terraform-provider-elido/
├── main.go                          # plugin entrypoint
├── internal/provider/
│   ├── provider.go                  # config + auth
│   ├── link_resource.go             # CRUD + import
│   ├── workspace_data_source.go     # GET /v1/workspaces/{id}
│   ├── custom_domain_data_source.go # GET /v1/workspaces/{id}/domains
│   ├── helpers.go                   # tag conversion
│   └── provider_test.go             # 7 unit tests
├── go.mod                           # depends on packages/sdk-go
├── .goreleaser.yml                  # signed-checksum builds
├── terraform-registry-manifest.json # protocol_versions: ["6.0"]
├── Makefile                         # build + install-local + testacc
└── examples/main.tf
```
A few choices worth calling out.
We use the plugin framework, not the legacy SDK. HashiCorp
explicitly steered new providers to
terraform-plugin-framework
in 2023. Most of the popular providers (aws, cloudflare,
google) are mid-migration; the smaller, newer ones are
framework-native. Building greenfield on the legacy SDK would have
meant taking on a migration task the moment we shipped. We avoided
the migration by not creating one. The framework has a stricter
type system, real schema validation at the plugin protocol level,
and a much cleaner planning model (PlanModifiers instead of
CustomizeDiff callbacks). For a small provider, the ergonomics
gap is large.
The provider does not duplicate the SDK. Every resource method
delegates to packages/sdk-go,
which is the same SDK we publish for plain-Go integrations. The
provider is, by design, a thin Schema-to-SDK adapter. That has two
consequences. The good one: any bug we fix in the SDK lands in the
provider for free. The bad one: any gap in the SDK is a gap in the
provider. The honest example is custom domains. api-core does not
yet expose POST/DELETE for /v1/workspaces/{id}/domains; the
write path lives in domain-manager behind the dashboard. Until
api-core proxies the writes, the SDK has no Domains.Create, and
the provider has no elido_custom_domain resource — only a data
source that looks an existing one up by hostname. We will close
that gap in v0.2.0; the proxy shim is a sub-week change and the
SDK + provider PR is already drafted.
Auth is the same shape as every other Elido client. Bearer API
key in the Authorization header, falling back to ELIDO_API_TOKEN
in the environment. We do not expose cookie auth or X-Dev-User-ID
in the provider; those are local-development conveniences that have
no business in IaC where the config sits in version control and
runs in CI. Your CI either has a token or it does not.
## Drift detection: the part that is harder than it looks
If you have read past the obvious bits, this is the section worth
reading. Terraform diffing is fundamentally a question of: given
what the user wrote (Plan), and what the server returned last
time (State), and what the server returns now (Read), what
should we propose to do?
For a resource like elido_link, four things make this non-trivial:
Optional + Computed fields with server defaults. The user can
omit redirect_status. The server fills in 302. The next Read
returns 302. Without care, this looks like drift on every plan —
"I asked for nothing, I got 302 back, propose to set it to nothing
again". The framework gives you a UseStateForUnknown plan modifier
that says "if I do not have a planned value, keep what is in
state". We use it on every server-defaulted field. That sounds
trivial; it is the source of the most frequent provider bugs in the
ecosystem ("provider produced inconsistent result after apply").
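Concretely, the modifier is one line on the attribute. The fragment below assumes the plugin-framework packages and uses redirect_status, the server-defaulted field from the resource above; as a schema fragment it is illustrative rather than runnable.

```go
package provider

import (
	"github.com/hashicorp/terraform-plugin-framework/resource/schema"
	"github.com/hashicorp/terraform-plugin-framework/resource/schema/int64planmodifier"
	"github.com/hashicorp/terraform-plugin-framework/resource/schema/planmodifier"
)

// redirectStatusAttr: Optional because the user may omit it, Computed
// because the server fills in 302 when they do.
var redirectStatusAttr = schema.Int64Attribute{
	Optional: true,
	Computed: true,
	PlanModifiers: []planmodifier.Int64{
		// Without this, every plan proposes "(known after apply)"
		// for a field the user never set, which reads as drift.
		int64planmodifier.UseStateForUnknown(),
	},
}
```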
Tags with server-side normalisation. Our API stores tags as a
set; Terraform sees them as an ordered list. Right now we punt on
this. The server preserves order on echo, so the diff is stable in
practice, but a user who reorders tags in HCL will see a no-op
update. That is correct behaviour; the alternative — silently
sorting on input — would mean terraform plan and terraform apply disagree on what changes, which is the cardinal Terraform
sin. We will revisit if real customers complain. The HashiCorp
best-practices guide
is firmly on the "do nothing surprising" side here.
Status as a tri-state. A link can be active, paused, or
archived. Setting status = "paused" in HCL but not on Create
(the server defaults to active) means we have to issue a follow-up
PATCH inside the same Create. That is implemented as a
post-Create reconciliation step — bear it in mind if you are reading
the source. The alternative — exposing status as a separate
resource (elido_link_status keyed by link_id) — is what the AWS
provider does for a few resources. We considered it; for one
optional field, the cost outweighs the benefit. If we add a second
post-Create knob, we will rethink.
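The reconciliation step fits in a dozen lines. This is a self-contained sketch, not the real Create method: the client struct, field names, and the empty-string-means-server-default convention are all stand-ins for the SDK types.

```go
package main

// Link and LinkInput are illustrative stand-ins for the SDK's types.
type Link struct {
	ID     int
	Status string
}

type LinkInput struct {
	Slug   string
	Status string // "" means "let the server default to active"
}

// linkClient is a function-struct so a test can fake it inline.
type linkClient struct {
	CreateLink   func(in LinkInput) (Link, error)
	UpdateStatus func(id int, status string) (Link, error)
}

// createWithStatus creates the link, then issues a follow-up status
// update only when HCL asked for something other than what the
// server handed back on Create.
func createWithStatus(c linkClient, in LinkInput) (Link, error) {
	created, err := c.CreateLink(in)
	if err != nil {
		return Link{}, err
	}
	if in.Status != "" && in.Status != created.Status {
		return c.UpdateStatus(created.ID, in.Status)
	}
	return created, nil
}
```

The invariant worth testing is the negative case: when the user omits status, no second request goes out.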
Import. terraform import elido_link.spring_campaign 42:7 —
that is <workspace_id>:<link_id>. We pick the colon-separated
form because the framework's ImportState callback gives you a
single string and you parse it yourself. The <id>:<id> shape is
common in providers that key resources by a tuple — see the
google_compute_instance import documentation
for the canonical reference. We deliberately do not accept the
human-readable slug here; resource state is keyed by the numeric
ID, and that is the only thing that belongs in an import string.
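A parser for that format has three failure modes to report: no separator, a non-numeric workspace ID, a non-numeric link ID. The sketch below borrows the splitImportID name from the provider's helpers but is a reconstruction, not the shipped code.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// splitImportID parses the <workspace_id>:<link_id> import string
// into its two numeric halves, rejecting anything else loudly so the
// error shows up at import time rather than on the next plan.
func splitImportID(id string) (workspaceID, linkID int64, err error) {
	parts := strings.SplitN(id, ":", 2)
	if len(parts) != 2 {
		return 0, 0, fmt.Errorf("expected <workspace_id>:<link_id>, got %q", id)
	}
	workspaceID, err = strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, 0, fmt.Errorf("workspace_id is not a number in %q", id)
	}
	linkID, err = strconv.ParseInt(parts[1], 10, 64)
	if err != nil {
		return 0, 0, fmt.Errorf("link_id is not a number in %q", id)
	}
	return workspaceID, linkID, nil
}
```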
## Tests, CI, the registry
The unit suite (7 tests today) covers the schema-validation layer
plus the pure-function helpers — splitImportID, linkToModel,
apiErrorString, optString. It runs in 0.5 seconds and gates
every PR through the same go matrix that builds our 13 services.
There is also a testacc target that runs against a live api-core
when TF_ACC=1 is set, but that is opt-in: it requires a token,
and we do not run it on every commit because each test creates and
deletes a real link. HashiCorp's
testing framework
documents the pattern; we do not deviate.
The release pipeline is wired to goreleaser with the exact build
matrix the Terraform Registry expects: linux, darwin, freebsd,
windows × amd64/arm64 (plus arm and 386 on Linux),
SHA256SUMS over the archives, GPG signature on the SHA256SUMS, and
a terraform-registry-manifest.json declaring protocol_versions: ["6.0"]. Tag a commit terraform-provider-vX.Y.Z, the GitHub
Actions workflow runs goreleaser release --clean, and the GitHub
Release goes live. The
Terraform Registry
polls the release on its own schedule and ingests the version. The
only thing currently missing is the GPG key — we are minting one
dedicated to provider releases this week, which means
v0.1.0 lands on the registry around the same time as this post.
In the meantime, install via dev_overrides in ~/.terraformrc:
```hcl
provider_installation {
  dev_overrides {
    "elidoapp/elido" = "/Users/<you>/.terraform.d/plugins/elidoapp/elido"
  }
  direct {}
}
```
Then make install-local from tools/terraform-provider-elido/, and
terraform plan resolves the binary directly, no terraform init
needed. This is the official HashiCorp pattern for provider
development, and it works equally well as an interim install path
until the registry listing is live.
## What is deliberately not in v0.1.0
Three things we considered, did not ship, and want to call out so nobody is surprised.
No elido_custom_domain as a resource. Discussed above. The
data source is enough to chain domain_id into elido_link, which
is the load-bearing use case; full-lifecycle management waits on
api-core. ETA: v0.2.0, mid-2026.
No elido_folder, no elido_api_key. The SDK has both; we
chose not to add Schemas in v0.1.0 because their lifecycles are not
where the customer pain is. Folders are organisational metadata; API
keys are typically issued once and rotated through the dashboard.
We will add them when somebody asks.
No code generation from the OpenAPI spec. HashiCorp ships
terraform-plugin-codegen-openapi
as a beta tool. We tried it on our spec; the generated Schemas are
mediocre — every nullable field becomes Optional + Computed,
every list becomes a Set, the result requires as much fixup as a
hand-written Schema and is harder to evolve. With three resources
on the table, hand-written wins. We will revisit the generator in
six months when more of our peers have battle-tested it.
## What broke while we built it
Three things we got wrong on the first pass.
The first was state on Optional + Computed. We initially modelled
title as a plain Optional string. Customers who omitted it from
HCL got a clean Create — and then every subsequent terraform plan
proposed setting it back to null, because the server stored an empty
string and Terraform read that as drift. The fix was the
UseStateForUnknown plan modifier; the lesson was that the
provider's interpretation of "the user did not specify" has to match
the server's idea of "default value". The framework documentation
warns about this in the introduction; we read past the warning the
first time. We wrote it down here so you can skip the same embarrassment.
The second was the import format. We initially shipped
<workspace_id>/<link_id> with a slash, on the theory that paths
read more naturally. The framework had no problem with it; HCL
linters and terminals did. An import ID with a slash in it, pasted
into a shell-quoted argument, reads like a mistyped path in support
tickets. We switched to a colon, which has zero ambiguity
and matches Google's provider conventions. Lesson: import strings
are user-facing UI, design them like UI.
The third was tag ordering. Discussed above — we punted, and we
will keep punting until somebody asks. The version we almost
shipped silently sorted tags on input, which made terraform plan
report no changes when the customer had clearly reordered them.
That is a worse experience than a noisy diff; we caught it during
internal testing. Worth saying because the temptation to "be
helpful" by normalising user input is constant when you write a
provider, and it is almost always the wrong call.
## How to use this with the rest of Elido
The provider is one shape. The other shapes still exist and are not going anywhere:
- The REST API is the source of truth. Everything the provider does is also doable with curl.
- The Go SDK is what the provider itself uses internally; you can pull it in as a library.
- The TypeScript and Python SDKs cover the same surface for the language you happen to be in.
- The GraphQL endpoint covers the same reads with a single round-trip when you need them shaped to your screen.
Pick whichever fits the shape of the problem. Terraform is right when you have a lifecycle to manage. The SDK is right when you have a script. The REST API is right when you are doing one thing once. We think it should be that obvious; we will keep all of these surfaces working.
If you have a favourite Terraform pattern we are missing — bulk
imports from CSV via for_each over a data "external" block, a
for_each shaped to a Linear API for campaign tracking, a wrapping
module for the agency-managing-multiple-tenants case — open an issue
on the GitHub repo with the
area:terraform label. The provider exists to make those patterns
boring; we want to know which ones still feel surprising.
## Where to start
If you read this and want to try it: install the provider per the
guide, point it at a sandbox workspace,
write resource "elido_link" for the redirect you have always wanted
to declare in code, and terraform apply. We bet a coffee that the
first thing that surprises you, in a good way, is terraform destroy
working exactly the way you expect.
If you read this and want to compare us to the alternatives: there is a longer write-up in our Bitly alternatives feature-gap post, and the side-by-side at /compare/vs-bitly shows where Terraform sits on the matrix. Their side of that matrix has gotten shorter since this post landed.
— Marius