Friday, January 16, 2026

Deploying Percona Operator for MySQL with OpenTaco for IaC Automation


Deploying databases on Kubernetes is getting simpler every year. The part that still hurts is making deployments repeatable and predictable across clusters and environments, particularly from a Continuous Integration (CI) perspective. That is where PR-based automation helps: you can review a plan, validate changes, and only apply after approval, before anything touches your cluster.

If you've ever installed an operator by hand, applied a few YAMLs, changed a script "just a bit", and then watched the same setup behave differently in another environment, this post is for you.

In this tutorial, we'll deploy Percona Operator for MySQL and a sample three-node MySQL cluster using OpenTofu, a fully open-source Terraform fork. Then we'll take the exact same deployment and run it through CI using OpenTaco (formerly known as Digger), so that infrastructure changes can be validated and applied from Pull Requests.

We'll use this demo repository throughout the guide: GitHub Demo Percona Operator MySQL OpenTaco.

What OpenTaco adds to OpenTofu

OpenTofu and OpenTaco shine when infrastructure and databases need to be reviewed, validated, and approved before they ever touch the cluster.

Databases are central to most stacks, and changes should be handled with extra care. We want a workflow where updates are reviewed and validated before they are ever deployed to a cluster. That's exactly what OpenTofu + OpenTaco enables: a PR shows the plan output for review, and apply happens only when you approve it.

OpenTofu (and Terraform) already gives us the "Infrastructure as Code" part: plan what will change, apply it, and store state. The remaining problem is operational, especially in a team: who runs "apply", when do they run it, and how do we avoid collisions?

OpenTaco sits on top of your existing CI system (in our case, GitHub Actions). Instead of someone manually running tofu plan and tofu apply, you can run these steps through a Pull Request workflow, where:

  • A pull request can trigger a plan and show the results in the PR
  • Apply happens in a controlled way (for example, after approval/merge, or when someone explicitly requests it)
  • Concurrent changes are prevented via locking
  • The same steps are repeatable in every environment

By the end of this blog post, we will have:

  • Percona Operator for MySQL running in your Kubernetes cluster
  • A sample PerconaServerMySQL custom resource deployed
  • A three-node MySQL cluster (Group Replication) and HAProxy pods created by the operator
  • OpenTofu state stored remotely (GCS or S3), which matters for CI
  • OpenTaco for IaC PR automation

Prerequisites

You need a Kubernetes cluster you can reach using kubectl. It can be local (kind/minikube) or managed (GKE/EKS/AKS). Before going further, make sure these work:
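For example, these standard kubectl checks should succeed against your cluster before you continue:

```shell
# Confirm kubectl can reach the cluster and that nodes are Ready
kubectl cluster-info
kubectl get nodes
```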

You'll also need:

  • OpenTofu (tofu)
  • Git
  • Optional: a MySQL client to test connectivity

Demo repository structure

This project automates deploying the Percona Operator for MySQL to Kubernetes using OpenTofu (a Terraform fork) and Digger for CI/CD.

What each part does:

  • opentofu/ doesn't manage Kubernetes objects directly. Instead, it manages the action of running these scripts in a repeatable way.
  • digger.yml tells OpenTaco what "project" to run and which steps to execute for plan/apply/destroy.
  • digger_workflow.yml contains the GitHub Actions workflow(s) that run OpenTaco.

Run it locally with OpenTofu (based on Helm)

Before we bring OpenTaco into the picture, it's worth running the deployment once locally with OpenTofu. This isn't a separate approach; it's the same OpenTofu project that OpenTaco will run later in CI. Doing it locally first helps you confirm your Kubernetes access and Helm chart behaviour without also debugging CI credentials.

Image01: Local workflow overview.

  1. Let's start by cloning the repo and moving into the OpenTofu project
  2. Choose where the OpenTofu state will live (local vs. remote)
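Step 1 looks like this (the repository URL is illustrative; substitute the demo repo linked earlier, and note that opentofu/ is the project directory inside it):

```shell
# Clone the demo repo and enter the OpenTofu project directory
# (replace <your-org> with the actual org hosting the demo repo)
git clone https://github.com/<your-org>/demo-percona-operator-mysql-opentaco.git
cd demo-percona-operator-mysql-opentaco/opentofu
```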

OpenTofu uses a state file to remember what it deployed, so it can plan changes and later destroy the same resources cleanly.

  • For local learning, local state is fine.
  • For CI and team usage, use remote state (shared and consistent across runs, and it can support locking).

Option A: local state (quickest to start)

Comment out the backend block in versions.tf and run:
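With the backend block commented out, initialization falls back to a local state file:

```shell
# Initialize with local state (terraform.tfstate in the project directory)
tofu init
```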

Option B: remote state on S3 (recommended for CI)
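An S3 backend block looks something like this (bucket, key, and region are placeholders; match them to what your versions.tf defines):

```hcl
terraform {
  backend "s3" {
    bucket = "my-opentofu-state"                # placeholder bucket name
    key    = "percona-mysql/terraform.tfstate"  # placeholder state path
    region = "us-east-1"                        # placeholder region
  }
}
```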

Option C: remote state on GCS (also great for CI)
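The GCS equivalent is similar (bucket and prefix are placeholders):

```hcl
terraform {
  backend "gcs" {
    bucket = "my-opentofu-state"  # placeholder bucket name
    prefix = "percona-mysql"      # placeholder state prefix
  }
}
```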

 

3. Run tofu plan and tofu apply

tofu plan is a dry run: it shows exactly what OpenTofu would create, change, or destroy, without touching your cluster. In our case, it plans to create one namespace and two Helm releases (the Percona Operator chart and the MySQL cluster chart).

When you run tofu apply, OpenTofu executes this plan and actually installs these Helm charts into the cluster. On this run, OpenTofu created the opentaco-mysql namespace first, then installed two Helm releases: percona-ps-operator (the Percona Operator) and percona-ps-db (the demo MySQL cluster). The final "Outputs" section confirms what was deployed, including the chart versions and the namespace.
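The full local sequence is just:

```shell
tofu init    # once, after choosing a backend
tofu plan    # dry run: review what would change
tofu apply   # create the namespace and install both Helm releases
```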

In this demo, OpenTofu is the "orchestrator": it describes what should be installed, and then uses the Helm provider to install it into your Kubernetes cluster.

When you run tofu destroy, OpenTofu uninstalls the Helm releases, which removes the operator and the demo MySQL cluster (and whatever else the charts are configured to clean up).

Verify that the operator and cluster are running

Now, let's confirm everything is running. Let's check the pods:
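Using the namespace from the apply output:

```shell
kubectl get pods -n opentaco-mysql
```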

You should see the operator pod, as well as the MySQL and HAProxy pods created by the operator.

Check the custom resource:
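For example (ps is the short name the Percona Server for MySQL operator registers for its custom resource):

```shell
kubectl get ps -n opentaco-mysql
```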

This should show the operator reconciling and the cluster becoming ready.

Now, let's check the Services:
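```shell
kubectl get svc -n opentaco-mysql
```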

We can see the services for the primary and for HAProxy.

Quick connectivity test

Extract the MySQL root password:
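The operator stores credentials in a Secret; the exact secret name depends on your cluster/release name, so list the secrets first and substitute the name you find:

```shell
# Find the cluster's secrets object
kubectl get secrets -n opentaco-mysql
# Decode the root password (replace <secret-name> with the one you found)
kubectl get secret <secret-name> -n opentaco-mysql \
  -o jsonpath='{.data.root}' | base64 --decode
```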

Port-forward to the primary service:
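For example (substitute the primary service name shown by kubectl get svc):

```shell
# Forward local port 3306 to the primary MySQL service
kubectl -n opentaco-mysql port-forward svc/<primary-service> 3306:3306
```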

Then connect using a MySQL client:
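```shell
# Connect through the local port-forward, using the root password extracted above
mysql -h 127.0.0.1 -P 3306 -u root -p
```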

Once connected, let's try:
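A couple of sanity-check queries, for example, confirm the server responds and that Group Replication reports all three members:

```sql
-- Basic sanity check
SELECT @@hostname, @@version;

-- Group Replication membership: all three members should be ONLINE
SELECT MEMBER_HOST, MEMBER_STATE
FROM performance_schema.replication_group_members;
```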

If this works, our MySQL cluster is running correctly! Wohoo!!

Clean Up (Destroy Everything)

Once we're done, we can run tofu destroy. This will uninstall both Helm releases (the operator and the demo MySQL cluster) and then delete the opentaco-mysql namespace, leaving your Kubernetes cluster itself untouched.

Now for the part that makes the demo useful for teams: running the same deployment from Pull Requests.

Once your repo is connected to OpenTaco Cloud (via the GitHub App), OpenTaco uses GitHub Actions to run your OpenTofu project and report results back to the Pull Request.

So you don't need someone to run tofu manually on their laptop; your PR becomes the workflow.

Image02: PR-based workflow overview.

1. Connect your repository to OpenTaco Cloud

Install the GitHub App:

  • Go to otaco.app and create an account
  • Select your repo
  • Approve permissions

After that, OpenTaco can react to PRs and run workflows.

2. How OpenTaco knows what to run: digger.yml

OpenTaco (Digger) reads digger.yml in your repo to find:

  • where your OpenTofu project lives (opentofu/)
  • which tool to use (opentofu)
  • which steps to run for plan and apply
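A minimal digger.yml covering the project location looks something like this (the project name is illustrative; see the demo repo for the exact file, including the tool selection and any custom plan/apply steps):

```yaml
projects:
  - name: percona-mysql   # illustrative project name
    dir: opentofu         # where the OpenTofu project lives
```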

3. Register Actions Secrets

First, let's configure the GitHub Actions secrets, because the runner needs access to:

  • your Kubernetes cluster (GKE in our example), and
  • your remote state backend (GCS or S3)

You need these secrets in GitHub:

For GKE access:

  • GOOGLE_CLOUD_CREDENTIALS (the full service account JSON)
  • GCP_PROJECT_ID
  • GKE_CLUSTER_NAME
  • GKE_CLUSTER_REGION

For the S3 backend (if you use it):

  • AWS_ACCESS_KEY_ID
  • AWS_SECRET_ACCESS_KEY
  • AWS_REGION

Note: When your state backend is S3, OpenTofu needs AWS credentials during init to read/write the state file. So in CI, you must authenticate to AWS (in addition to GKE/GCP). In this example, we use S3, so our GitHub Actions workflow includes an AWS credentials step before Digger runs.
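In the workflow, that typically looks like a standard aws-actions/configure-aws-credentials step placed before the Digger step (this fragment is a sketch; indentation must match your digger_workflow.yml):

```yaml
# Fragment of digger_workflow.yml: authenticate to AWS before Digger runs,
# so `tofu init` can reach the S3 state backend
- name: Configure AWS credentials
  uses: aws-actions/configure-aws-credentials@v4
  with:
    aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    aws-region: ${{ secrets.AWS_REGION }}
```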

4. Testing the PR-based workflow with OpenTaco

Now that we have the credentials, we create a branch and change something under opentofu/ (for example: bump the chart version, adjust the MySQL memory limits, or change values passed to the chart). In this example, we're changing the name of the namespace to opentaco-mysql-test in the variables.tf file.

The next step is to open a Pull Request and add a comment with:
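The comment is just the Digger plan command:

```
digger plan
```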

OpenTaco will run tofu init + tofu plan in GitHub Actions and post the plan output back to the PR.

Nothing is deployed yet; this is a dry run.

When you're ready, run:
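```
digger apply
```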

OpenTaco will run tofu init + tofu apply. This installs/updates the same Helm releases you tested locally:

  • the Percona Operator chart
  • the MySQL cluster chart

After digger apply completes successfully, we can check the output, which looks like this:

5. OpenTaco UI (otaco.app): what it’s for

Besides PR comments, otaco.app gives you a quick view of:

  • which repos are connected
  • recent plan/apply jobs
  • timestamps and status (succeeded/failed)
  • outputs captured from runs

 

6. Confirming it worked (CI and state)

You should be able to verify that the cluster resources exist. Let's find the pods.
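Since the PR changed the namespace to opentaco-mysql-test, check there:

```shell
kubectl get pods -n opentaco-mysql-test
```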

You should also see the state object created in your backend.

For S3:
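For example (bucket and key are placeholders; match them to your backend block):

```shell
# List the state object in the bucket
aws s3 ls s3://my-opentofu-state/percona-mysql/
# Optionally download and inspect the state JSON
aws s3 cp s3://my-opentofu-state/percona-mysql/terraform.tfstate - | head
```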

You'll see JSON describing the OpenTofu resources (with type "helm_release") and outputs like the namespace and note. That's expected: OpenTofu is tracking the execution wrapper, not each Kubernetes object.

7. Clean up 

If you prefer to remove the database cluster and operator directly with Kubernetes/Helm, follow the official docs: Percona Operator for MySQL uninstall/delete cluster steps.

Closing

At this point, you have a repeatable method for deploying the Percona Operator for MySQL and a demo MySQL cluster on Kubernetes. You can now run the same workflow locally or from a CI/CD pipeline. Your deployment becomes documented, reproducible, and team-friendly.

If you’d like to explore further, check out the demo-percona-operator-mysql-opentaco repo and try a few changes of your own. We’re happy to help if you run into issues with OpenTaco or the Percona Operator for MySQL. And if you do play with it, tell us how it went, share your findings, ideas, or improvements with us!
