Open Source Isn't What It Used to Be
The open source landscape has undergone significant changes in recent years, and selecting the right operator and tooling for PostgreSQL clusters in Kubernetes has never been more critical.
MinIO, for example, was a widely used open source S3-compatible storage backend. Over the past few years, it has:
- Switched to AGPLv3 with commercial-only extras
- Entered "maintenance mode," closing community issues, limiting support to paid subscriptions, and no longer accepting community PRs
Similarly, Bitnami Docker images, which have long been a staple for databases, including Postgres, middleware, and developer tooling, now have stricter usage terms. VMware's changes to Bitnami image licensing disrupted many Kubernetes Helm charts that depended on them.
Crunchy Data illustrates how licensing and distribution changes can affect open source operators directly. For years, Crunchy offered fully open source PostgreSQL Docker images. Between 2022 and 2024, several key shifts occurred:
- Redistribution restrictions: While the PostgreSQL code is open source, Crunchy's Docker images include branding and enterprise features that cannot be freely redistributed.
- Crunchy Data software made available through their Developer Program is intended for internal or personal use only. Use in production environments by larger organizations typically requires an active support subscription.
- The terms explicitly prohibit using Crunchy's images to deliver support or consulting services to others unless you have a licensed agreement. While the source code itself remains open source, these restrictions mean the official images are no longer fully redistributable, which limits the practical use of the project in production and commercial settings. In other words, it is open source in theory, but cannot be freely used or shared like truly open source software.
- Registry move: Most images were moved to registry.developers.crunchydata.com, requiring authentication and acceptance of terms, marking a clear line between open source code and proprietary builds.
What These Restrictions Really Mean for Kubernetes Users
When container images and operators come with redistribution limits, authentication requirements, or "internal-use-only" clauses, the impact on Kubernetes environments is immediate and painful. Teams can no longer:
- Build air-gapped clusters, because images cannot be mirrored to private registries
- Rely on GitOps workflows that expect publicly accessible OCI images
- Fork or customize operators freely, since official images cannot be redistributed with modifications
- Use the software in commercial or customer-facing products without additional licensing
- Run multi-cluster or multi-tenant Postgres without violating usage terms
For database operators, where everything depends on container images, these restrictions effectively turn a project into a "source-available but not operationally open" solution.
As a result, many teams are switching to fully open source alternatives like Percona Operator for PostgreSQL, StackGres, Zalando Postgres Operator, and CloudNativePG.
The bigger picture? Open source today often exists more in theory than in practice. It has become increasingly important to examine what an author's "open source" claims actually mean. In many cases, the value advertised as open source is only theoretical: licensing restrictions, redistribution limits, and other usage constraints can make products far less open than expected. The boundaries of open source are being tested on multiple levels, so even projects formally licensed as open source may not provide the freedom, transparency, and usability that the term implies. Code might be available, but usable images, updates, and community collaboration can be restricted.
Kubernetes users must be strategic: choose projects with open images, clear governance, and sustainable community support. And because the landscape can shift quickly, migration strategies are essential.
For Percona Operator for PostgreSQL and Crunchy Data PostgreSQL Operator, migration is surprisingly straightforward: Percona's operator is a hard fork of Crunchy's. Moving data can be done in several ways, sometimes with nearly zero downtime, other times faster with minimal downtime, depending on your use case.
Migrate to Freedom
In this guide, we'll show you how to migrate from Crunchy Data PostgreSQL Operator to Percona PostgreSQL Operator, a truly open source alternative.
Versions Used in This Guide
For clarity and reproducibility, all migration examples in this blog post were created using the following versions:
- Crunchy Data PostgreSQL Kubernetes Operator: v5.8.6
- Percona PostgreSQL Kubernetes Operator: v2.8.0
- PostgreSQL: 17
Different versions may have slight differences in CR fields or behavior. Always consult the official documentation for your specific operator and Postgres version.
Because the Percona PostgreSQL Operator is a fork of the Crunchy Data PostgreSQL Operator, the two operators cannot manage the same namespaces simultaneously. The Crunchy Operator is cluster-wide by default, which can lead to resource conflicts if both operators watch overlapping namespaces.
To avoid this:
- Set PGO_TARGET_NAMESPACES for the Crunchy Data Operator so it watches only the namespaces where existing Crunchy clusters are deployed.
- Deploy the Percona PostgreSQL Operator in a separate namespace (e.g., percona-postgres-operator) to ensure clean separation and avoid controller ownership conflicts.
These precautions keep the migration environment stable and predictable, particularly when running both operators concurrently during the transition. If your operator was deployed using the single-namespace kustomize/install/namespace manifests, the PGO_TARGET_NAMESPACES environment variable should already be set.
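If you need to adjust this on an existing installation, the sketch below narrows the Crunchy operator's scope. It assumes the operator Deployment is named pgo and runs in the postgres-operator namespace, which may differ in your setup.

# Limit the Crunchy operator to the namespace(s) hosting existing clusters.
# Deployment name and namespace are assumptions; adjust them to your installation.
kubectl set env deployment/pgo PGO_TARGET_NAMESPACES="cpgo" -n postgres-operator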
1. Migration Using a Standby Cluster
(pgBackRest repo-based standby + streaming replication)
One of the simplest and safest ways to migrate from the Crunchy Data PostgreSQL Operator to the Percona PG Operator is by deploying a standby cluster. You can do this using either of the following options, or both together:
- pgBackRest repo-based standby
- Streaming replication
In our example, we'll use both methods to provide maximum safety and data integrity. For a more in-depth exploration of each approach, refer to our official documentation for full details.
Step 1: Start with Your Existing Crunchy Data Cluster
Before anything else, you need an operational Crunchy Data PostgreSQL cluster (referred to as the source-cluster). Let's assume it was deployed in the cpgo namespace.
For this example, we assume the source-cluster was deployed using a Custom Resource similar to the one below, and that it uses AWS S3 as a pgBackRest backup repository.
The example source-cluster can be deployed from the GitHub repo:
kubectl apply -f https://raw.githubusercontent.com/percona/percona-postgresql-operator/refs/heads/migration/deploy/source-cluster-cr.yaml -n cpgo
Or using the following command:
echo 'apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: source-cluster
spec:
  service:
    type: LoadBalancer
  postgresVersion: 17
  instances:
    - name: instance1
      replicas: 3
      dataVolumeClaimSpec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
  proxy:
    pgBouncer:
      service:
        type: LoadBalancer
      replicas: 3
      config:
        global:
          pool_mode: transaction
  backups:
    pgbackrest:
      configuration:
        - secret:
            name: pgo-s3-creds
      global:
        repo1-path: /pgo-migration-testing/crunchydata
      repos:
        - name: repo1
          s3:
            bucket: pgo-migration-testing
            endpoint: s3.amazonaws.com
            region: us-east-1
          schedules:
            full: "0 0 * * 0"' | kubectl apply -n cpgo -f -
Important Settings for the Migration
pgBackRest configuration
These fields are required because the Percona PG Operator will use the same repository:

backups:
  pgbackrest:
    configuration:
      - secret:
          name: pgo-s3-creds
    global:
      repo1-path: /pgo-migration-testing/crunchydata
    repos:
      - name: repo1
        s3:
          bucket: pgo-migration-testing
          endpoint: s3.amazonaws.com
          region: us-east-1
LoadBalancer service
The Percona standby cluster must have network access to the source cluster. In my example, this is done using a public IP (Service type: LoadBalancer), but you can use any Service type that guarantees the same result. The key requirement is that the source-cluster is reachable from the target-cluster.

spec:
  service:
    type: LoadBalancer
Collect Information Required for Streaming Replication
If you plan to use streaming replication (recommended for minimal data lag), the target Percona cluster will need authenticated network connectivity to the primary source instance.
Get the LoadBalancer IP
Example output:
kubectl get service source-cluster-ha -o jsonpath='{.status.loadBalancer.ingress[0].ip}:{.spec.ports[0].port}{"\n"}' -n cpgo

34.27.90.225:5432
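Note that some cloud providers (for example, AWS ELB) publish a hostname rather than an IP in the LoadBalancer status. In that case, query the hostname field instead:

kubectl get service source-cluster-ha -o jsonpath='{.status.loadBalancer.ingress[0].hostname}:{.spec.ports[0].port}{"\n"}' -n cpgo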
Export replication and TLS certificates
Example output:

kubectl get secret source-cluster-cluster-cert source-cluster-replication-cert -n cpgo

NAME                              TYPE     DATA   AGE
source-cluster-cluster-cert       Opaque   3      24h
source-cluster-replication-cert   Opaque   12     24h
Then export them:
kubectl get secret source-cluster-cluster-cert -o json -n cpgo | yq '{"apiVersion": .apiVersion, "kind": .kind, "data": .data, "metadata": {"name": .metadata.name}, "type": .type}' -o yaml > ~/source-cluster-cluster-cert.yaml

kubectl get secret source-cluster-replication-cert -o json -n cpgo | yq '{"apiVersion": .apiVersion, "kind": .kind, "data": .data, "metadata": {"name": .metadata.name}, "type": .type}' -o yaml > ~/source-cluster-replication-cert.yaml
Step 2: Deploy the Percona PG Operator and the Standby Cluster (target-cluster)
Install the Percona PG Operator

kubectl create ns percona-postgres-operator

kubectl apply --server-side -f https://raw.githubusercontent.com/percona/percona-postgresql-operator/v2.8.0/deploy/bundle.yaml -n percona-postgres-operator
Create an AWS credentials secret (shared repository)
echo "apiVersion: v1
kind: Secret
metadata:
  name: pgo-s3-creds
stringData:
  s3.conf: |
    [global]
    repo1-s3-key=XXXXXXXXXXXXXXXXXXXX
    repo1-s3-key-secret=XXXXXXXXXXXXXXXXXXXX" | kubectl apply -n percona-postgres-operator -f -
Import the certificates (required only for streaming replication)

kubectl apply -f ~/source-cluster-cluster-cert.yaml -n percona-postgres-operator
kubectl apply -f ~/source-cluster-replication-cert.yaml -n percona-postgres-operator
Step 3: Standby Cluster Setup for Migration from Crunchy Data PostgreSQL Operator
The cluster can be deployed from the GitHub repo:
kubectl apply -f https://raw.githubusercontent.com/percona/percona-postgresql-operator/refs/heads/migration/deploy/target-cluster-cr.yaml -n percona-postgres-operator
Or using the following command:
echo "apiVersion: pgv2.percona.com/v2
kind: PerconaPGCluster
metadata:
  name: target-cluster-percona
  annotations:
    pgv2.percona.com/patroni-version: \"4\"
spec:
  crVersion: 2.8.0
  # Custom certificates obtained from the source-cluster are required for streaming replication.
  # They can be omitted if you use only the pgBackRest repo-based standby.
  secrets:
    customReplicationTLSSecret:
      name: source-cluster-replication-cert
    customTLSSecret:
      name: source-cluster-cluster-cert
  standby:
    enabled: true
    # Public IP of the source-cluster-ha service from the source-cluster (streaming replication)
    host: 34.27.90.225
    # PostgreSQL port of the source-cluster-ha service from the source-cluster (streaming replication)
    port: 5432
    # AWS pgBackRest repo used by the source-cluster
    repoName: repo1
  image: docker.io/percona/percona-distribution-postgresql:17.6-1
  imagePullPolicy: Always
  postgresVersion: 17
  instances:
    - name: instance1
      replicas: 3
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    postgres-operator.crunchydata.com/data: postgres
                topologyKey: kubernetes.io/hostname
      dataVolumeClaimSpec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
  proxy:
    pgBouncer:
      replicas: 3
      image: docker.io/percona/percona-pgbouncer:1.24.1-1
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    postgres-operator.crunchydata.com/role: pgbouncer
                topologyKey: kubernetes.io/hostname
  backups:
    pgbackrest:
      repos:
        # AWS pgBackRest repo used by the source-cluster
        - name: repo1
          s3:
            bucket: pg-operator-testing
            endpoint: s3.amazonaws.com
            region: us-east-1
      image: docker.io/percona/percona-pgbackrest:2.56.0-1
      configuration:
        - secret:
            name: pgo-s3-creds
      global:
        repo1-path: /pg-operator-testing/crunchydata" | kubectl apply -n percona-postgres-operator -f -
Step 4: Wait for the Cluster to Start Syncing
Example:
kubectl get pg -n percona-postgres-operator -w

NAME                     ENDPOINT                                       STATUS   POSTGRES   PGBOUNCER   AGE
target-cluster-percona   target-cluster-percona-pgbouncer.default.svc   ready    3          3           17h
Check the replication status.
Example output:
kubectl exec source-cluster-instance1-xg46-0 -n cpgo -it -- bash
psql

postgres=# SELECT application_name, client_addr, state,
       sent_offset - (replay_offset - (sent_lsn - replay_lsn) * 255 * 16 ^ 6) AS byte_lag,
       write_lag, flush_lag, replay_lag
FROM (
    SELECT application_name, client_addr, client_hostname, state,
           ('x' || lpad(split_part(sent_lsn::TEXT, '/', 1), 8, '0'))::bit(32)::bigint AS sent_lsn,
           ('x' || lpad(split_part(replay_lsn::TEXT, '/', 1), 8, '0'))::bit(32)::bigint AS replay_lsn,
           ('x' || lpad(split_part(sent_lsn::TEXT, '/', 2), 8, '0'))::bit(32)::bigint AS sent_offset,
           ('x' || lpad(split_part(replay_lsn::TEXT, '/', 2), 8, '0'))::bit(32)::bigint AS replay_offset,
           write_lag, flush_lag, replay_lag
    FROM pg_stat_replication
) AS s;

            application_name             | client_addr  |   state   | byte_lag |    write_lag    |    flush_lag    |   replay_lag
-----------------------------------------+--------------+-----------+----------+-----------------+-----------------+-----------------
 source-cluster-instance1-bs4k-0         | 10.16.1.7    | streaming |        0 | 00:00:00.000971 | 00:00:00.001979 | 00:00:00.002072
 source-cluster-instance1-9jp5-0         | 10.16.0.14   | streaming |        0 | 00:00:00.000903 | 00:00:00.002108 | 00:00:00.002164
 target-cluster-percona-instance1-thn5-0 | 10.128.0.103 | streaming |        0 | 00:00:00.000957 | 00:00:00.00201  | 00:00:00.002038
(3 rows)
At this point, the Percona cluster is fully caught up and functional as a read-only standby. You can already switch read-only traffic to the new cluster for testing.
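Before the cutover, a quick sanity check is to confirm the target is still in recovery (read-only). The sketch below uses an illustrative pod name; list your pods to get the real one.

# Should return t while the target cluster is a standby
kubectl exec target-cluster-percona-instance1-thn5-0 -c database -n percona-postgres-operator -- psql -c 'SELECT pg_is_in_recovery();'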
Step 5: Perform the Final Cutover
1. Convert the source cluster to standby mode.
kubectl patch postgrescluster source-cluster --type=merge -n cpgo -p '{"spec": {"standby": {"enabled": true}}}'
Wait for replication to fully catch up.
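A simple way to watch the remaining lag is to query the standby directly (pod name is illustrative):

# Approximate replication delay as seen from the standby
kubectl exec target-cluster-percona-instance1-thn5-0 -c database -n percona-postgres-operator -- psql -c 'SELECT now() - pg_last_xact_replay_timestamp() AS replication_lag;'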
2. (Optional) Shut down the source cluster
This prevents accidental writes or split-brain scenarios.
kubectl patch postgrescluster source-cluster -n cpgo --type merge --patch '{"spec":{"shutdown": true}}'
3. Promote the Percona standby cluster

kubectl patch perconapgcluster target-cluster-percona -n percona-postgres-operator --type=merge -p '{"spec": {"standby": {"enabled": false}}}'
4. Verify that the cluster is now writable and healthy
kubectl get pg target-cluster-percona -n percona-postgres-operator

NAME                     ENDPOINT                                       STATUS   POSTGRES   PGBOUNCER   AGE
target-cluster-percona   target-cluster-percona-pgbouncer.default.svc   ready    3          3           23h
As you can see, this migration path works almost perfectly out of the box. For users coming from the Crunchy Data PostgreSQL Operator, this method feels natural because it leverages the same native standby/replica mechanisms used for HA and disaster recovery. The key difference is that you can now use this familiar mechanism to migrate safely to the Percona PostgreSQL Operator, a truly open source alternative.
2. Migrate Data Using Backup Restore
The second migration option is restoring your Percona cluster directly from a backup created by the Crunchy Data PostgreSQL Operator. This is often the fastest and simplest way to migrate, especially when you don't require a live standby or continuous replication.
Step 1: Start with Your Existing Crunchy Data Cluster (source-cluster)
Below is the example Crunchy Data cluster we used earlier. It performs pgBackRest backups to AWS S3, and we will restore from the latest full backup created by this cluster.
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: source-cluster
spec:
  postgresVersion: 17
  instances:
    - name: instance1
      replicas: 3
      dataVolumeClaimSpec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
  proxy:
    pgBouncer:
      replicas: 3
      config:
        global:
          pool_mode: transaction
  backups:
    pgbackrest:
      configuration:
        - secret:
            name: pgo-s3-creds
      global:
        repo1-path: /pgo-migration-testing/crunchydata
      repos:
        - name: repo1
          s3:
            bucket: pgo-migration-testing
            endpoint: s3.amazonaws.com
            region: us-east-1
          schedules:
            full: "0 0 * * 0"
Step 2: Deploy the Percona PG Operator and the Cluster (target-cluster)
Install the Percona PG Operator

kubectl create ns percona-postgres-operator

kubectl apply --server-side -f https://raw.githubusercontent.com/percona/percona-postgresql-operator/v2.8.0/deploy/bundle.yaml -n percona-postgres-operator
Create an AWS credentials secret (shared repository)
echo "apiVersion: v1
kind: Secret
metadata:
  name: pgo-s3-creds
stringData:
  s3.conf: |
    [global]
    repo1-s3-key=XXXXXXXXXXXXXXXXXXXX
    repo1-s3-key-secret=XXXXXXXXXXXXXXXXXXXX" | kubectl apply -n percona-postgres-operator -f -
Step 3: Deploy the Percona PostgreSQL Cluster
This cluster bootstraps directly from the Crunchy backup.
Apply the CR:
echo "apiVersion: pgv2.percona.com/v2
kind: PerconaPGCluster
metadata:
  name: target-cluster
  annotations:
    pgv2.percona.com/patroni-version: \"4\"
spec:
  crVersion: 2.8.0
  image: docker.io/percona/percona-distribution-postgresql:17.6-1
  imagePullPolicy: Always
  postgresVersion: 17
  instances:
    - name: instance1
      replicas: 3
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    postgres-operator.crunchydata.com/data: postgres
                topologyKey: kubernetes.io/hostname
      dataVolumeClaimSpec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
  proxy:
    pgBouncer:
      replicas: 3
      image: docker.io/percona/percona-pgbouncer:1.24.1-1
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    postgres-operator.crunchydata.com/role: pgbouncer
                topologyKey: kubernetes.io/hostname
  dataSource:
    pgbackrest:
      stanza: db
      configuration:
        - secret:
            name: pgo-s3-creds
      global:
        repo1-path: /pg-operator-testing/crunchydata
      repo:
        name: repo1
        s3:
          bucket: pg-operator-testing
          endpoint: s3.amazonaws.com
          region: us-east-1
  backups:
    pgbackrest:
      repos:
        - name: repo1
          s3:
            bucket: pg-operator-testing
            endpoint: s3.amazonaws.com
            region: us-east-1
      image: docker.io/percona/percona-pgbackrest:2.56.0-1
      configuration:
        - secret:
            name: pgo-s3-creds
      global:
        repo1-path: /pg-operator-testing/percona" | kubectl apply -n percona-postgres-operator -f -
Two important sections to understand
1. dataSource: Bootstrapping the Cluster from Crunchy Backups
This section is responsible for restoring the database from Crunchy's backup.
It tells the Percona Operator:
- which backup repo to read from
- which S3 bucket/path stores the Crunchy backups
- which credentials to use
dataSource:
  pgbackrest:
    stanza: db
    configuration:
      - secret:
          name: pgo-s3-creds
    global:
      repo1-path: /pg-operator-testing/crunchydata
    repo:
      name: repo1
      s3:
        bucket: pg-operator-testing
        endpoint: s3.amazonaws.com
        region: us-east-1
2. backups: Target Cluster Backup Configuration
This defines the new backup storage for the Percona cluster. It must be separate from Crunchy's backup storage to avoid conflicts.

backups:
  pgbackrest:
    repos:
      - name: repo1
        s3:
          bucket: pg-operator-testing
          endpoint: s3.amazonaws.com
          region: us-east-1
    image: docker.io/percona/percona-pgbackrest:2.56.0-1
    configuration:
      - secret:
          name: pgo-s3-creds
    global:
      repo1-path: /pg-operator-testing/percona
As soon as the Custom Resource is applied, the cluster is bootstrapped from the backup storage defined in the dataSource section and then started. Once the cluster becomes ready, you can immediately create new backups. In this case, repo1 from the backups section will be used as the target repository.
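For example, an on-demand backup to the new repository can be requested with a PerconaPGBackup resource. This is a minimal sketch based on the operator's v2 API; verify the exact fields against the documentation for your operator version.

echo "apiVersion: pgv2.percona.com/v2
kind: PerconaPGBackup
metadata:
  name: target-cluster-first-backup
spec:
  pgCluster: target-cluster
  repoName: repo1" | kubectl apply -n percona-postgres-operator -f -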
Step 4: Wait for the Cluster
Example:
kubectl get pg -n percona-postgres-operator

NAME             ENDPOINT                               STATUS   POSTGRES   PGBOUNCER   AGE
target-cluster   target-cluster-pgbouncer.default.svc   ready    3
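If the cluster does not reach the ready state, the bootstrap restore Job is the first place to look. A hedged sketch (the exact Job name pattern may differ between versions):

# The dataSource restore runs as a Kubernetes Job; inspect it if the bootstrap stalls
kubectl get jobs -n percona-postgres-operator
kubectl logs job/target-cluster-pgbackrest-restore -n percona-postgres-operator   # Job name is illustrative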
As you can see, the cluster (target-cluster) was successfully restored from the latest full backup made on the source-cluster.
3. Migrate Data Using the PV of the Crunchy Data PostgreSQL Cluster
This migration option reuses the existing Persistent Volume from the Crunchy cluster, even after the cluster is deleted.
It's useful when:
- you want to avoid a full backup/restore
- your storage is very large
- you need to preserve the original data directory exactly
- you removed the cluster but kept the PV
Step 1: Configure the Source Cluster to Retain PVs
Modify Persistent Volume Retention
If you want to delete your source-cluster but keep the persistent volumes (PVs) that were used by the cluster, there is only one way to do it: the PV reclaim policy must be changed. For dynamically provisioned PersistentVolumes, the default reclaim policy is "Delete", which removes any data on a persistent volume once there are no more PersistentVolumeClaims (PVCs) associated with it.
To retain a persistent volume, you need to set the reclaim policy to Retain.
Let's check the list of PVs associated with the PVCs used by the source-cluster:
kubectl get pvc --selector=postgres-operator.crunchydata.com/cluster=source-cluster,postgres-operator.crunchydata.com/data=postgres -n cpgo

NAME                                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
source-cluster-instance1-5vxr-pgdata   Bound    pvc-d842c205-bbd1-4a0a-8fd0-301398a61e6f   10Gi       RWO            standard-rwo   <unset>                 164m
source-cluster-instance1-hm99-pgdata   Bound    pvc-a9891ba9-d2f7-4d12-a6ef-a3051e0f89db   10Gi       RWO            standard-rwo   <unset>                 164m
source-cluster-instance1-zdkd-pgdata   Bound    pvc-1b10bf46-56e2-4d25-868b-81e12a1fe120   10Gi       RWO            standard-rwo   <unset>
We advise using the PV of the primary pod. You can get it using the following command:

kubectl get pvc -n cpgo $(kubectl get pod -n cpgo -l postgres-operator.crunchydata.com/role=master -o jsonpath='{.items[0].spec.volumes[?(@.name=="postgres-data")].persistentVolumeClaim.claimName}') -o jsonpath='{.spec.volumeName}'

pvc-a9891ba9-d2f7-4d12-a6ef-a3051e0f89db
Finally, we can change the reclaim policy of the PV to Retain:

kubectl patch pv pvc-a9891ba9-d2f7-4d12-a6ef-a3051e0f89db -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
Verify the change:

kubectl get pv -n cpgo

NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                       STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-a9891ba9-d2f7-4d12-a6ef-a3051e0f89db   10Gi       RWO            Retain           Bound    cpgo/source-cluster-instance1-zdkd-pgdata   standard-rwo   <unset>                          166m
Step 2: Delete Your Existing Crunchy Data Cluster (source-cluster) and the Operator, if Needed

kubectl delete postgrescluster source-cluster -n cpgo
kubectl delete -k kustomize/install/default
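Keep in mind that once its PVC is deleted, a retained PV usually moves to the Released state and keeps a stale claimRef, which prevents any new PVC from binding to it. A minimal sketch of clearing it (check the PV status in your environment first):

# If the PV shows Released, clear the old claim reference so it becomes Available again
kubectl get pv pvc-a9891ba9-d2f7-4d12-a6ef-a3051e0f89db
kubectl patch pv pvc-a9891ba9-d2f7-4d12-a6ef-a3051e0f89db --type merge -p '{"spec":{"claimRef":null}}'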
Step 3: Deploy the Percona PG Operator and Create the Percona PostgreSQL Cluster With the Retained Volume (target-cluster)
Install the Percona PG Operator

kubectl create ns percona-postgres-operator

kubectl apply --server-side -f https://raw.githubusercontent.com/percona/percona-postgresql-operator/v2.8.0/deploy/bundle.yaml -n percona-postgres-operator
Create an AWS credentials secret (shared repository)
echo "apiVersion: v1
kind: Secret
metadata:
  name: pgo-s3-creds
stringData:
  s3.conf: |
    [global]
    repo1-s3-key=XXXXXXXXXXXXXXXXXXXX
    repo1-s3-key-secret=XXXXXXXXXXXXXXXXXXXX" | kubectl apply -n percona-postgres-operator -f -
You can now create the target-cluster using the retained volume. To do this, you need to provide a label that is unique to your persistent volume. Let's add it to the PV first.

kubectl label pv pvc-a9891ba9-d2f7-4d12-a6ef-a3051e0f89db pgo-postgres-cluster=percona-postgres-operator-cluster
Next, you need to reference this label in your CR.
Example:

dataVolumeClaimSpec:
  accessModes:
    - ReadWriteOnce
  selector:
    matchLabels:
      pgo-postgres-cluster: percona-postgres-operator-cluster
  resources:
    requests:
      storage: 10Gi
Now we’re able to create a target-cluster utilizing PV from source-cluster:
echo "apiVersion: pgv2.percona.com/v2
kind: PerconaPGCluster
metadata:
  name: target-cluster
  annotations:
    pgv2.percona.com/patroni-version: \"4\"
spec:
  crVersion: 2.8.0
  image: docker.io/percona/percona-distribution-postgresql:17.6-1
  imagePullPolicy: Always
  postgresVersion: 17
  instances:
    - name: instance1
      replicas: 1
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    postgres-operator.crunchydata.com/data: postgres
                topologyKey: kubernetes.io/hostname
      dataVolumeClaimSpec:
        accessModes:
          - ReadWriteOnce
        selector:
          matchLabels:
            pgo-postgres-cluster: percona-postgres-operator-cluster
        resources:
          requests:
            storage: 10Gi
  proxy:
    pgBouncer:
      replicas: 3
      image: docker.io/percona/percona-pgbouncer:1.24.1-1
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    postgres-operator.crunchydata.com/role: pgbouncer
                topologyKey: kubernetes.io/hostname
  backups:
    pgbackrest:
      repos:
        - name: repo1
          s3:
            bucket: pg-operator-testing
            endpoint: s3.amazonaws.com
            region: us-east-1
      image: docker.io/percona/percona-pgbackrest:2.56.0-1
      configuration:
        - secret:
            name: pgo-s3-creds
      global:
        repo1-path: /pg-operator-testing/percona" | kubectl apply -n percona-postgres-operator -f -
Step 4: Wait for the Cluster
Example:
kubectl get pg -n percona-postgres-operator

NAME             ENDPOINT                               STATUS   POSTGRES   PGBOUNCER   AGE
target-cluster   target-cluster-pgbouncer.default.svc   ready    1
The cluster (target-cluster) was successfully started.
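To double-check that the new cluster actually reused the retained volume, compare the VOLUME column of the new instance PVC with the PV you labeled earlier:

# The PVC created by the target-cluster should be bound to pvc-a9891ba9-d2f7-4d12-a6ef-a3051e0f89db
kubectl get pvc -n percona-postgres-operator -l postgres-operator.crunchydata.com/data=postgres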
Conclusion
This blog post demonstrated three ways to migrate from the Crunchy Data PostgreSQL Operator to the fully open source Percona PostgreSQL Operator:
- Standby cluster migration: Nearly zero downtime using streaming replication or a pgBackRest repo-based standby.
- Migration using backup and restore: Fast and simple; restore directly from Crunchy's S3 backups.
- Migration using existing Persistent Volumes: Ideal when you want to reuse storage without copying data.
All three approaches provide safe, predictable, and reversible migration paths.
And because Percona's operator, images, and tooling are 100% open source, you always retain full control, including the option to migrate back to the Crunchy Data PostgreSQL Operator if needed. The same approaches can be adapted for migrating to other open source operators (Zalando, StackGres, CloudNativePG), but that's a topic for a future article.
P.S. This blog post covers only basic deployment patterns and simplified configuration examples. If your environment is more complex, uses custom images, includes Crunchy's TDE or other enterprise features, or requires tailored migration steps, don't hesitate to contact Percona. Our team is happy to help you plan and execute a smooth, reliable migration.