Modern Java applications using Amazon Aurora typically struggle to take full advantage of its cloud-native capabilities. Although Aurora offers powerful features such as fast failover, AWS Identity and Access Management (IAM) authentication support, and AWS Secrets Manager integration, standard JDBC drivers weren't designed with cloud-specific features in mind. This isn't a limitation of open source drivers; they excel at what they were designed for and focus on database standards rather than cloud-native optimizations.
When Aurora fails over in seconds, standard JDBC drivers can take up to a minute to reconnect because of DNS propagation delays. While Aurora supports powerful features like IAM authentication and Secrets Manager integration, implementing these features with standard JDBC drivers requires complex custom code and error handling, complexity that the AWS Advanced JDBC Wrapper eliminates.
This blog post shows Java developers how to upgrade an existing application that uses the open source standard JDBC driver with a HikariCP connection pooler by adding the AWS Advanced JDBC Wrapper (JDBC Wrapper), unlocking the capabilities of Aurora and the AWS Cloud with minimal code changes. This approach preserves all the benefits of your existing PostgreSQL driver while adding cloud-native features. The post also demonstrates one of the JDBC Wrapper's powerful features: read/write splitting.
Solution overview
The JDBC Wrapper is an intelligent wrapper that enhances your existing JDBC driver with the capabilities of Aurora and the AWS Cloud. The wrapper can transform your standard PostgreSQL, MySQL, or MariaDB driver into a cloud-aware, production-ready solution. Developers can adopt the JDBC Wrapper to take advantage of the following capabilities:
- Fast failover beyond DNS limitations – The JDBC Wrapper maintains a real-time cache of your Aurora cluster topology and each database instance's primary or replica role through direct queries to Aurora. This bypasses DNS delays entirely, enabling fast connections to the new primary instance during failover.
- Seamless AWS authentication – Aurora supports IAM database authentication, but implementing it traditionally requires custom code to generate tokens, handle expiration, and manage renewals. The JDBC Wrapper automatically handles the entire IAM authentication lifecycle.
- Built-in Secrets Manager support – Secrets Manager integration retrieves database credentials automatically. Your application doesn't need to know the actual password; the driver handles everything behind the scenes.
- Federated authentication – Enable database access by using organizational credentials through Microsoft Active Directory Federation Services or Okta.
- Read/write splitting using connection control – You can maximize Aurora performance by routing write operations to the primary instance and distributing reads across Aurora replicas.
Note: The read/write splitting feature requires developers to explicitly call setReadOnly(true) on connections for read operations. The driver doesn't automatically parse queries to determine read versus write operations. When setReadOnly(true) is called, all subsequent statements executed on that connection are routed to replicas until setReadOnly(false) is called. This feature is explored in detail later in this post.
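To make that call pattern concrete, the following minimal sketch shows how an application toggles the readOnly flag around a read-heavy block. The class and method names are hypothetical, and the stub Connection (built with a dynamic proxy that records setReadOnly calls) stands in for a real wrapper connection so the pattern can be exercised without a database:

```java
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class ReadWriteDemo {
    // Routing follows the readOnly flag, not the SQL text: statements run
    // between setReadOnly(true) and setReadOnly(false) go to a replica.
    static void runReport(Connection conn) throws SQLException {
        conn.setReadOnly(true);   // subsequent statements route to a reader
        // ... execute SELECT queries for the report here ...
        conn.setReadOnly(false);  // back to the writer for any writes
    }

    // Stub Connection that records setReadOnly calls, standing in for a
    // real wrapper connection in this illustration.
    static Connection recordingConnection(List<Boolean> calls) {
        return (Connection) Proxy.newProxyInstance(
            Connection.class.getClassLoader(),
            new Class<?>[]{Connection.class},
            (proxy, method, args) -> {
                if (method.getName().equals("setReadOnly")) {
                    calls.add((Boolean) args[0]);
                }
                return null;
            });
    }

    public static void main(String[] args) throws SQLException {
        List<Boolean> calls = new ArrayList<>();
        runReport(recordingConnection(calls));
        System.out.println(calls); // [true, false]
    }
}
```

With a real wrapper connection, the same two calls are all the application code needs; the driver handles picking a reader instance.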
This post walks through a real-world transformation of a Java application using the JDBC Wrapper. You'll see how an existing Java application evolves through three progressive stages:
- Stage 1: Standard JDBC driver (baseline) – The application connects directly to the Aurora writer endpoint through the standard JDBC driver, with all operations using a single database instance and relying on DNS-based failover.
- Stage 2: JDBC Wrapper with fast failover – The application uses the JDBC Wrapper to maintain an internal topology cache of the Aurora cluster, enabling fast failover through direct instance discovery while still routing all operations through the writer endpoint.
- Stage 3: Read/write splitting – The application uses the JDBC Wrapper read/write splitting feature to send write operations to the Aurora writer instance and distribute read operations across Aurora reader instances, optimizing performance through automatic load balancing.
Figure 1: Architecture diagram showing the Stage 3 configuration with read/write splitting enabled
Prerequisites
You must have the following in place to implement this post's solution:
- An AWS account with permissions to create Aurora clusters
- A Linux-based machine with the following software installed to run the demo application that will connect to the Aurora cluster:
Infrastructure setup options
- Option A: Infrastructure as code with the AWS Cloud Development Kit (AWS CDK) (recommended)
- Option B: Manual setup
- Aurora cluster: Create an Aurora cluster with at least one read replica
- Security group: Open port 5432 from the machine where you cloned the repository.
Implementing the solution
Set up the development environment
In this section, you'll clone the sample repository and examine the Java order management application that uses HikariCP connection pooling with a standard PostgreSQL JDBC driver.
Clone the GitHub repository by using the following code:
The demo application simulates a real-world order management system that powers an online store where customers place orders, staff members update order statuses, and managers generate sales reports. This scenario demonstrates the challenge of mixed-database workloads: some write-heavy operations need immediate consistency, such as processing payments, but other read-heavy operations that use read replicas can tolerate slight delays, such as when generating sales reports.
The repository has the following structure:

Now that you have the demo application code locally and understand its structure as a typical Java order management system using HikariCP and standard PostgreSQL JDBC drivers, the next step is to create the Aurora database infrastructure that the application will connect to.
Deploy the database infrastructure
You'll create an Aurora cluster with two read replicas by using an automated script that uses infrastructure as code with the AWS CDK. The two read replicas are needed to demonstrate the AWS Advanced JDBC Wrapper's read/write splitting capabilities; they provide separate instances to route read operations to while the primary instance handles write operations. If you choose not to use the provided script, you can create the cluster manually through the AWS Management Console.
Override defaults with .env (Optional)
You can override the default settings by creating a .env file if you need to use existing AWS resources (like a specific VPC or security group) or want to customize resource names. If you don't want to use existing AWS infrastructure, you can skip this step and use the defaults.
Create an Aurora cluster
Run the setup script to create an Aurora cluster with two reader instances and the writer instance:
You will see the following output after successfully creating the cluster:
Set up application properties
The application properties file contains the database connection details that your Java application uses to connect to the Aurora cluster.
If you created the cluster by using the provided AWS CDK script (option A), the script automatically created and configured src/main/resources/application.properties with your Aurora connection details. As a result, you don't need to create or configure the application properties file because the script did this for you.
For manual setup (option B), create and configure the application properties file:
Set up the database password
If you created infrastructure by using the provided AWS CDK script (option A): The AWS CDK script automatically generates a secure password and stores it in Secrets Manager. Set up the database password environment variable by using the following commands:
If you used the manual setup (option B):
Run this command to set the password you specified when creating your Aurora cluster:
export DB_PASSWORD=
Now that you have successfully deployed your Aurora cluster with read replicas and configured the application properties and database password, the next step is to test the application in three progressive stages that demonstrate the AWS Advanced JDBC Wrapper's capabilities.
Configure the application with the JDBC Wrapper
This section covers three progressive stages of configuring your Java application with the JDBC Wrapper:
- Stage 1: Standard JDBC driver (baseline) – Run the application with the standard PostgreSQL JDBC driver.
- Stage 2: JDBC Wrapper with fast failover – Configure the JDBC Wrapper with fast failover capabilities.
- Stage 3: Read/write splitting – Enable read/write splitting to distribute reads across Aurora replicas.
Stage 1: Standard JDBC driver (baseline)
You'll run the application by using the standard PostgreSQL JDBC driver to establish a baseline before enhancing it with JDBC Wrapper capabilities. Execute the application to observe standard JDBC behavior:
./gradlew clean run
The following is the sample output:
Notice in the output that both write operations (creating orders) and read operations (getting order history) show the same connection URL pattern: → WRITER jdbc:postgresql://aurora-jdbc-demo.cluster-xxxxxxx. This demonstrates standard JDBC behavior where all database operations route to the Aurora writer endpoint, meaning both transactional operations and analytical queries compete for the same writer resources. This is exactly the problem that the AWS Advanced JDBC Wrapper's read/write splitting will solve in the next steps.
Now that you've established a baseline with the standard JDBC driver and observed how all operations route to the Aurora writer endpoint, the next step is to configure the application to use the JDBC Wrapper while maintaining the same functionality but adding cloud capabilities such as fast failover.
Stage 2: JDBC Wrapper with fast failover
Now, transform this application to use the JDBC Wrapper while maintaining the same functionality but adding capabilities such as fast failover. You'll use a script to automatically apply the necessary changes to upgrade your standard JDBC application with Aurora and AWS Cloud features. Before running the script, let's examine which changes are needed for the application to use the JDBC Wrapper.
The build.gradle (before configuring the JDBC Wrapper):
The following configuration shows the required changes to use JDBC Wrapper capabilities. The build.gradle (after configuring the JDBC Wrapper):
This change adds the AWS Advanced JDBC Wrapper library (software.amazon.jdbc:aws-advanced-jdbc-wrapper:2.5.6) alongside the existing PostgreSQL driver (org.postgresql:postgresql:42.6.0). The wrapper acts as an intermediary layer that intercepts database calls, adds specialized capabilities, then delegates actual SQL operations to the PostgreSQL driver.
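For reference, the updated dependencies block would look roughly like the following sketch; the coordinates shown for the driver and wrapper come from the text above, while the overall block shape is an assumption about the demo project's build.gradle:

```groovy
dependencies {
    // Existing open source PostgreSQL driver stays in place
    implementation 'org.postgresql:postgresql:42.6.0'
    // AWS Advanced JDBC Wrapper added alongside it
    implementation 'software.amazon.jdbc:aws-advanced-jdbc-wrapper:2.5.6'
}
```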
In addition to the preceding code changes, you also need to update the JDBC URL in the application.properties file, which contains the database connection settings. The following configuration illustrates the current configuration with standard JDBC.
Before configuring the JDBC Wrapper:
db.url=jdbc:postgresql://aurora-jdbc-demo.cluster-abc123.us-east-1.rds.amazonaws.com:5432/postgres
The following configuration shows the required change with the JDBC Wrapper.
After configuring the JDBC Wrapper:
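Using the same cluster endpoint as the earlier snippet, the updated property only gains the aws-wrapper: prefix:

db.url=jdbc:aws-wrapper:postgresql://aurora-jdbc-demo.cluster-abc123.us-east-1.rds.amazonaws.com:5432/postgres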
The aws-wrapper: prefix tells the driver manager to use JDBC Wrapper capabilities.
The DatabaseConfig.java file updates the connection configuration. The following code illustrates the current configuration with standard JDBC.
Before configuring the JDBC Wrapper:
The following code shows the required change with the JDBC Wrapper.
After configuring the JDBC Wrapper:
The preceding code switches from a direct JDBC URL configuration to using the JDBC Wrapper. This enables fast failover capabilities and supports advanced features like read/write splitting and IAM authentication. While adding these cloud capabilities, the wrapper still delegates all actual database operations to the underlying PostgreSQL driver. This gives you Aurora's cloud features without changing your application's business logic.
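As an illustration of the switch, this minimal sketch (class and method names are hypothetical, not taken from the demo repository) derives the wrapper URL and the Stage 2 plugin properties that a configuration class would hand to the connection pool:

```java
import java.util.Properties;

public class WrapperConfigSketch {
    // Prepend the aws-wrapper: scheme so the driver manager selects the
    // wrapper, which then delegates to the underlying PostgreSQL driver.
    static String toWrapperUrl(String standardUrl) {
        return standardUrl.replaceFirst("^jdbc:", "jdbc:aws-wrapper:");
    }

    // Stage 2 enables only the failover plugin.
    static Properties wrapperProperties() {
        Properties props = new Properties();
        props.setProperty("wrapperPlugins", "failover");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(toWrapperUrl(
            "jdbc:postgresql://aurora-jdbc-demo.cluster-abc123.us-east-1.rds.amazonaws.com:5432/postgres"));
    }
}
```

The business logic never sees the URL rewrite; only the configuration layer changes.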
Run the following script to apply all of the preceding changes and then execute the application:
./demo.sh aws-jdbc-wrapper
The preceding script makes the JDBC Wrapper changes and runs the Java application. You will see the same output as before, but now it includes JDBC Wrapper capabilities:
Notice that the connection pool name has changed from StandardPostgresPool to AWSJDBCPool, and the log shows AWS JDBC Wrapper connection pool initialized, confirming that the application is now using the JDBC Wrapper. The connection type shows software.amazon.jdbc.wrapper.ConnectionWrapper wrapping the underlying org.postgresql.jdbc.PgConnection, demonstrating that the wrapper is intercepting database calls while delegating to the PostgreSQL driver.
Operations still use the Aurora writer endpoint, but now your application has fast failover capabilities without you having made any business logic changes.
Now that you have successfully configured the application to use the JDBC Wrapper with fast failover capabilities while keeping all operations on the Aurora writer endpoint, the next step is to configure read/write splitting to distribute read operations across Aurora replicas and optimize performance.
Stage 3: Enable read/write splitting
Now let's implement the JDBC Wrapper read/write capability by enabling connection routing. With connection routing, writes go to the primary instance and reads are distributed across Aurora replicas based on reader selection strategies such as roundRobin and fastestResponse. For detailed configuration information, see Reader Selection Strategies.
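For example, the strategy can be selected through a connection property; the property name shown here is based on the wrapper's documentation, so verify it against the wrapper version you use:

targetProps.setProperty("wrapperPlugins", "readWriteSplitting,failover");
targetProps.setProperty("readerHostSelectorStrategy", "roundRobin");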
Performance considerations with HikariCP and the JDBC Wrapper
The demo application uses external HikariCP connection pooling to demonstrate several use cases. However, for production applications with frequent read/write operations, using the JDBC Wrapper's internal connection pooling is recommended. The JDBC Wrapper currently uses HikariCP to create and maintain its internal connection pools.
For a comprehensive example with performance testing using internal and external pools, compared to no read/write splitting, see the ReadWriteSplittingSample.java example, which demonstrates three approaches.
Spring Boot/Framework considerations
If you're using Spring Boot/Framework, be aware of performance implications when using the read/write splitting feature. For example, the @Transactional(readOnly = true) annotation can cause significant performance degradation because of constant switching between reader and writer connections. For detailed information about these considerations and recommended workarounds, see Limitations when using Spring Boot/Framework.
Changes needed to use read/write splitting
Let's review the changes needed to use read/write splitting. The DatabaseConfig.java file adds the readWriteSplitting plugin.
The following code shows the existing JDBC Wrapper configuration with failover:
targetProps.setProperty("wrapperPlugins", "failover");
The updated code to enable the use of read/write splitting is:
targetProps.setProperty("wrapperPlugins", "readWriteSplitting,failover");
The OrderDAO.java file marks connections as read only to enable routing to reader instances:
The JDBC Wrapper now routes write operations to the Aurora writer endpoint (the primary instance) and read operations to Aurora reader endpoints (the replica instances). The read/write splitting plugin provides the following benefits:
- Simplified connection management – You don't need to manage separate connection pools for read and write connections within your application. Just by calling the Connection#setReadOnly() method in the application, the JDBC Wrapper automatically manages the connections.
- Flexible reader selection strategies – Choose from several reader selection strategies like roundRobin, fastestResponse, or least connections to optimize performance based on your specific application requirements and workload patterns.
- Reduced writer load – Analytics queries no longer compete with transactions.
- Better resource utilization – Read traffic distributes across multiple replicas, allowing each Aurora instance to serve its optimal workload without requiring application logic changes.
Cleanup
To avoid incurring future charges, delete the resources created during this walkthrough.
If you used the AWS CDK script (option A):
Run the following commands to delete all AWS resources:
If you created resources manually (option B): Delete the Aurora cluster and any associated resources (security groups, DB subnet groups) using the same method you used to create them, either through the AWS Management Console or the AWS CLI.
Conclusion
This post showed how you can enhance your Java application with the cloud-native capabilities of Aurora by using the JDBC Wrapper. The simple code changes shared in this post can transform a standard JDBC application to use fast failover, read/write splitting, IAM authentication, Secrets Manager integration, and federated authentication.
About the authors
