Frictionless Local Postgres with Docker Compose

PostgreSQL is a powerful open source database that runs everywhere from a developer laptop to the major cloud providers. I have been using PostgreSQL since 1998, and I am always looking for ways to optimize my dev workstation configuration.

pgAdmin 4 is the graphical interface for working with Postgres. It is a single-page web application written in JavaScript that talks to a backend written in Python. Using docker-compose it is possible to run both pgAdmin and Postgres inside Docker, as shown in the diagram below.

final configuration: apps, containers, virtual machines on a developer workstation

Because pgAdmin runs as a server, it asks for a username/password to access the UI. After logging in to pgAdmin you have to set up a connection to the database that you want to administer. The login and connection setup are annoying when developing locally.

Setting up pgAdmin with pre-configured connectivity and passwords is tricky. It requires an in-depth understanding of Docker, docker-compose, shell scripting, and how pgAdmin works. I put together an example implementation, with all the gory technical details explained in the repository.
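To give a feel for the shape of such a setup, here is a minimal sketch of a compose file running both containers side by side. It assumes the official postgres image and the dpage/pgadmin4 image; the service names, ports, and credentials are placeholders for local use only (this sketch omits the pre-configured connection, which is the tricky part covered in the repository):

```yaml
version: "3.8"
services:
  postgres:
    image: postgres:12
    environment:
      POSTGRES_PASSWORD: devpassword   # local-only credential
    ports:
      - "5432:5432"                    # expose Postgres to apps on the laptop

  pgadmin:
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: dev@example.com   # pgAdmin UI login
      PGADMIN_DEFAULT_PASSWORD: devpassword
    ports:
      - "8080:80"                      # pgAdmin UI at http://localhost:8080
    depends_on:
      - postgres
```

With this file, `docker-compose up` brings up both containers, and from inside the pgAdmin container the database is reachable at host `postgres`, port 5432.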

Minikube vs. Docker Desktop for Local Development

Docker Desktop and minikube are popular options for local development. Do you need both installed, or can you save some RAM and run only one of them? Below are some questions to help you decide which one to use in your situation.

Which version of Kubernetes are you running in production?

Kubernetes (k8s) is on a three-month release cycle with an N-2 support policy. For example, in July 2020 k8s 1.18 is the most recent GA release, and 1.17 and 1.16 are currently supported. When k8s 1.19 is released, 1.16 will fall out of support. Many organizations run the oldest supported k8s to minimize risk to production systems.


Docker Desktop includes a hard-coded version of Kubernetes. For example, at the time of writing the most recent Docker Desktop includes k8s 1.16.5, since that is the oldest supported version of k8s.

About dialog box on MacOS

Currently it is not possible to change the version of k8s in Docker Desktop. If you need an older version of k8s, you will have to install an older version of Docker Desktop. Essentially, Docker is shipping the “most stable, widely used” version of k8s, since many organizations run the oldest GA version of k8s in production.

Minikube supports the most recent GA version of k8s plus the previous six minor versions. You can pass minikube a command line argument to launch a specific version of k8s. For example, minikube start --kubernetes-version=v1.18.3 will launch k8s 1.18.3.

How do you build your container images?

If you use a Dockerfile during development, you will need Docker Desktop installed; otherwise you won’t be able to run docker build to create a container image on your laptop.
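As a concrete illustration, a typical workflow looks like the sketch below. The base image, jar path, and image tag are placeholders, and docker build requires a running Docker daemon:

```dockerfile
# Dockerfile -- minimal sketch for a JVM service; paths are placeholders
FROM openjdk:11-jre-slim
COPY target/app.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
```

Built locally with `docker build -t myapp:dev .` — this is exactly the step that fails without a local Docker daemon.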

If you are building your container images using tools such as Jib that don’t require a local Docker daemon, you can run minikube without Docker Desktop.

Do you need a local container registry?

Minikube does not ship with a container registry. By default it will try to resolve container images from Docker Hub and other public registries. If you need to build containers on your laptop and you want minikube to pull them from your laptop rather than a remote container registry, you will benefit from running Docker Desktop, since it includes a local container registry.
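One way to get a local registry, sketched below, is to run the open source registry image under Docker. The image names and port are placeholders, and all of these commands assume a running Docker daemon:

```shell
# start a local registry on port 5000
docker run -d -p 5000:5000 --name registry registry:2

# tag a locally built image so it resolves to the local registry, then push it
docker tag myapp:dev localhost:5000/myapp:dev
docker push localhost:5000/myapp:dev
```

Pods can then reference the image as localhost:5000/myapp:dev (registry reachability from the minikube VM depends on its network configuration).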

How do you manage local development dependencies?

If your application depends on commonly used OSS databases, message queues, or caches, you will have to decide how to set up these dependencies on your laptop. There are three reasonable choices.

  • Install on laptop
  • Run with docker-compose
  • Run on minikube

Installing directly on a laptop is time consuming and error prone, and it only gets worse if you work on multiple apps or services that need different versions of PostgreSQL or MySQL.

You can run dependencies using docker-compose, which has a nice, developer-friendly workflow:

  • checkout code
  • docker-compose up
  • write code / run tests on laptop that use services running in Docker
  • docker-compose down

With docker-compose we run third party dependencies in a simple, repeatable manner.

Minikube can also be used to run third party dependencies such as MySQL and other tools. To do so you will have to write k8s deployment manifests and expose the services using a NodePort on the minikube VM’s IP address.
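A minimal sketch of what that manifest might look like for MySQL, assuming the official mysql image; the names, password, and NodePort number are placeholders:

```yaml
# mysql.yaml -- illustrative only; apply with: kubectl apply -f mysql.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: devpassword      # local-only credential
          ports:
            - containerPort: 3306
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  type: NodePort                      # expose on the minikube VM's IP
  selector:
    app: mysql
  ports:
    - port: 3306
      nodePort: 30306                 # reachable at $(minikube ip):30306
```

Your tests on the laptop would then connect to the address printed by minikube ip, on port 30306.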

If you want to use docker-compose for dependencies, you will need Docker Desktop; otherwise you can get away with minikube.

Are you using JUnit with Testcontainers?

If you are using the Testcontainers project for your automated testing, you will need to run Docker Desktop, since Testcontainers does not currently support Kubernetes.
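For context, this is the kind of test that ties you to a Docker daemon — a minimal JUnit 5 sketch using the Testcontainers PostgreSQL module. It assumes the org.testcontainers:postgresql and JUnit Jupiter dependencies plus a PostgreSQL JDBC driver on the classpath, and it will only run where a Docker daemon is available:

```java
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

import java.sql.Connection;
import java.sql.DriverManager;

import static org.junit.jupiter.api.Assertions.assertTrue;

@Testcontainers
class PostgresIntegrationTest {

    // Starts a throwaway Postgres container for the duration of the test class;
    // Testcontainers talks to the local Docker daemon to do this.
    @Container
    private static final PostgreSQLContainer<?> postgres =
            new PostgreSQLContainer<>("postgres:12");

    @Test
    void canConnect() throws Exception {
        // Connection details are generated by the container at startup
        try (Connection conn = DriverManager.getConnection(
                postgres.getJdbcUrl(), postgres.getUsername(), postgres.getPassword())) {
            assertTrue(conn.isValid(2));
        }
    }
}
```

The appeal is that the database lifecycle is owned by the test itself, but the cost is the hard dependency on a Docker daemon on every machine that runs the suite.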

Which Operating System are you using for local development?

Docker Desktop is available on macOS and Windows, and it includes both k8s and Docker. On Linux, the Docker distribution only includes Docker, so you will have to install k8s from another source.

Minikube is available on macOS, Windows, and Linux. If you are looking for the same developer experience across all three platforms, then minikube is a good choice.


Use Docker Desktop if

  • You need to build container images from a Dockerfile
  • You need a local container registry
  • You are managing your local development environment with docker-compose
  • You are using Testcontainers with JUnit
  • The version of Kubernetes included in Docker Desktop is the version you want to use
  • Your developers are only on macOS and Windows

Use minikube if

  • You need to pick a specific version of Kubernetes to work with
  • You don’t need a local container registry
  • You are not using Testcontainers with JUnit
  • You have developers using Linux, macOS, and Windows

Based on your answers to the questions above, it is quite possible that you will need to run both.

Consumer Driven Contracts and Your Microservice Architecture

Video of my talk Consumer Driven Contracts and Your Microservice Architecture, co-delivered with my friend Marcin Grzejszczak at SpringOne Platform 2017.


Consumer driven contracts (CDC) are like TDD applied to the API. They are especially important in the world of microservices, and since they are driven by consumers, they are much more user friendly. Microservices are really cool, but most people do not consider the many potential obstacles that need to be tackled. Then, instead of frequent, fully automated deploys via a delivery pipeline, you might end up in an asylum due to frequent mental breakdowns caused by production disasters.

We will write a system using the CDC approach together with Spring Boot and Spring Cloud Contract Verifier. We’ll show you how easy it is to write applications with a consumer driven API, allowing developers to write better quality software faster.

Consumer Driven Contracts with Spring Cloud Contract

Jan 26 2017 Talk @ Toronto Java User Group

A video of my talk Consumer Driven Contracts with Spring Cloud Contract on Jan 26 2017 at the Toronto Java User Group.


Changing a published API over time is hard due to backward compatibility concerns. This is even more of an issue in a microservice architecture with hundreds of microservices that each publish an API. Consumer Driven Contracts is an effective service evolution pattern. In this talk we explain the ideas behind Consumer Driven Contracts and show how to implement them easily with Spring Cloud Contract.

Spring Cloud Contract is an umbrella project holding solutions that help users in successfully implementing the Consumer Driven Contracts approach. Currently Spring Cloud Contract consists of the Spring Cloud Contract Verifier project.

Spring Cloud Contract Verifier is a tool that enables Consumer Driven Contract (CDC) development of JVM-based applications. It ships with a Contract Definition Language (DSL) written in Groovy. Starting with version 1.1.0 you can define your own way of defining contracts – the only thing you have to provide is a converter. Contract definitions are used to produce the following resources:

  • JSON stub definitions to be used by WireMock (HTTP Server Stub) when doing integration testing on the client code (client tests). Test code must still be written by hand; test data is produced by Spring Cloud Contract Verifier. Starting with version 1.1.0 you can provide your own implementation of the HTTP Server Stub.
  • Messaging routes, if you’re using messaging. We integrate with Spring Integration, Spring Cloud Stream, and Apache Camel. You can, however, set up your own integrations if you want to.
  • Acceptance tests (by default in JUnit or Spock) used to verify if the server-side implementation of the API is compliant with the contract (server tests).
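To make the Groovy DSL concrete, here is a minimal sketch of a contract; the endpoint, payload fields, and values are illustrative, not from a real project:

```groovy
import org.springframework.cloud.contract.spec.Contract

Contract.make {
    description "should mark a small loan amount as not fraudulent"
    request {
        method 'PUT'
        url '/fraudcheck'
        body([loanAmount: 100])                        // illustrative payload
        headers { header('Content-Type', 'application/json') }
    }
    response {
        status 200
        body([fraudCheckStatus: "OK"])                 // expected server answer
        headers { header('Content-Type', 'application/json') }
    }
}
```

From a definition like this the tooling generates both the WireMock stub the consumer tests run against and the server-side test that holds the producer to the same contract.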

Three Tricks to Make Your Application More Reliable

August 29 2013 Talk @ Toronto Java Users Group

Video of my talk Three Tricks to Make Your Application More Reliable on August 29 2013 at the Toronto Java Users Group


  • Protecting your system from others using circuit breakers
  • Automating troubleshooting with environment self-diagnostics
  • Making your log files highly monitorable and actionable with structured logging

All the tricks will be demonstrated with working code that you will be able to access from a public repo.