
Gunter Rotsaert

DZone Core

Systems Engineer at TriOpSys

NL

Joined Dec 2017

https://www.mydeveloperplanet.com

Stats

Reputation: 3878
Pageviews: 2.0M
Articles: 75
Comments: 43

Articles

Implement RAG Using Weaviate, LangChain4j, and LocalAI
In this blog, you will learn how to implement Retrieval Augmented Generation (RAG) using Weaviate, LangChain4j and LocalAI.
March 21, 2024
· 2,808 Views · 3 Likes
Semantic Search With Weaviate Vector Database
Learn how to implement a vector similarity search using Weaviate, a vector database.
February 28, 2024
· 2,290 Views · 1 Like
How To Embed Documents for Semantic Search
Explore embedding documents to be used for a semantic search. Follow examples to learn how embedding influences search results and how to improve the results.
February 14, 2024
· 3,410 Views · 2 Likes
LangChain4j: Chat With Documents
In this blog, you will take a closer look at how you can chat with your documents using LangChain4j and LocalAI and learn some basics about prompt engineering.
January 31, 2024
· 1,577 Views · 1 Like
How To Use LangChain4j With LocalAI
Learn how to integrate Large Language Model (LLM) capabilities into your Java application and how to connect it to LocalAI.
January 18, 2024
· 4,144 Views · 3 Likes
Running LLMs Locally: A Step-by-Step Guide
In this article, take a closer look at LocalAI, an open-source alternative to OpenAI that allows you to run LLMs on your local machine.
December 22, 2023
· 7,493 Views · 11 Likes
How to Monitor a Spring Boot App With Ostara
In this blog, you will learn how to monitor a Spring Boot application using Ostara. Ostara is a desktop application that monitors and manages your application. Enjoy!
December 6, 2023
· 3,581 Views · 5 Likes
What’s New Between Java 17 and Java 21?
In this blog, some of the changes between Java 17 and Java 21 are highlighted, mainly by means of examples. Take a look at the changes since the last LTS release.
November 22, 2023
· 10,869 Views · 10 Likes
Devoxx Belgium 2023 Takeaways
In October 2023, I visited Devoxx Belgium, and again, it was an awesome event! In this blog, you can find my takeaways from Devoxx Belgium 2023!
October 25, 2023
· 4,846 Views · 7 Likes
How To Generate Spring Properties Documentation
Are you struggling to keep the documentation of your Spring configuration properties in line with the code? Learn how to solve the issue in this article.
October 11, 2023
· 6,864 Views · 9 Likes
Podman Desktop Review
In this blog, you will take a closer look at Podman Desktop, a graphical tool for working with containers. Enjoy!
September 26, 2023
· 7,165 Views · 4 Likes
Spring Boot Configuration Properties Explained
Do you get lost in the configuration annotations of Spring Boot? Take a look at the configuration annotations, what they mean, and how you can apply them in your code.
September 12, 2023
· 2,831 Views · 4 Likes
Podman Compose vs Docker Compose
Take a look at Podman Compose, exploring how to use Compose files according to the Compose Spec in combination with a Podman backend.
August 2, 2023
· 10,757 Views · 1 Like
Podman Equivalent for Docker Compose
Learn how to use Podman with the built-in equivalent for Docker Compose, Podman kube play, and how to deploy your Podman pod to a local Minikube cluster.
July 13, 2023
· 7,115 Views · 1 Like
Is Podman a Drop-In Replacement for Docker?
In this blog, you will start with a production-ready Dockerfile and execute the Podman commands just like you would do when using Docker.
May 31, 2023
· 7,983 Views · 12 Likes
How To Backup and Restore a PostgreSQL Database
In this blog, the author will take the reader through the step-by-step process of how to back up and restore a PostgreSQL database.
May 16, 2023
· 2,805 Views · 3 Likes
What Is JHipster?
In this blog, you will learn more about JHipster and how it can help you with developing modern web applications. Enjoy!
May 2, 2023
· 5,724 Views · 9 Likes
How To Create a GraalVM Docker Image
Learn how to create a Docker image for your GraalVM native image and find out that it is a bit trickier than what you are used to when creating Docker images.
April 5, 2023
· 9,977 Views · 7 Likes
How To Build a Spring Boot GraalVM Image
In this article, readers will use a tutorial to learn how to build a Spring Boot GraalVM image and use reflection, including guided code block examples.
March 21, 2023
· 5,520 Views · 1 Like
How To Use Ansible Roles
In this article, readers will use a tutorial to learn the basics of Ansible roles, experiment with their features, and learn how to use them in playbooks.
March 7, 2023
· 2,756 Views · 1 Like
How To Build an SBOM
SBOMs are becoming increasingly important in the software supply chain. In this blog, you will learn what an SBOM is and how to build one in an automated way.
February 22, 2023
· 4,940 Views · 2 Likes
How To Setup Spring Boot With Vue.js Frontend
In this article, readers will learn how to create the project structure for a basic Spring Boot backend with a Quasar frontend application, along with code.
February 8, 2023
· 5,642 Views · 2 Likes
How To Check Docker Images for Vulnerabilities
Regularly checking for vulnerabilities in your pipeline is very important. One of the steps to execute is to perform a vulnerability scan of your Docker images. In this blog, you will learn how to perform the vulnerability scan, how to fix the vulnerabilities, and how to add it to your Jenkins pipeline. Enjoy!

1. Introduction

In a previous blog from a few years ago, it was described how you could scan your Docker images for vulnerabilities. A follow-up blog showed how to add the scan to a Jenkins pipeline. However, Anchore Engine, which was used in the previous blogs, is no longer supported. An alternative solution is available with grype, which is also provided by Anchore. In this blog, you will take a closer look at grype: how it works, how you can fix the issues, and how you can add it to your Jenkins pipeline.

But first of all, why check for vulnerabilities? Nowadays, you have to stay up to date with the latest security fixes. Many security vulnerabilities are publicly known and can therefore be exploited quite easily. It is therefore a must-have to fix security vulnerabilities as fast as possible in order to minimize your attack surface. But how do you keep up with this? You are mainly focused on business and do not want a full-time job fixing security vulnerabilities. That is why it is important to scan your application and your Docker images automatically. Grype can help with scanning your Docker images. Grype checks for operating system vulnerabilities, but also for vulnerabilities in language-specific packages such as Java JAR files, and reports them. This way, you have a great tool that automates the security checks for you. Do note that grype is not limited to scanning Docker images: it can also scan files and directories and can therefore be used for scanning your sources.

In this blog, you will create a vulnerable Docker image containing a Spring Boot application. You will install and use grype in order to scan the image and fix the vulnerabilities. In the end, you will learn how to add the scan to your Jenkins pipeline. The sources used in this blog can be found on GitHub.

2. Prerequisites

The prerequisites needed for this blog are:
- Basic Linux knowledge
- Basic Docker knowledge
- Basic Java and Spring Boot knowledge

3. Vulnerable Application

Navigate to Spring Initializr and choose a Maven build, Java 17, Spring Boot 2.7.6, and the Spring Web dependency. This will not be a very vulnerable application because Spring already ensures that you use the latest Spring Boot version. Therefore, change the Spring Boot version to 2.7.0. The Spring Boot application can be built with the following command, which will create the jar file for you:

Shell
$ mvn clean verify

You are going to scan a Docker image, so a Dockerfile needs to be created. You will use a very basic Dockerfile which just contains the minimum instructions needed to create the image. If you want to create production-ready Docker images, do read the posts Docker Best Practices and Spring Boot Docker Best Practices.

Dockerfile
FROM eclipse-temurin:17.0.1_12-jre-alpine
WORKDIR /opt/app
ARG JAR_FILE
COPY target/${JAR_FILE} app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]

At the time of writing, the latest eclipse-temurin base image for Java 17 is version 17.0.5_8. Again, use an older one in order to make it vulnerable.
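As an aside, the image can also be built directly with the Docker CLI instead of the Maven plugin introduced below. A command along these lines should work; the jar file name is an assumption derived from the artifact coordinates (com.mydeveloperplanet:mygrypeplanet:0.0.1-SNAPSHOT) shown later in the dependency tree:

Shell
# Build the jar first, then pass its name as the JAR_FILE build argument
$ mvn clean verify
$ docker build --build-arg JAR_FILE=mygrypeplanet-0.0.1-SNAPSHOT.jar -t mydeveloperplanet/mygrypeplanet:0.0.1-SNAPSHOT .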
For building the Docker image, a fork of the dockerfile-maven-plugin of Spotify will be used. The following snippet is therefore added to the pom file.

XML
<plugin>
    <groupId>com.xenoamess.docker</groupId>
    <artifactId>dockerfile-maven-plugin</artifactId>
    <version>1.4.25</version>
    <configuration>
        <repository>mydeveloperplanet/mygrypeplanet</repository>
        <tag>${project.version}</tag>
        <buildArgs>
            <JAR_FILE>${project.build.finalName}.jar</JAR_FILE>
        </buildArgs>
    </configuration>
</plugin>

The advantage of using this plugin is that you can easily reuse the configuration. Creating the Docker image can be done by a single Maven command. Building the Docker image can be done by invoking the following command:

Shell
$ mvn dockerfile:build

You are now all set up to get started with grype.

4. Installation

Installation of grype can be done by executing the following script:

Shell
$ curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin

Verify the installation by executing the following command:

Shell
$ grype version
Application:          grype
Version:              0.54.0
Syft Version:         v0.63.0
BuildDate:            2022-12-13T15:02:51Z
GitCommit:            93499eec7e3ce2704755e9f51457181b06b519c5
GitDescription:       v0.54.0
Platform:             linux/amd64
GoVersion:            go1.18.8
Compiler:             gc
Supported DB Schema:  5

5. Scan Image

Scanning the Docker image is done by calling grype followed by docker:, indicating that you want to scan an image from the Docker daemon, the image, and the tag:

Shell
$ grype docker:mydeveloperplanet/mygrypeplanet:0.0.1-SNAPSHOT
Vulnerability DB        [updated]
Loaded image
Parsed image
Cataloged packages      [50 packages]
Scanned image           [42 vulnerabilities]
NAME              INSTALLED  FIXED-IN   TYPE          VULNERABILITY        SEVERITY
busybox           1.34.1-r3  1.34.1-r5  apk           CVE-2022-28391       High
jackson-databind  2.13.3                java-archive  CVE-2022-42003       High
jackson-databind  2.13.3                java-archive  CVE-2022-42004       High
jackson-databind  2.13.3     2.13.4     java-archive  GHSA-rgv9-q543-rqg4  High
jackson-databind  2.13.3     2.13.4.1   java-archive  GHSA-jjjh-jjxp-wpff  High
java              17.0.1+12             binary        CVE-2022-21248       Low
java              17.0.1+12             binary        CVE-2022-21277       Medium
java              17.0.1+12             binary        CVE-2022-21282       Medium
java              17.0.1+12             binary        CVE-2022-21283       Medium
java              17.0.1+12             binary        CVE-2022-21291       Medium
java              17.0.1+12             binary        CVE-2022-21293       Medium
java              17.0.1+12             binary        CVE-2022-21294       Medium
java              17.0.1+12             binary        CVE-2022-21296       Medium
java              17.0.1+12             binary        CVE-2022-21299       Medium
java              17.0.1+12             binary        CVE-2022-21305       Medium
java              17.0.1+12             binary        CVE-2022-21340       Medium
java              17.0.1+12             binary        CVE-2022-21341       Medium
java              17.0.1+12             binary        CVE-2022-21360       Medium
java              17.0.1+12             binary        CVE-2022-21365       Medium
java              17.0.1+12             binary        CVE-2022-21366       Medium
libcrypto1.1      1.1.1l-r7             apk           CVE-2021-4160        Medium
libcrypto1.1      1.1.1l-r7  1.1.1n-r0  apk           CVE-2022-0778        High
libcrypto1.1      1.1.1l-r7  1.1.1q-r0  apk           CVE-2022-2097        Medium
libretls          3.3.4-r2   3.3.4-r3   apk           CVE-2022-0778        High
libssl1.1         1.1.1l-r7             apk           CVE-2021-4160        Medium
libssl1.1         1.1.1l-r7  1.1.1n-r0  apk           CVE-2022-0778        High
libssl1.1         1.1.1l-r7  1.1.1q-r0  apk           CVE-2022-2097        Medium
snakeyaml         1.30                  java-archive  GHSA-mjmj-j48q-9wg2  High
snakeyaml         1.30       1.31       java-archive  GHSA-3mc7-4q67-w48m  High
snakeyaml         1.30       1.31       java-archive  GHSA-98wm-3w3q-mw94  Medium
snakeyaml         1.30       1.31       java-archive  GHSA-c4r9-r8fh-9vj2  Medium
snakeyaml         1.30       1.31       java-archive  GHSA-hhhw-99gj-p3c3  Medium
snakeyaml         1.30       1.32       java-archive  GHSA-9w3m-gqgf-c4p9  Medium
snakeyaml         1.30       1.32       java-archive  GHSA-w37g-rhq8-7m4j  Medium
spring-core       5.3.20                java-archive  CVE-2016-1000027     Critical
ssl_client        1.34.1-r3  1.34.1-r5  apk           CVE-2022-28391       High
zlib              1.2.11-r3  1.2.12-r0  apk           CVE-2018-25032       High
zlib              1.2.11-r3  1.2.12-r2  apk           CVE-2022-37434       Critical

What does this output tell you?
- NAME: The name of the vulnerable package
- INSTALLED: Which version is installed
- FIXED-IN: The version in which the vulnerability is fixed
- TYPE: The type of dependency, e.g., binary for the JDK
- VULNERABILITY: The identifier of the vulnerability; with this identifier, you are able to get more information about the vulnerability in the CVE database
- SEVERITY: Speaks for itself and can be negligible, low, medium, high, or critical

As you take a closer look at the output, you will notice that not every vulnerability has a confirmed fix. So what do you do in that case? Grype provides an option to show only the vulnerabilities with a confirmed fix. Adding the --only-fixed flag will do the trick.

Shell
$ grype docker:mydeveloperplanet/mygrypeplanet:0.0.1-SNAPSHOT --only-fixed
Vulnerability DB        [no update available]
Loaded image
Parsed image
Cataloged packages      [50 packages]
Scanned image           [42 vulnerabilities]
NAME              INSTALLED  FIXED-IN   TYPE          VULNERABILITY        SEVERITY
busybox           1.34.1-r3  1.34.1-r5  apk           CVE-2022-28391       High
jackson-databind  2.13.3     2.13.4     java-archive  GHSA-rgv9-q543-rqg4  High
jackson-databind  2.13.3     2.13.4.1   java-archive  GHSA-jjjh-jjxp-wpff  High
libcrypto1.1      1.1.1l-r7  1.1.1n-r0  apk           CVE-2022-0778        High
libcrypto1.1      1.1.1l-r7  1.1.1q-r0  apk           CVE-2022-2097        Medium
libretls          3.3.4-r2   3.3.4-r3   apk           CVE-2022-0778        High
libssl1.1         1.1.1l-r7  1.1.1n-r0  apk           CVE-2022-0778        High
libssl1.1         1.1.1l-r7  1.1.1q-r0  apk           CVE-2022-2097        Medium
snakeyaml         1.30       1.31       java-archive  GHSA-3mc7-4q67-w48m  High
snakeyaml         1.30       1.31       java-archive  GHSA-98wm-3w3q-mw94  Medium
snakeyaml         1.30       1.31       java-archive  GHSA-c4r9-r8fh-9vj2  Medium
snakeyaml         1.30       1.31       java-archive  GHSA-hhhw-99gj-p3c3  Medium
snakeyaml         1.30       1.32       java-archive  GHSA-9w3m-gqgf-c4p9  Medium
snakeyaml         1.30       1.32       java-archive  GHSA-w37g-rhq8-7m4j  Medium
ssl_client        1.34.1-r3  1.34.1-r5  apk           CVE-2022-28391       High
zlib              1.2.11-r3  1.2.12-r0  apk           CVE-2018-25032       High
zlib              1.2.11-r3  1.2.12-r2  apk           CVE-2022-37434       Critical

Note that the vulnerabilities for the Java JDK have disappeared, although a more recent update for the Java 17 JDK exists. However, this might not be a big issue, because the other (non-java-archive) vulnerabilities show you that the base image is outdated.

6. Fix Vulnerabilities

Fixing the vulnerabilities is quite easy in this case. First of all, you need to update the Docker base image. Change the first line of the Dockerfile:

Dockerfile
FROM eclipse-temurin:17.0.1_12-jre-alpine

into:

Dockerfile
FROM eclipse-temurin:17.0.5_8-jre-alpine

Build the image and run the scan again:

Shell
$ mvn dockerfile:build
...
$ grype docker:mydeveloperplanet/mygrypeplanet:0.0.1-SNAPSHOT --only-fixed
Vulnerability DB        [no update available]
Loaded image
Parsed image
Cataloged packages      [62 packages]
Scanned image           [14 vulnerabilities]
NAME              INSTALLED  FIXED-IN   TYPE          VULNERABILITY        SEVERITY
jackson-databind  2.13.3     2.13.4     java-archive  GHSA-rgv9-q543-rqg4  High
jackson-databind  2.13.3     2.13.4.1   java-archive  GHSA-jjjh-jjxp-wpff  High
snakeyaml         1.30       1.31       java-archive  GHSA-3mc7-4q67-w48m  High
snakeyaml         1.30       1.31       java-archive  GHSA-98wm-3w3q-mw94  Medium
snakeyaml         1.30       1.31       java-archive  GHSA-c4r9-r8fh-9vj2  Medium
snakeyaml         1.30       1.31       java-archive  GHSA-hhhw-99gj-p3c3  Medium
snakeyaml         1.30       1.32       java-archive  GHSA-9w3m-gqgf-c4p9  Medium
snakeyaml         1.30       1.32       java-archive  GHSA-w37g-rhq8-7m4j  Medium

As you can see in the output, only the java-archive vulnerabilities are still present. The other vulnerabilities have been solved. Next, fix the Spring Boot dependency vulnerability.
Change the version of Spring Boot from 2.7.0 to 2.7.6 in the POM.

XML
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.7.6</version>
</parent>

Build the JAR file, build the Docker image, and run the scan again:

Shell
$ mvn clean verify
...
$ mvn dockerfile:build
...
$ grype docker:mydeveloperplanet/mygrypeplanet:0.0.1-SNAPSHOT --only-fixed
Vulnerability DB        [no update available]
Loaded image
Parsed image
Cataloged packages      [62 packages]
Scanned image           [10 vulnerabilities]
NAME       INSTALLED  FIXED-IN  TYPE          VULNERABILITY        SEVERITY
snakeyaml  1.30       1.31      java-archive  GHSA-3mc7-4q67-w48m  High
snakeyaml  1.30       1.31      java-archive  GHSA-98wm-3w3q-mw94  Medium
snakeyaml  1.30       1.31      java-archive  GHSA-c4r9-r8fh-9vj2  Medium
snakeyaml  1.30       1.31      java-archive  GHSA-hhhw-99gj-p3c3  Medium
snakeyaml  1.30       1.32      java-archive  GHSA-9w3m-gqgf-c4p9  Medium
snakeyaml  1.30       1.32      java-archive  GHSA-w37g-rhq8-7m4j  Medium

So, you got rid of the jackson-databind vulnerability, but not of the snakeyaml vulnerability. In which dependency is snakeyaml 1.30 being used? You can find out by means of the dependency:tree Maven command. For brevity, only part of the output is shown here:

Shell
$ mvnd dependency:tree
...
com.mydeveloperplanet:mygrypeplanet:jar:0.0.1-SNAPSHOT
[INFO] +- org.springframework.boot:spring-boot-starter-web:jar:2.7.6:compile
[INFO] |  +- org.springframework.boot:spring-boot-starter:jar:2.7.6:compile
[INFO] |  |  +- org.springframework.boot:spring-boot:jar:2.7.6:compile
[INFO] |  |  +- org.springframework.boot:spring-boot-autoconfigure:jar:2.7.6:compile
[INFO] |  |  +- org.springframework.boot:spring-boot-starter-logging:jar:2.7.6:compile
[INFO] |  |  |  +- ch.qos.logback:logback-classic:jar:1.2.11:compile
[INFO] |  |  |  |  \- ch.qos.logback:logback-core:jar:1.2.11:compile
[INFO] |  |  |  +- org.apache.logging.log4j:log4j-to-slf4j:jar:2.17.2:compile
[INFO] |  |  |  |  \- org.apache.logging.log4j:log4j-api:jar:2.17.2:compile
[INFO] |  |  |  \- org.slf4j:jul-to-slf4j:jar:1.7.36:compile
[INFO] |  |  +- jakarta.annotation:jakarta.annotation-api:jar:1.3.5:compile
[INFO] |  |  \- org.yaml:snakeyaml:jar:1.30:compile
...

The output shows us that the dependency is part of the spring-boot-starter-web dependency. So, how do you solve this? Strictly speaking, Spring has to solve it. But if you do not want to wait for a solution, you can solve it yourself.

Solution 1: You do not need the dependency. This is the easiest fix and is low risk. Just exclude the dependency from the spring-boot-starter-web dependency in the pom.

XML
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.yaml</groupId>
            <artifactId>snakeyaml</artifactId>
        </exclusion>
    </exclusions>
</dependency>

Build the JAR file, build the Docker image, and run the scan again:

Shell
$ mvn clean verify
...
$ mvn dockerfile:build
...
$ grype docker:mydeveloperplanet/mygrypeplanet:0.0.1-SNAPSHOT --only-fixed
Vulnerability DB        [no update available]
Loaded image
Parsed image
Cataloged packages      [61 packages]
Scanned image           [3 vulnerabilities]
No vulnerabilities found

No vulnerabilities are found anymore.

Solution 2: You do need the dependency. You can replace this transitive dependency by means of dependencyManagement in the pom. This is a bit more tricky, because the updated transitive dependency is not tested with the spring-boot-starter-web dependency. It is a trade-off whether you want to do this or not. Add the following section to the pom:

XML
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.yaml</groupId>
            <artifactId>snakeyaml</artifactId>
            <version>1.32</version>
        </dependency>
    </dependencies>
</dependencyManagement>

Build the jar file, build the Docker image, and run the scan again:

Shell
$ mvn clean verify
...
$ mvn dockerfile:build
...
$ grype docker:mydeveloperplanet/mygrypeplanet:0.0.1-SNAPSHOT --only-fixed
Vulnerability DB        [no update available]
Loaded image
Parsed image
Cataloged packages      [62 packages]
Scanned image           [3 vulnerabilities]
No vulnerabilities found

Again, no vulnerabilities are present anymore.

Solution 3: This is the solution when you do not want to do anything about the vulnerability, or when it is a false positive. Create a .grype.yaml file in which you exclude the vulnerability with High severity and execute the scan with the --config flag followed by the .grype.yaml file containing the exclusions. The .grype.yaml file looks as follows:

YAML
ignore:
  - vulnerability: GHSA-3mc7-4q67-w48m

Run the scan again:

Shell
$ grype docker:mydeveloperplanet/mygrypeplanet:0.0.1-SNAPSHOT --only-fixed
Vulnerability DB        [no update available]
Loaded image
Parsed image
Cataloged packages      [62 packages]
Scanned image           [10 vulnerabilities]
NAME       INSTALLED  FIXED-IN  TYPE          VULNERABILITY        SEVERITY
snakeyaml  1.30       1.31      java-archive  GHSA-98wm-3w3q-mw94  Medium
snakeyaml  1.30       1.31      java-archive  GHSA-c4r9-r8fh-9vj2  Medium
snakeyaml  1.30       1.31      java-archive  GHSA-hhhw-99gj-p3c3  Medium
snakeyaml  1.30       1.32      java-archive  GHSA-9w3m-gqgf-c4p9  Medium
snakeyaml  1.30       1.32      java-archive  GHSA-w37g-rhq8-7m4j  Medium

The High vulnerability is not shown anymore.

7. Continuous Integration

Now you know how to manually scan your Docker images. However, you probably want to scan the images as part of your continuous integration pipeline. In this section, a solution is provided when using Jenkins as a CI platform.

The first question to answer is how you will be notified when vulnerabilities are found. Up until now, you only noticed the vulnerabilities by looking at the standard output. This is not a solution for a CI pipeline. You want to be notified, and this can be done by failing the build. Grype has the --fail-on flag for this purpose. You probably do not want to fail the pipeline when a vulnerability with severity negligible has been found. Let's see what happens when you execute this manually. First of all, introduce the vulnerabilities again in the Spring Boot application and in the Docker image. Build the JAR file, build the Docker image, and run the scan with the --fail-on flag:

Shell
$ mvn clean verify
...
$ mvn dockerfile:build
...
$ grype docker:mydeveloperplanet/mygrypeplanet:0.0.1-SNAPSHOT --only-fixed --fail-on high
...
1 error occurred:
	* discovered vulnerabilities at or above the severity threshold

Not all output has been shown here, only the important part. As you can see, at the end of the output, a message is shown that the scan has generated an error. This will cause your Jenkins pipeline to fail and, as a consequence, the developers are notified that something went wrong.

In order to add this to your Jenkins pipeline, several options exist. Here it is chosen to create the Docker image and execute the grype Docker scan from within Maven. There is no separate Maven plugin for grype, but you can use the exec-maven-plugin for this purpose. Add the following to the build-plugins section of the POM.

XML
<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>exec-maven-plugin</artifactId>
    <version>3.1.0</version>
    <configuration>
        <executable>grype</executable>
        <arguments>
            <argument>docker:mydeveloperplanet/mygrypeplanet:${project.version}</argument>
            <argument>--scope</argument>
            <argument>all-layers</argument>
            <argument>--fail-on</argument>
            <argument>high</argument>
            <argument>--only-fixed</argument>
            <argument>-q</argument>
        </arguments>
    </configuration>
</plugin>

Two extra flags are added here:
- --scope all-layers: This will scan all layers involved in the Docker image.
- -q: This will use quiet logging and will show only the vulnerabilities and possible failures.

You can invoke this with the following command:

Shell
$ mvnd exec:exec

You can add this to your Jenkinsfile inside the withMaven wrapper:

Plain Text
withMaven() {
    sh 'mvn dockerfile:build dockerfile:push exec:exec'
}
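If you use a declarative pipeline, a complete stage could look roughly like the sketch below. This is only an illustration of how the Maven goals above might be wired into a Jenkinsfile (the stage name is made up, and withMaven comes from the Pipeline Maven Integration plugin that the snippet above already assumes); it is not the exact pipeline used in this blog:

Plain Text
pipeline {
    agent any
    stages {
        stage('Build, push, and scan image') {
            steps {
                withMaven() {
                    // dockerfile:build and dockerfile:push create and publish the image;
                    // exec:exec runs the grype scan and fails the build on high-severity findings
                    sh 'mvn dockerfile:build dockerfile:push exec:exec'
                }
            }
        }
    }
}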
8. Conclusion

In this blog, you learned how to scan your Docker images by means of grype. Grype has some interesting, user-friendly features which allow you to efficiently add it to your Jenkins pipeline. Also, installing grype is quite easy. Grype is definitely a great improvement over Anchore Engine.
January 24, 2023
· 8,979 Views · 9 Likes
Spring Boot Docker Best Practices
In this blog, you will learn some Docker best practices mainly focused on Spring Boot applications. You will learn these practices by applying them to a sample application. Enjoy!

1. Introduction

This blog continues where the previous blog about Docker Best Practices left off. However, this blog can be read independently from the previous one. The goal is to provide some best practices that can be applied to Dockerized Spring Boot applications. The Dockerfile that will be used as a starting point is the following:

Dockerfile
FROM eclipse-temurin:17.0.5_8-jre-alpine@sha256:02c04793fa49ad5cd193c961403223755f9209a67894622e05438598b32f210e
WORKDIR /opt/app
RUN addgroup --system javauser && adduser -S -s /usr/sbin/nologin -G javauser javauser
ARG JAR_FILE
COPY target/${JAR_FILE} app.jar
RUN chown -R javauser:javauser .
USER javauser
ENTRYPOINT ["java", "-jar", "app.jar"]

This Dockerfile is doing the following:
- FROM: Take the eclipse-temurin:17 Java Docker image as base image
- WORKDIR: Set /opt/app as the working directory
- RUN: Create a system group and system user
- ARG: Provide an argument JAR_FILE so that you do not have to hard code the jar file name into the Dockerfile
- COPY: Copy the jar file into the Docker image
- RUN: Change the owner of the WORKDIR to the previously created system user
- USER: Ensure that the previously created system user is used
- ENTRYPOINT: Start the Spring Boot application

In the next sections, you will change this Dockerfile to adhere to best practices. The resulting Dockerfile of each paragraph is available in the git repository in the directory Dockerfiles. At the end of each paragraph, the name of the corresponding final Dockerfile will be mentioned where applicable. The code being used in this blog is available on GitHub.

2. Prerequisites

The following prerequisites apply to this blog:
- Basic Linux knowledge
- Basic Java and Spring Boot knowledge
- Basic Docker knowledge

3. Sample Application

A sample application is needed in order to demonstrate the best practices. Therefore, a basic Spring Boot application is created containing the Spring Web and Spring Actuator dependencies. The application can be run by invoking the following command from within the root of the repository:

Shell
$ mvn spring-boot:run

Spring Actuator will provide a health endpoint for your application. By default, it will always return the UP status.

Shell
$ curl http://localhost:8080/actuator/health
{"status":"UP"}

In order to alter the health status of the application, a custom health indicator is added. Every 5 invocations, the health of the application will be set to DOWN.

Java
@Component
public class DownHealthIndicator implements HealthIndicator {

    private int counter;

    @Override
    public Health health() {
        counter++;
        Health.Builder status = Health.up();
        if (counter == 5) {
            status = Health.down();
            counter = 0;
        }
        return status.build();
    }
}

For building the Docker image, a fork of the dockerfile-maven-plugin of Spotify will be used. The following snippet is therefore added to the pom file.

XML
<plugin>
    <groupId>com.xenoamess.docker</groupId>
    <artifactId>dockerfile-maven-plugin</artifactId>
    <version>1.4.25</version>
    <configuration>
        <repository>mydeveloperplanet/dockerbestpractices</repository>
        <tag>${project.version}</tag>
        <buildArgs>
            <JAR_FILE>${project.build.finalName}.jar</JAR_FILE>
        </buildArgs>
    </configuration>
</plugin>

The advantage of using this plugin is that you can easily reuse the configuration. Creating the Docker image can be done by a single Maven command.
Building the jar file is done by invoking the following command:

Shell
$ mvn clean verify

Building the Docker image can be done by invoking the following command:

Shell
$ mvn dockerfile:build

Run the Docker image:

Shell
$ docker run --name dockerbestpractices mydeveloperplanet/dockerbestpractices:0.0.1-SNAPSHOT

Find the IP address of the running container:

Shell
$ docker inspect dockerbestpractices | grep IPAddress
            "SecondaryIPAddresses": null,
            "IPAddress": "172.17.0.3",
                    "IPAddress": "172.17.0.3",

In the above example, the IP address is 172.17.0.3. The application also contains a HelloController which just responds with a hello message. The Hello endpoint can be invoked as follows:

Shell
$ curl http://172.17.0.3:8080/hello
Hello Docker!

Everything is now explained to get started!

4. Best Practices

4.1 Healthcheck

A healthcheck can be added to your Dockerfile in order to expose the health of your container. Based on this status, the container can be restarted. This can be done by means of the HEALTHCHECK command. Add the following healthcheck:

Dockerfile
HEALTHCHECK --interval=30s --timeout=3s --retries=1 CMD wget -qO- http://localhost:8080/actuator/health/ | grep UP || exit 1

This healthcheck is doing the following:
- interval: Every 30 seconds the healthcheck is executed. For production use, it is better to choose something like five minutes. In order to do some tests, a smaller value is easier; this way you do not have to wait for five minutes each time.
- timeout: A timeout of three seconds for executing the health check.
- retries: This indicates the number of consecutive checks which have to be executed before the health status changes. This defaults to three, which is a good number for production. For testing purposes, you set it to one, meaning that after one unsuccessful check, the health status changes to unhealthy.
- command: The Spring Actuator endpoint will be used as a healthcheck. The response is retrieved and piped to grep in order to verify whether the health status is UP. It is advised not to use curl for this purpose because not every image has curl available; you would need to install curl on top of the image, and this enlarges the image by several MBs.

Build and run the container. Take a closer look at the status of the container. In the first 30 seconds, the health status indicates starting because the first health check will only be done after the interval setting.

Shell
$ docker ps
CONTAINER ID   IMAGE                                                  COMMAND                  CREATED         STATUS                            PORTS   NAMES
ddffb5a9cbf0   mydeveloperplanet/dockerbestpractices:0.0.1-SNAPSHOT   "java -jar /opt/app/…"   8 seconds ago   Up 6 seconds (health: starting)           dockerbestpractices

After 30 seconds, the health status indicates healthy.

Shell
$ docker ps
CONTAINER ID   IMAGE                                                  COMMAND                  CREATED          STATUS                    PORTS   NAMES
ddffb5a9cbf0   mydeveloperplanet/dockerbestpractices:0.0.1-SNAPSHOT   "java -jar /opt/app/…"   33 seconds ago   Up 32 seconds (healthy)           dockerbestpractices

After 2-5 minutes, the health status indicates unhealthy because of the custom health indicator you added to the sample application.

Shell
$ docker ps
CONTAINER ID   IMAGE                                                  COMMAND                  CREATED         STATUS                     PORTS   NAMES
ddffb5a9cbf0   mydeveloperplanet/dockerbestpractices:0.0.1-SNAPSHOT   "java -jar /opt/app/…"   2 minutes ago   Up 2 minutes (unhealthy)           dockerbestpractices

Again, 30 seconds after the unhealthy status, the status reports healthy.
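Besides docker ps, the health state and the recorded output of the recent health checks can also be queried directly with docker inspect. These are standard Go templates against the container state, added here as a small aside:

Shell
# Prints starting, healthy, or unhealthy
$ docker inspect --format '{{.State.Health.Status}}' dockerbestpractices
# Prints the full health object, including the output of the recent wget checks
$ docker inspect --format '{{json .State.Health}}' dockerbestpractices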
Did you notice that the container did not restart due to the unhealthy status? That is because the Docker engine does not do anything based on this status. A container orchestrator like Kubernetes will do a restart. Is it not possible to restart the container when running with the Docker engine? Yes, it is: you can use the autoheal Docker image for this purpose.

Let's start the autoheal container.

Shell
docker run -d \
  --name autoheal \
  --restart=always \
  -e AUTOHEAL_CONTAINER_LABEL=all \
  -v /var/run/docker.sock:/var/run/docker.sock \
  willfarrell/autoheal

Verify whether it is running.

Shell
$ docker ps
CONTAINER ID   IMAGE                                                  COMMAND                  CREATED          STATUS                    PORTS   NAMES
ddffb5a9cbf0   mydeveloperplanet/dockerbestpractices:0.0.1-SNAPSHOT   "java -jar /opt/app/…"   10 minutes ago   Up 10 minutes (healthy)           dockerbestpractices
d40243eb242a   willfarrell/autoheal                                   "/docker-entrypoint …"   5 weeks ago      Up 9 seconds (healthy)            autoheal

Wait until the health is unhealthy again, or just invoke the health actuator endpoint in order to speed it up. When the status reports unhealthy, the container is restarted. You can verify this in the STATUS column, where you can see the uptime of the container.

Shell
$ docker ps
CONTAINER ID   IMAGE                                                  COMMAND                  CREATED          STATUS                            PORTS   NAMES
ddffb5a9cbf0   mydeveloperplanet/dockerbestpractices:0.0.1-SNAPSHOT   "java -jar /opt/app/…"   12 minutes ago   Up 6 seconds (health: starting)           dockerbestpractices

You have to decide for yourself whether you want this or whether you want to monitor the health status yourself by means of a monitoring tool. The autoheal image provides you the means to automatically restart your Docker container(s) without manual intervention. The resulting Dockerfile is available in the git repository with the name 6-Dockerfile-healthcheck.

4.2 Docker Compose

Docker Compose gives you the opportunity to start multiple containers at once with a single command. Besides that, it also enables you to document your services, even when you only have one service to manage. Docker Compose used to be installed separately from Docker, but nowadays it is part of Docker itself. You need to write a compose.yml file that contains this configuration. Let's see what this looks like for the two containers you used during the healthcheck.

YAML
services:
  dockerbestpractices:
    image: mydeveloperplanet/dockerbestpractices:0.0.1-SNAPSHOT
  autoheal:
    image: willfarrell/autoheal:1.2.0
    restart: always
    environment:
      AUTOHEAL_CONTAINER_LABEL: all
    volumes:
      - type: bind
        source: /var/run/docker.sock
        target: /var/run/docker.sock

Two services (read: containers) are configured: one for the dockerbestpractices image and one for the autoheal image. The autoheal image will restart after a reboot, has an environment variable defined, and has a volume mounted. Execute the following command from the directory where the compose.yml file can be found:

Shell
$ docker compose up

In the logging, you will see that both containers are started. Open another terminal window and navigate to the directory where the compose.yml can be found. A lot of commands can be used in combination with Docker Compose, e.g., show the status of the running containers:

Shell
$ docker compose ps
NAME                                                 COMMAND                  SERVICE               STATUS              PORTS
mydockerbestpracticesplanet-autoheal-1               "/docker-entrypoint …"   autoheal              running (healthy)
mydockerbestpracticesplanet-dockerbestpractices-1    "java -jar /opt/app/…"   dockerbestpractices   running (healthy)

Or stop the containers:

Shell
$ docker compose stop
[+] Running 2/2
 ⠿ Container mydockerbestpracticesplanet-autoheal-1              Stopped    4.3s
 ⠿ Container mydockerbestpracticesplanet-dockerbestpractices-1   Stopped    0.3s

Or easily remove the containers:

Shell
$ docker compose rm
? Going to remove mydockerbestpracticesplanet-dockerbestpractices-1, mydockerbestpracticesplanet-autoheal-1 Yes
[+] Running 2/0
 ⠿ Container mydockerbestpracticesplanet-autoheal-1              Removed    0.0s
 ⠿ Container mydockerbestpracticesplanet-dockerbestpractices-1   Removed

As you can see, Docker Compose provides quite some advantages and you should definitely consider using it.
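As a side note: the same healthcheck settings can also be declared, or overridden, in the compose.yml itself. The snippet below is a sketch using the standard Compose healthcheck keys, mirroring the values of the HEALTHCHECK instruction above; it is not part of the compose file used in this blog:

YAML
services:
  dockerbestpractices:
    image: mydeveloperplanet/dockerbestpractices:0.0.1-SNAPSHOT
    healthcheck:
      test: ["CMD-SHELL", "wget -qO- http://localhost:8080/actuator/health/ | grep UP || exit 1"]
      interval: 30s
      timeout: 3s
      retries: 1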
4.3 Multi-Stage Builds

Sometimes it can be handy to build your application inside a Docker container. The advantage is that you do not need to install a complete development environment onto your system and that you can interchange the development environment more easily. However, there is a problem with building the application inside your container, especially when you want to use the same container for running your application: the sources and the complete development environment will be available in your production container, and this is not a good idea from a security perspective. You could write separate Dockerfiles to circumvent this issue, one for the build and one for running the application, but this is quite cumbersome. The solution is to use multi-stage builds. With multi-stage builds, you can separate the building stage from the running stage. The Dockerfile looks as follows:

Dockerfile
FROM maven:3.8.6-eclipse-temurin-17-alpine@sha256:e88c1a981319789d0c00cd508af67a9c46524f177ecc66ca37c107d4c371d23b AS builder
WORKDIR /build
COPY . .
RUN mvn clean package -DskipTests

FROM eclipse-temurin:17.0.5_8-jre-alpine@sha256:02c04793fa49ad5cd193c961403223755f9209a67894622e05438598b32f210e
WORKDIR /opt/app
RUN addgroup --system javauser && adduser -S -s /usr/sbin/nologin -G javauser javauser
COPY --from=builder /build/target/mydockerbestpracticesplanet-0.0.1-SNAPSHOT.jar app.jar
RUN chown -R javauser:javauser .
USER javauser
HEALTHCHECK --interval=30s --timeout=3s --retries=1 CMD wget -qO- http://localhost:8080/actuator/health/ | grep UP || exit 1
ENTRYPOINT ["java", "-jar", "app.jar"]

As you can see, this Dockerfile contains two FROM statements. The first one is used for building the application:
- FROM: A Docker image containing Maven and Java 17; this is needed for building the application
- WORKDIR: Set the working directory
- COPY: Copy the current directory into the working directory of the container
- RUN: The command in order to build the jar file

Something else is also added to the FROM statement: at the end, AS builder is added. This way, the build stage is labeled and can be referenced when building the image for running the application. The second part is identical to the Dockerfile you used to have before, except for two lines. The following lines are removed:

Dockerfile
ARG JAR_FILE
COPY target/${JAR_FILE} app.jar

These lines ensured that the jar file from our local build was copied into the image. They are replaced with the following line:

Dockerfile
COPY --from=builder /build/target/mydockerbestpracticesplanet-0.0.1-SNAPSHOT.jar app.jar

With this line, you indicate that you want to copy a file from the builder container into the new image. When you build this Dockerfile, you will notice that the builder container executes the build and finally, the image for running the application is created. While building the image, you will also notice that all Maven dependencies are downloaded. The resulting Dockerfile is available in the git repository with the name 7-Dockerfile-multi-stage-build.
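One optional refinement that is not used in this blog: if the repeated download of all Maven dependencies bothers you, BuildKit can cache the local Maven repository between builds with a cache mount. The sketch below assumes BuildKit is enabled and a standard Maven project layout (pom.xml and src at the repository root); it only changes the build stage:

Dockerfile
# syntax=docker/dockerfile:1
FROM maven:3.8.6-eclipse-temurin-17-alpine AS builder
WORKDIR /build
# Copy the pom first so the dependency download can be cached separately from the sources
COPY pom.xml .
RUN --mount=type=cache,target=/root/.m2 mvn -q dependency:go-offline
COPY src ./src
RUN --mount=type=cache,target=/root/.m2 mvn clean package -DskipTests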
4.4 Spring Boot Docker Layers

A Docker image consists of layers. If you are not familiar with Docker layers, you can check out a previous post. Every command in a Dockerfile will result in a new layer. When you initially pull a Docker image, all layers will be retrieved and stored. If you update your Docker image and only change, for example, the jar file, the other layers will not be retrieved anew. This way, your Docker images are stored more efficiently. However, when you are using Spring Boot, a fat jar is created, meaning that when you only change some of your code, a new fat jar is created with unchanged dependencies. So each time you create a new Docker image, megabytes are added in a new layer without any necessity. For this purpose, Spring Boot Docker layers can be used. A detailed explanation can be found here. In short, Spring Boot can split the fat jar into several directories:
- /dependencies
- /spring-boot-loader
- /snapshot-dependencies
- /application

The application code will reside in the directory application, whereas, for example, the dependencies will reside in the directory dependencies. In order to achieve this, you will use a multi-stage build. The first part will copy the jar file into a Java Docker image and will extract the fat jar:

Dockerfile
FROM eclipse-temurin:17.0.4.1_1-jre-alpine@sha256:e1506ba20f0cb2af6f23e24c7f8855b417f0b085708acd9b85344a884ba77767 AS builder
WORKDIR application
ARG JAR_FILE
COPY target/${JAR_FILE} app.jar
RUN java -Djarmode=layertools -jar app.jar extract

The second part will copy the split directories into a new image. The COPY commands replace the jar file.

Dockerfile
FROM eclipse-temurin:17.0.4.1_1-jre-alpine@sha256:e1506ba20f0cb2af6f23e24c7f8855b417f0b085708acd9b85344a884ba77767
WORKDIR /opt/app
RUN addgroup --system javauser && adduser -S -s /usr/sbin/nologin -G javauser javauser
COPY --from=builder application/dependencies/ ./
COPY --from=builder application/spring-boot-loader/ ./
COPY --from=builder application/snapshot-dependencies/ ./
COPY --from=builder application/application/ ./
RUN chown -R javauser:javauser .
USER javauser
HEALTHCHECK --interval=30s --timeout=3s --retries=1 CMD wget -qO- http://localhost:8080/actuator/health/ | grep UP || exit 1
ENTRYPOINT ["java", "org.springframework.boot.loader.JarLauncher"]

Build and run the container. You will not notice any difference when running the container. The main advantage is the way the Docker image is stored. The resulting Dockerfile is available in the git repository with the name 8-Dockerfile-spring-boot-docker-layers.

5. Conclusion

In this blog, some best practices are covered when creating Dockerfiles for Spring Boot applications. Learn to apply these practices and you will end up with much better Docker images.
December 20, 2022
· 7,695 Views · 12 Likes
Docker Best Practices
In this blog, you will learn some Docker best practices mainly focused on Java applications. This is not only a theoretical exercise; you will learn how to apply the best practices to your Dockerfiles. Enjoy!

1. Introduction

Writing Dockerfiles seems easy: just pick an example from the internet and customize it to fit your needs. However, many examples are good for a development environment but are not production worthy. A production environment has more strict requirements, especially concerning security. Besides that, Docker also provides guidelines for writing good Dockerfiles. It is just like writing code: you may know the syntax, but that does not mean you can write clean and good code in that specific programming language. The same applies to Dockerfiles. With this blog, you will learn some best practices and guidelines you can apply when writing Dockerfiles. The previous sentence deliberately says can apply and not must apply; it all depends on your use case. The example Dockerfile which can often be found when searching for a Dockerfile for Java applications is the following:

Dockerfile
FROM eclipse-temurin:17
RUN mkdir /opt/app
ARG JAR_FILE
ADD target/${JAR_FILE} /opt/app/app.jar
CMD ["java", "-jar", "/opt/app/app.jar"]

This Dockerfile is doing the following:
- FROM: Take the eclipse-temurin:17 Java Docker image as base image
- RUN: Create a directory for the application jar file
- ARG: Provide an argument JAR_FILE so that you do not have to hard code the jar file name into the Dockerfile
- ADD: Add the jar file to the Docker image
- CMD: The command that has to be executed when running the container; in this case, just start the Java application

In the next sections, you will change this Dockerfile to adhere to best practices. The resulting Dockerfile of each paragraph is available in the git repository in the directory Dockerfiles. At the end of each paragraph, the name of the corresponding final Dockerfile will be mentioned where applicable. This post is inspired by the CIS Docker Benchmarks, the blog 10 best practices to containerize Java applications with Docker by Brian Vermeer, and my own experiences. The code being used in this blog is available at GitHub.

2. Prerequisites

The following prerequisites apply to this blog:
- Basic Linux knowledge
- Basic Java and Spring Boot knowledge
- Basic Docker knowledge

3. Sample Application

A sample application is needed in order to demonstrate the best practices. Therefore, a basic Spring Boot application is created containing the Spring Web dependency. The application can be run by invoking the following command from within the root of the repository:

Shell
$ mvn spring-boot:run

For building the Docker image, a fork of the dockerfile-maven-plugin of Spotify will be used. The following snippet is therefore added to the pom file.

XML
<plugin>
    <groupId>com.xenoamess.docker</groupId>
    <artifactId>dockerfile-maven-plugin</artifactId>
    <version>1.4.25</version>
    <configuration>
        <repository>mydeveloperplanet/dockerbestpractices</repository>
        <tag>${project.version}</tag>
        <buildArgs>
            <JAR_FILE>${project.build.finalName}.jar</JAR_FILE>
        </buildArgs>
    </configuration>
</plugin>

The advantage of using this plugin is that you can easily reuse the configuration. Creating the Docker image can be done by a single Maven command.
Building the jar file is done by invoking the following command:

Shell
$ mvn clean verify

Building the Docker image can be done by invoking the following command:

Shell
$ mvn dockerfile:build

Run the Docker image:

Shell
$ docker run --name dockerbestpractices mydeveloperplanet/dockerbestpractices:0.0.1-SNAPSHOT

Find the IP address of the running container:

Shell
$ docker inspect dockerbestpractices | grep IPAddress
            "SecondaryIPAddresses": null,
            "IPAddress": "172.17.0.3",
                    "IPAddress": "172.17.0.3",

In the above example, the IP address is 172.17.0.3. The application also contains a HelloController which just responds with a hello message. The Hello endpoint can be invoked as follows:

Shell
$ curl http://172.17.0.3:8080/hello
Hello Docker!

Everything is now explained to get started!

4. Best Practices

4.1 Which Image to Use

The image used in the Dockerfile is eclipse-temurin:17. What kind of image is this exactly? To find out, you need to check how this image is built:
- Navigate to Docker Hub
- Search for 'eclipse-temurin'
- Navigate to the Tags tab
- Search for 17
- Sort by A-Z
- Click the tag 17

This will bring you to the page where the layers are listed. If you look closely at the details of every layer and compare them to the tag 17-jre, you will notice that the tag 17 contains a complete JDK and the tag 17-jre only contains the JRE. The latter is enough for running a Java application; you do not need the whole JDK for running applications in production. It is even a security issue when the JDK is used, because the development tools could be misused. Besides that, the compressed size of the tag 17 image is almost 235MB and for 17-jre it is only 89MB.

In order to reduce the size of the image even further, you can use a slimmed image. The 17-jre-alpine image is such a slimmed image. The compressed size of this image is 59MB, which reduces the compressed size by another 30MB compared to 17-jre. The advantage is that it will be faster to distribute the image because of its reduced size.

Be explicit in the image you use. The tags used above are general tags which point to the latest version. This might be ok in a development environment, but for production it is better to be explicit about the version being used. The tag being used in this case will be 17.0.5_8-jre-alpine. And if you want to be even more secure, you add the SHA256 hash to the image version. The SHA256 hash can be found at the page containing the layers. When the SHA256 hash does not correspond to the one you defined in your Dockerfile, building the Docker image will fail.

The first line of the Dockerfile was:

Dockerfile
FROM eclipse-temurin:17

With the above knowledge, you change this line into:

Dockerfile
FROM eclipse-temurin:17.0.5_8-jre-alpine@sha256:02c04793fa49ad5cd193c961403223755f9209a67894622e05438598b32f210e

Build the Docker image and you will notice that the (uncompressed) size of the image is drastically reduced. It was 475MB and now it is 188MB.

Shell
$ docker images
REPOSITORY                              TAG              IMAGE ID       CREATED         SIZE
mydeveloperplanet/dockerbestpractices   0.0.1-SNAPSHOT   0b8d89616602   3 seconds ago   188MB

The resulting Dockerfile is available in the git repository with the name 1-Dockerfile-specific-image.
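As a small addition to this section: the digest can also be looked up locally instead of on Docker Hub. The commands below are standard Docker CLI calls; the digest shown as example output is the one already used in the Dockerfile above:

Shell
$ docker pull eclipse-temurin:17.0.5_8-jre-alpine
$ docker inspect --format '{{index .RepoDigests 0}}' eclipse-temurin:17.0.5_8-jre-alpine
eclipse-temurin@sha256:02c04793fa49ad5cd193c961403223755f9209a67894622e05438598b32f210e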
4.2 Do Not Run As Root

By default, the application runs as user root inside the container. This exposes many vulnerability risks and is not something you want. Therefore, it is better to define a system user for your application. You can see in the first log line when starting the container that the application is started by root.

Shell
2022-11-26 09:03:41.210  INFO 1 --- [           main] m.MyDockerBestPracticesPlanetApplication : Starting MyDockerBestPracticesPlanetApplication v0.0.1-SNAPSHOT using Java 17.0.5 on 3b06feee6c65 with PID 1 (/opt/app/app.jar started by root in /)

Creating a system user can be done by adding a group javauser and a user javauser to the Dockerfile. The javauser is a system user which cannot log in. This is achieved by adding the following instruction to the Dockerfile. Notice that creating the group and the user are combined in one line by means of the ampersand signs in order to create only one layer.

Dockerfile
RUN addgroup --system javauser && adduser -S -s /usr/sbin/nologin -G javauser javauser

The complete list of arguments which can be used for adduser is the following:
- -h DIR: Home directory
- -g GECOS: GECOS field
- -s SHELL: Login shell
- -G GRP: Group
- -S: Create a system user
- -D: Don't assign a password
- -H: Don't create a home directory
- -u UID: User id
- -k SKEL: Skeleton directory (/etc/skel)

You will also need to change the owner of the directory /opt/app to this new javauser, otherwise the javauser will not be able to access this directory. This can be achieved by adding the following line:

Dockerfile
RUN chown -R javauser:javauser /opt/app

And lastly, you need to ensure that the javauser is actually used in the container by means of the USER command. The complete Dockerfile is the following:

Dockerfile
FROM eclipse-temurin:17.0.5_8-jre-alpine@sha256:02c04793fa49ad5cd193c961403223755f9209a67894622e05438598b32f210e
RUN mkdir /opt/app
RUN addgroup --system javauser && adduser -S -s /usr/sbin/nologin -G javauser javauser
ARG JAR_FILE
ADD target/${JAR_FILE} /opt/app/app.jar
RUN chown -R javauser:javauser /opt/app
USER javauser
CMD ["java", "-jar", "/opt/app/app.jar"]

In order to test this new image, you first need to stop and remove the running container. You can do so with the following commands:

Shell
$ docker stop dockerbestpractices
$ docker rm dockerbestpractices

Build and run the container again. The first log line now mentions that the application is started by javauser. Before, it stated that it was started by root.

Shell
2022-11-26 09:06:45.227  INFO 1 --- [           main] m.MyDockerBestPracticesPlanetApplication : Starting MyDockerBestPracticesPlanetApplication v0.0.1-SNAPSHOT using Java 17.0.5 on ab1bcd38dff7 with PID 1 (/opt/app/app.jar started by javauser in /)

The resulting Dockerfile is available in the git repository with the name 2-Dockerfile-do-not-run-as-root.

4.3 Use WORKDIR

In the Dockerfile you are using, a directory /opt/app is created. After that, the directory is repeated several times, because this is actually your working directory. However, Docker has the WORKDIR instruction for this purpose. When the WORKDIR does not exist, it will be created for you. Every instruction after the WORKDIR instruction will be executed inside the specified WORKDIR, so you do not have to repeat the path every time. The second line contains the RUN instruction:

Dockerfile
RUN mkdir /opt/app

Change this to use the WORKDIR instruction:

Dockerfile
WORKDIR /opt/app

Now you can also remove every /opt/app reference, because the WORKDIR instruction ensures that you are in this directory.
The new Dockerfile is the following:

Dockerfile
FROM eclipse-temurin:17.0.5_8-jre-alpine@sha256:02c04793fa49ad5cd193c961403223755f9209a67894622e05438598b32f210e
WORKDIR /opt/app
RUN addgroup --system javauser && adduser -S -s /usr/sbin/nologin -G javauser javauser
ARG JAR_FILE
ADD target/${JAR_FILE} app.jar
RUN chown -R javauser:javauser .
USER javauser
CMD ["java", "-jar", "app.jar"]

Build and run the container. As you can see in the logging, the jar file is still executed from within the directory /opt/app:

Shell
2022-11-26 16:07:18.503  INFO 1 --- [           main] m.MyDockerBestPracticesPlanetApplication : Starting MyDockerBestPracticesPlanetApplication v0.0.1-SNAPSHOT using Java 17.0.5 on fe5cf9223143 with PID 1 (/opt/app/app.jar started by javauser in /opt/app)

The resulting Dockerfile is available in the git repository with the name 3-Dockerfile-use-workdir.

4.4 Use ENTRYPOINT

There is a difference between the CMD instruction and the ENTRYPOINT instruction. More detailed information can be found in this blog. In short, use:
- ENTRYPOINT: when you build an executable Docker image using commands that always need to be executed. You can append arguments to the command if you like.
- CMD: when you want to provide a default set of arguments which are allowed to be overridden by the command line when the container runs.

So, in the case of running a Java application, it is better to use ENTRYPOINT. The last line of the Dockerfile is:

Dockerfile
CMD ["java", "-jar", "app.jar"]

Change it into the following:

Dockerfile
ENTRYPOINT ["java", "-jar", "app.jar"]

Build and run the container. You will not notice any specific difference; the container just runs as it did before. The resulting Dockerfile is available in the git repository with the name 4-Dockerfile-use-entrypoint.

4.5 Use COPY Instead of ADD

The COPY and ADD instructions seem to be similar. However, COPY is preferred over ADD. COPY does what it says: it just copies the file into the image. ADD has some extra features, like adding a file from a remote resource. The line in the Dockerfile with the ADD command:

Dockerfile
ADD target/${JAR_FILE} app.jar

Change it by using the COPY command:

Dockerfile
COPY target/${JAR_FILE} app.jar

Build and run the container again. You will not see a big change, besides that the COPY command is now shown in the build log instead of the ADD command. The resulting Dockerfile is available in the git repository with the name 5-Dockerfile-use-copy-instead-of-add.

4.6 Use .dockerignore

In order to prevent accidentally adding files to your Docker image, you can use a .dockerignore file. With a .dockerignore file, you can specify which files may be sent to the Docker daemon or may be used in your image. A good practice is to ignore all files and to explicitly add the files you allow. This can be achieved by adding an asterisk pattern to the .dockerignore file which excludes all subdirectories and files. However, you do need the jar file in the build context. The jar file can be excluded from being ignored by means of an exclamation mark. The .dockerignore file looks as follows. You add it to the directory where you run the Docker commands from; in this case, you add it to the root of the git repository.

Plain Text
**/**
!target/*.jar

Build and run the container. Again, you will not notice a big change, but when you are developing with npm, you will notice that creating the Docker image will be much faster because the node_modules directory is not copied anymore into the Docker build context.
The .dockerignore file is available in the git repository in the Dockerfiles directory.

4.7 Run Docker Daemon Rootless

The Docker daemon runs as root by default. However, this causes some security issues, as you can imagine. Since Docker v20.10, it is also possible to run the Docker daemon as a non-root user. More information about how this can be achieved can be found here. An alternative way to accomplish this is to make use of Podman. Podman is a daemonless container engine and runs by default as non-root. However, although you will read that Podman is a drop-in replacement for Docker, there are some major differences. One of them is how you mount volumes in the container. More information about this topic can be read here.

5. Conclusion

In this blog, some best practices for writing Dockerfiles and running containers are covered. Writing Dockerfiles seems to be easy, but do take the effort to learn how to write them properly. Understand the instructions and when to use them.
December 6, 2022
· 10,173 Views · 6 Likes
How to Create an Ansible Playbook
In this post, you will learn how to create an Ansible playbook. As an exercise, you will install an Apache Webserver onto two target machines and change the welcome page.

1. Introduction

In the two previous Ansible posts, you learned how to set up an Ansible test environment and how to create an Ansible inventory. This post continues this series, but it is not necessary to read the first two posts. In this post, you will learn how to create an Ansible playbook. A playbook consists of one or more plays which execute tasks. The tasks call Ansible modules. Do not worry if you do not understand this yet; this is what you will learn. It is also advised to read the introduction to playbooks in the Ansible documentation.

In case you did not read the previous blogs, or just as a reminder: the environment consists of one Controller and two Target machines. The Controller and Target machines run in a VirtualBox VM. Development of the Ansible scripts is done with IntelliJ on the host machine. The files are synchronized from the host machine to the Controller by means of a script. In this blog, the machines have the following IP addresses:
- Controller: 192.168.2.11
- Target 1: 192.168.2.12
- Target 2: 192.168.2.13

The files being used in this blog are available in the corresponding git repository at GitHub.

2. Prerequisites

The following prerequisites apply to this blog:
- You need an Ansible test environment; see a previous blog for how to set up a test environment
- You need to have basic knowledge about Ansible Inventory and Ansible Vault; see a previous blog if you do not have this knowledge
- If you use your own environment, you should know that Ubuntu 22.04 LTS is used for the Controller and Target machines, together with Ansible version 2.13.3
- Basic Linux knowledge

3. Your First Playbook

As a first playbook, you will create a playbook which will ping the Target1 and Target2 machines. The playbook can be found in the git repository as playbook-ping-targets-success.yml and looks as follows:

YAML
- name: Ping target1
  hosts: target1
  tasks:
    - name: Ping test
      ansible.builtin.ping:

- name: Ping target2
  hosts: target2
  tasks:
    - name: Ping test
      ansible.builtin.ping:

Let's see what this playbook looks like. A playbook consists of plays. In this playbook, two plays can be found, with the names Ping target1 and Ping target2. For each play, you indicate where it needs to run by means of the hosts parameter, which refers to a name in the inventory file. A play consists of tasks. In both plays, only one task is defined, with the name Ping test. A task calls an Ansible module. A list of modules which can be used can be found here. It is important to learn which modules exist, how to find them, how to use them, etc. The documentation for the Ping module is what you need for this example, so take the time and have a look at it. The last thing to note is that the FQCN (Fully Qualified Collection Name) is used. This is considered to be a best practice.
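Before running the playbook, you can optionally let Ansible validate it first. The commands below are an optional extra using only standard ansible-playbook flags, against the same inventory file used in this blog:

Shell
# Check the playbook for syntax errors without executing it
$ ansible-playbook playbook-ping-targets-success.yml -i inventory/inventory.ini --ask-vault-pass --syntax-check
# List the plays and tasks that would be executed
$ ansible-playbook playbook-ping-targets-success.yml -i inventory/inventory.ini --ask-vault-pass --list-tasks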
Shell $ ansible-playbook playbook-ping-targets-success.yml -i inventory/inventory.ini --ask-vault-pass Vault password: PLAY [Ping target1] *********************************************************************************************** TASK [Gathering Facts] ******************************************************************************************** ok: [target1] TASK [Ping test] ************************************************************************************************** ok: [target1] PLAY [Ping target2] *********************************************************************************************** TASK [Gathering Facts] ******************************************************************************************** ok: [target2] TASK [Ping test] ************************************************************************************************** ok: [target2] PLAY RECAP ******************************************************************************************************** target1 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 target2 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 The logging shows exactly which plays and which tasks are executed and whether they executed successfully. The Ping module also provides the option to crash the command. In the Target1 play, the parameter data is added in order to let the command crash. The playbook can be found in the git repository as playbook-ping-targets-failure.yml. Shell - name: Ping target1 hosts: target1 tasks: - name: Ping test ansible.builtin.ping: data: crash ... Executing this playbook will crash the Target1 play and the playbook just ends. Shell $ ansible-playbook playbook-ping-targets-failure.yml -i inventory/inventory.ini --ask-vault-pass Vault password: PLAY [Ping target1] *********************************************************************************************** TASK [Gathering Facts] ******************************************************************************************** ok: [target1] TASK [Ping test] ************************************************************************************************** An exception occurred during task execution. To see the full traceback, use -vvv. The error was: Exception: boom fatal: [target1]: FAILED! 
=> {"changed": false, "module_stderr": "Shared connection to 192.168.2.12 closed.\r\n", "module_stdout": "Traceback (most recent call last):\r\n File \"/home/osboxes/.ansible/tmp/ansible-tmp-1662800777.2553337-6094-259627128894774/AnsiballZ_ping.py\", line 107, in \r\n _ansiballz_main()\r\n File \"/home/osboxes/.ansible/tmp/ansible-tmp-1662800777.2553337-6094-259627128894774/AnsiballZ_ping.py\", line 99, in _ansiballz_main\r\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n File \"/home/osboxes/.ansible/tmp/ansible-tmp-1662800777.2553337-6094-259627128894774/AnsiballZ_ping.py\", line 47, in invoke_module\r\n runpy.run_module(mod_name='ansible.modules.ping', init_globals=dict(_module_fqn='ansible.modules.ping', _modlib_path=modlib_path),\r\n File \"/usr/lib/python3.10/runpy.py\", line 209, in run_module\r\n return _run_module_code(code, init_globals, run_name, mod_spec)\r\n File \"/usr/lib/python3.10/runpy.py\", line 96, in _run_module_code\r\n _run_code(code, mod_globals, init_globals,\r\n File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code\r\n exec(code, run_globals)\r\n File \"/tmp/ansible_ansible.builtin.ping_payload_xnphtwh8/ansible_ansible.builtin.ping_payload.zip/ansible/modules/ping.py\", line 89, in \r\n File \"/tmp/ansible_ansible.builtin.ping_payload_xnphtwh8/ansible_ansible.builtin.ping_payload.zip/ansible/modules/ping.py\", line 79, in main\r\nException: boom\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} PLAY RECAP ******************************************************************************************************** target1 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 4. Install Apache Webserver In this second exercise, you will install Apache Webserver on a target machine and change the welcome page. The final playbook can be found in the git repository as playbook-httpd-target1.yml. You will learn in this section how to create this final version. 4.1 Install Package For installing packages, you can use the Apt module. It contains many parameters, you will only use a few: name: the name of the package to be installed; update_cache: runs apt-get update before installation; state: indicates the desired package state, present is just fine here. The other items in this playbook should be quite familiar by now. YAML - name: Install Apache webserver hosts: target1 tasks: - name: Install apache httpd (state=present is optional) ansible.builtin.apt: name: apache2 update_cache: yes state: present Run the playbook. Shell $ ansible-playbook playbook-httpd-target1.yml -i inventory/inventory.ini --ask-vault-pass Vault password: PLAY [Install Apache webserver] ***************************************************************************************** TASK [Gathering Facts] ************************************************************************************************** ok: [target1] TASK [Install apache httpd (state=present is optional)] **************************************************************** This playbook does not end. It hangs and you can stop it with CTRL+C. So what is happening here? As you probably know, in order to install packages you need sudo privileges. One way or the other, Ansible needs to know whether privilege escalation is needed and you will need to provide the sudo password to Ansible. A detailed description can be read in the Ansible documentation. The short version is, that you need to add the become parameter with value yes. 
But that is not all: you also need to add the command line parameter --ask-become-pass when running the Ansible playbook. This way, Ansible will ask you for the sudo password. The playbook with the added become parameter looks as follows: YAML - name: Install Apache webserver hosts: target1 become: yes tasks: - name: Install apache httpd (state=present is optional) ansible.builtin.apt: name: apache2 update_cache: yes state: present Running this playbook is successful. As you can see, both the become password and the vault password need to be entered. Shell $ ansible-playbook playbook-httpd-target1.yml -i inventory/inventory.ini --ask-vault-pass --ask-become-pass BECOME password: Vault password: PLAY [Install Apache webserver] **************************************************************************************** TASK [Gathering Facts] ************************************************************************************************* ok: [target1] TASK [Install apache httpd (state=present is optional)] *************************************************************** changed: [target1] PLAY RECAP ************************************************************************************************************* target1 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 In the output logging, you also notice at line 11 that Target1 has been changed. Remember this; it will be important later on when the playbook is run again. Navigate via your browser (or by means of the curl command) to the IP address of the Target1 machine: http://192.168.2.12. You can execute this from your host machine if you have a test environment similar to the one used in this blog. As you can see, the Apache Webserver default welcome page is shown. 4.2 Change Welcome Page In the playbook, you can also change the contents of the welcome page. You can use the copy module for that. Add the following task to the playbook. YAML - name: Create index page ansible.builtin.copy: content: 'Hello world from target 1' dest: /var/www/html/index.html Execute the playbook. Shell $ ansible-playbook playbook-httpd-target1.yml -i inventory/inventory.ini --ask-vault-pass --ask-become-pass BECOME password: Vault password: PLAY [Install Apache webserver] **************************************************************************************************************************** TASK [Gathering Facts] ************************************************************************************************************************************* ok: [target1] TASK [Install apache httpd (state=present is optional)] *************************************************************************************************** ok: [target1] TASK [Create index page] *********************************************************************************************************************************** changed: [target1] PLAY RECAP ************************************************************************************************************************************************* target1 : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 First, take a closer look at the logging. The task Install apache httpd now just returns ok instead of changed. This means that Ansible did not install Apache Webserver again. Ansible tasks are idempotent: you can execute them over and over again, and the result will be the same. Also note that the welcome page has been changed now. Verify this via the browser or via curl. 
Shell $ curl http://192.168.2.12 Hello world from target 1 4.3 Install Target2 As a last exercise, you can add a second play for installing Apache Webserver on Target2 and change the welcome page accordingly in order that it welcomes you from Target2. The playbook can be found in the git repository as playbook-httpd-target1-and-target2.yml. YAML - name: Install Apache webserver for target 1 hosts: target1 become: yes tasks: - name: Install apache httpd (state=present is optional) ansible.builtin.apt: name: apache2 update_cache: yes state: present - name: Create index page for target 1 ansible.builtin.copy: content: 'Hello world from target 1' dest: /var/www/html/index.html - name: Install Apache webserver for target2 hosts: target2 become: yes tasks: - name: Install apache httpd (state=present is optional) ansible.builtin.apt: name: apache2 update_cache: yes state: present - name: Create index page for target 2 ansible.builtin.copy: content: 'Hello world from target 2' dest: /var/www/html/index.html Execute the playbook, you are now confident enough to explore the logging yourself. Shell $ ansible-playbook playbook-httpd-target1-and-target2.yml -i inventory/inventory.ini --ask-vault-pass --ask-become-pass BECOME password: Vault password: PLAY [Install Apache webserver for target 1] ***************************************************************************************************************************** TASK [Gathering Facts] *************************************************************************************************************************************************** ok: [target1] TASK [Install apache httpd (state=present is optional)] ***************************************************************************************************************** ok: [target1] TASK [Create index page for target 1] ************************************************************************************************************************************ ok: [target1] PLAY [Install Apache webserver for target2] ****************************************************************************************************************************** TASK [Gathering Facts] *************************************************************************************************************************************************** ok: [target2] TASK [Install apache httpd (state=present is optional)] ***************************************************************************************************************** changed: [target2] TASK [Create index page for target 2] ************************************************************************************************************************************ changed: [target2] PLAY RECAP *************************************************************************************************************************************************************** target1 : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 target2 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 Verify whether the welcome pages are changed correctly. Shell $ curl http://192.168.2.12 Hello world from target 1 $ curl http://192.168.2.13 Hello world from target 2 Just as expected! 5. Conclusion In this post, you continued your journey towards learning Ansible. You learned the basics about Ansible playbooks and you wrote and executed a playbook which installs Apache Webserver onto the two target machines. You are now able to write your own playbooks and continue to learn.
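As a small, hedged extension that the article itself does not cover: instead of typing the vault password interactively on every run, you can point Ansible at a password file. The file location below is an illustrative assumption; keep it outside version control.
Shell
# Store the vault password (itisniceweather in this blog) in a protected file
$ echo 'itisniceweather' > ~/.vault_pass && chmod 600 ~/.vault_pass
# Run the playbook without the interactive vault prompt
$ ansible-playbook playbook-httpd-target1-and-target2.yml -i inventory/inventory.ini --vault-password-file ~/.vault_pass --ask-become-pass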
November 23, 2022
· 8,908 Views · 1 Like
article thumbnail
Devoxx Belgium 2022 Takeaways
In October 2022, I visited Devoxx Belgium after two cancelled editions due to COVID-19. I learned a lot and received quite some information which I do not want to withhold from you. In this blog, you can find my takeaways of Devoxx Belgium 2022! 1. Introduction Devoxx Belgium is the largest Java conference in Europe. This year, it was already the 19th edition. As always, Devoxx was held in the fantastic theatres of Kinepolis Antwerp. The past two editions were cancelled due to COVID-19. As a result, there was a rush on the tickets: the first batch was sold out in 5 minutes, the second batch in a few seconds. Reactions on Twitter mentioned that it looked like a ticket sale for Beyonce. Although it was my plan to only attend the conference days, I was more or less obliged to also attend the Deep Dive days. For the record, the first two days of Devoxx are Deep Dive days, where you can enjoy more in-depth talks (about 2-3 hours) and hands-on workshops. Days three through five are the conference days, where talks are limited to a maximum of 50 minutes. So I attended Devoxx for five days, and it was a blast. I really enjoyed the Deep Dive days. The speakers take more time to elaborate on a topic and are also more relaxed. During a conference talk, as a speaker, you need to limit yourself to 50 minutes, which can be challenging. I also attended some hands-on workshops, which I enjoyed very much. The workshops are in smaller groups of about 20-30 people. As the name says, these are hands-on, so you'd better take your laptop with you. Enough for the introduction; the next paragraphs contain my takeaways from Devoxx. They only scratch the surface of each topic, but they should be enough to make you curious to dive a bit deeper yourself. Do check out the Devoxx YouTube channel. All the sessions are recorded and can be viewed there. The organizers did a great job: the sessions of a particular day were already available on the channel the next day. That is really fast. If you intend to view them all: there are 240 of them… 2. Java Ecosystem Development 2.1 Project Loom – Virtual Threads Virtual threads are about handling numerous blocking requests and responses. When executing a blocking request, the processing itself only takes some nanoseconds. Waiting for the request to be sent over the internet and for the response to arrive takes several milliseconds. This means that in the case of a blocking request, the CPU is idle 99.9% of the time, which is a real waste. The current solution to this problem is asynchronous requests. However, the code you need to write for asynchronous processing is complex, difficult to profile, and eventually leads to spaghetti code. This is where virtual threads come to the rescue. Virtual threads are not bound to an OS thread. Code is executed on OS thread 1, the OS thread is freed again during the blocking call, and when the blocking part is finished, the rest of the code is executed, possibly on another OS thread 2 (but it might also be processed again on OS thread 1). Because the OS threads are freed during a blocking call, many more virtual threads can be executed in parallel. Running 1 million virtual threads on an ordinary laptop is no issue, whereas without virtual threads you can probably run only about 4,000 threads in parallel. As a consequence, you do not need to write difficult asynchronous code anymore. Another topic which is part of Project Loom is structured concurrency. 
Assume you want to query several services in order to obtain the best price for a hotel room. With structured concurrency, you can assign several callbacks, group them together by means of a join, and continue when all the desired results are available. Virtual threads are a preview feature in JDK 19, so the implementation may change in the next JDK releases. In any case, it is an interesting topic to keep an eye on. If you want to know more about virtual threads, you can watch Loom is Looming. 2.2 GraalVM GraalVM Native Image technology compiles Java code ahead-of-time into a native executable file. Only the code that is required at run time by the application is included in the executable file. As a consequence, the startup time of your application is super fast. The build time is longer, however. A perfect use case is when you want to run your Java application as a serverless function, e.g. with AWS Lambda. In this context, it is very important that your application starts very fast. Something to be aware of is that GraalVM does not recognize classes which are loaded by means of reflection. You need to provide this information to GraalVM. There is a tracing tool available which helps you create the configuration file needed for that. Another initiative in this context is Spring Native. Spring Native is still experimental, but you can give it a try. With the Spring Native project, Spring has tried to remove as much reflection as possible from the Spring code. Besides that, it helps you with the reflection parts of the dependencies you use. It should almost work out-of-the-box. There is also an interesting article about this topic at Medium. If you want to experiment with GraalVM, the workshop is a good starting point. 2.3 What's Next After Git I am quite interested in version control, so I was curious whether there is already something new cooking in this field. There are two VCSs which might be interesting, but the bottom line is that Git will remain for a while. The first alternative is Fossil. Like Git, it is a distributed system. It does not support rebase, and it runs with an SQLite database. The second alternative is Pijul. Like Git, it is a distributed system. It follows the patch theory, so basically everything is a cherry-pick. More information can be found in the talk Will Git Be Around Forever. 2.4 JetBrains Fleet JetBrains made the Fleet IDE publicly available during Devoxx. Fleet will have support for multiple languages, which removes the need to install a separate IDE for every language. So, no need anymore for IntelliJ, PyCharm, WebStorm, etc. Of course, it still lacks features today, but from now on you can experiment with it, and if you find issues, you can file them with JetBrains. Do note that this is experimental: I have seen quite some tweets in the past weeks from people complaining that some things do not work, but that is exactly why it is still experimental. Give JetBrains the time, and be glad that they want your opinion and feedback. More information can be found at the Fleet website. 2.5 Future of Frontend Development I was quite curious about what can be expected in the near future for frontend development. The conclusion is that the three major frameworks, React, Angular, and Vue, will remain. There are some new frameworks which focus more on reducing client-side processing. These new frameworks can initially release new features and concepts faster, but the three major frameworks are also working on this. 
The assumption is that the new frameworks will not be able to compete with the three major ones, because by that time the major frameworks will have caught up with them. This was a nice talk, so if you want to know more about this topic, just watch it. 2.6 Maven I use Maven as my primary build tool, so I was keen to learn what is going on with Maven currently and in the near future. Maven wrapper: The Maven wrapper is now an Apache project. When you create a Spring Boot project from the Spring Initializr website, you always get a Maven wrapper. It can be used to ensure that everyone uses the same Maven version, and you do not need to have Maven installed in order to build the application. Build/consumer pom decoupling: With Maven 4, the build and consumer pom will be decoupled. What does this mean? Nowadays, the pom file is deployed as-is to a Maven artefact repository, and many tools depend on the structure of this pom file. However, this blocks the Maven project from making enhancements to the pom file that would make developers' lives easier. Therefore, the build and consumer pom need to be decoupled. What can be made easier? For example, when you work with Maven modules, you are currently obliged to add a section for the parent pom. In 99.9% of the cases, the parent pom is located in the directory above the pom of a module, so Maven can sort this out by itself. This means that, in the future, you will not need to add a parent pom section to the pom of a module anymore; Maven will add this section when the pom needs to be deployed to a Maven artefact repository. Improved Reactor: This improvement will make it easier to resume failed builds, especially when you want to build a child pom. More information can be found here. Maven daemon: The Maven daemon keeps the JVM and plugins in memory. This means that when you run consecutive builds, the first build will take some time, but from the second build on, the builds will be faster. I have tried this with a project of mine: mvn clean verify: 38.994 s; with Maven daemon, first build: 40.469 s; with Maven daemon, consecutive build: 31.571 s. Install the Maven daemon and save time! The Maven daemon is also available from SDKMAN. 3. Testing 3.1 Playwright Playwright is an end-to-end testing framework. It is like Selenium but more elaborate. The interesting part was that you can record your test and Playwright will create a template for you, which gives you a head start for creating the automated test. Other interesting parts were that you can verify whether requests are actually sent to the backend and that Playwright can automatically create screenshots for you during the test. This way, you have a complete trace of the test. 3.2 Testing an Onion Architecture This was an interesting talk about test slice annotations. You do not have to use SpringBootTest all of the time in order to test your components. Using test slice annotations, you can speed up your tests significantly. Testing the Controller: use WebMvcTest; Testing a Rest Client: use RestClientTest with WireMock and MockRestServiceServer; Testing the repository: use DataJpaTest with Testcontainers. Slides and the code repository are available. The talk can be viewed here. 3.3 Contract Testing Contract testing is a test approach for verifying whether applications will work together based on a specified contract. It situates itself between integration tests and end-to-end tests. 
The idea is that an independent broker, accessible to both parties, verifies whether your implementation is still valid according to the contract. Such a broker is Pact. If you do not know the consumers of your API, then you should use OpenAPI. A similar broker is Spring Cloud Contract. Pact can be used in a polyglot environment, whereas Spring Cloud Contract cannot. More information can be found in the talk. 4. Security 4.1 SBOM SBOMs (Software Bills of Materials) are becoming more and more important. But how do you create your SBOM? One way to do so is by using CycloneDX. CycloneDX can be run from within Maven, so it should be easy to use. Sonatype also provides an experimental website called BOM Doctor, where you can analyze your SBOM in a graphical way and it will provide a score. I do not yet know how to interpret this graph, but it gives a quick visual impression of, for example, spring-boot-starter-web. Other interesting websites to take a look at are Reproducible Builds and Sigstore. 4.2 Security Knowledge Framework The Security Knowledge Framework is the place to be when you want to learn more about application security. The starting point is the Online Demo. Here you will have access to training material, hands-on labs, security checklists for your requirements, etc. It provides a lot of information and I will need to take the time to dive a bit deeper into it, but it looks very promising and interesting. 5. Artificial Intelligence There were quite some talks about AI. The results that were shown are mind-blowing but also scary. Besides that, AI prompt engineering might turn into a future profession. Some interesting websites to take a look at are: Midjourney Showcase: AI-generated pictures, some of them quite impressive; CodeGeeX: generate code based on a descriptive text; Once Upon a Bot: generate a story based on some input text; Stable Diffusion: experiment with generating images yourself. 6. Other 6.1 Domain Driven Design If you want to get acquainted with DDD, the suggested learning path is the following: Read the blog Domain-Driven Design in 2020 by Alberto Brandolini; Read the book Implementing Domain-Driven Design by Vaughn Vernon; Read the book Patterns, Principles, and Practices of Domain-Driven Design by Nick Tune and Scott Millett; Watch the talk The Power of Value by Dan Bergh Johnsson; Finally, read the book Domain-Driven Design by Eric Evans. Although DDD is commonly used in a microservices architecture, it might also become important in the modular monolith. After Devoxx, I read the Spring blog Introducing Spring Modulith, which should help Spring developers create domain-aligned Spring Boot applications. 6.2 Women in Tech There were quite some talks about the topic Women in Tech. Food for thought: Women believe that they should fit 90% of the requirements of a job application, compared to 60% for men. In order to attract women to our profession, we could try to have a bit less of a bro culture: less beer and pizza, and a bit more of the things women in general feel more comfortable with. A lot of women have been groundbreaking in the software industry; software programming was initially even a profession for women. Have we forgotten that, and do we recognize it enough? 6.3 Learning Through Tinkering In our profession, you have to be a lifelong learner. You cannot learn everything, though, so you need to follow some kind of learning path. The Zone of Proximal Development can help you with that. It consists of three circles. 
What you know: all the things you already know; You can learn this: all the things which, based on your current knowledge, you are able to learn; You cannot learn this yet: all the things which are out of reach at the moment. If you try to learn these last items while skipping the You can learn this phase, you will most likely fail, because you would need to learn too many things at once. If you would like to mentor someone, or you are looking for a mentor yourself, you should definitely check out the Coding Coach website. 7. Conclusion Devoxx 2022 was great and I am glad I was able to attend the event. As you can read in this blog, I learned a lot, and I need to take a closer look at many topics. At least I do not need to search for inspiration for future blogs!
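As a hedged sketch of the Maven daemon workflow mentioned in section 2.6 above (the SDKMAN candidate name mvnd is an assumption to verify):
Shell
# Install the Maven daemon via SDKMAN and run consecutive builds; the warm JVM makes later builds faster
$ sdk install mvnd
$ mvnd clean verify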
November 9, 2022
· 6,240 Views · 2 Likes
article thumbnail
Docker Files and Volumes: Permission Denied
Encountered a ‘Permission Denied’ error on a file copied to a Docker image or when accessing a file on a mounted volume within a Docker container? In this blog, you will learn why you get a ‘Permission Denied’ error and how to solve it. Enjoy! 1. Introduction Before diving into the Permission Denied problem within Docker containers, it is good to have a basic knowledge how permissions function in Linux. If you are already familiar with these concepts, you can skip this paragraph. A good starting point for getting acquainted with permissions can be found at the Ubuntu documentation and this excellent explanation about umask. If you want a quick summary, read on! When you create a new directory blog and list the properties of that directory, you will see the following output in a terminal window: Shell $ mkdir blog $ ls -la drwxrwxr-x 2 user group 4096 Aug 14 06:15 blog/ Let’s examine some items which are listed here from left to right: d Indicates this is a directory. rwx The owner’s permissions. In this case, the owner has read, write and execute permissions. rwx The group’s permissions. A user belongs to one or more groups. In this case, the permissions are identical as for the owner. r-x The other’s permissions. Anyone else, not being the owner or not belonging to the group, will have in this case read and execute permissions. user The directory is owned by this user. Under the hood, this logical name is mapped to a user id (uid). When you have only one user, this will probably be uid 1000. group The directory is owned by this group. Under the hood, this logical name is mapped to a group id (gid). Your gid will probably be gid 1000. When you create a new file defaultpermissions.txt and list the properties of the file, you will see a similar output: Shell $ touch defaultfilepermissions.txt $ ls -la -rw-rw-r-- 1 user group 0 Aug 14 06:20 defaultfilepermissions.txt The permissions are listed in a similar way as for the directory. There is no d as first item because it is not a directory of course and the file does not have any execute permissions. 2. Prerequisites The tests executed in the following paragraphs are executed from within a VirtualBox Virtual Machine (VM) based on Ubuntu 22.04 provided by osboxes.org. This can easily be setup if you follow the instructions in paragraph 2 of a previous post. Once logged in into the VM, docker needs to be installed. At the time of writing, Docker v20.10.14 is used. Shell $ sudo snap install docker You can also execute the tests from you own local installation of Ubuntu, no changes to your system settings are required for executing the tests. When using the OSBoxes VM, the user/group will be osboxes/osboxes. If you are using your own system, the user/group can be retrieved by using the users and groups command. The files used in the paragraphs below are available at GitHub. 3. Container Running as Root In this first test, a file will be copied from the local file system to the Docker image. The base image for the Docker image is the Alpine Linux image. Create a directory 1-defaultcontainer, navigate to the directory and create a test.txt file with some dummy contents. Create a Dockerfile in the same directory with the following contents: Plain Text FROM alpine:3.16.2 COPY test.txt /tmp/test.txt The FROM instruction will use the Alpine Linux 3.16.2 base Docker image and the COPY instruction will copy the local test.txt file into the Docker image at location /tmp/test.txt. 
From a terminal window, build the Docker image: Shell $ sudo docker build -f Dockerfile -t dockertest . [sudo] password for osboxes: Sending build context to Docker daemon 3.072kB Step 1/2 : FROM alpine:3.16.2 3.16.2: Pulling from library/alpine 213ec9aee27d: Pull complete Digest: sha256:bc41182d7ef5ffc53a40b044e725193bc10142a1243f395ee852a8d9730fc2ad Status: Downloaded newer image for alpine:3.16.2 ---> 9c6f07244728 Step 2/2 : COPY test.txt /tmp/test.txt ---> 842ef14a6a73 Successfully built 842ef14a6a73 Successfully tagged dockertest:latest Start the Docker container with interactive mode in order to be able to use the shell: Shell $ sudo docker run --rm -it dockertest /bin/sh Navigate to directory /tmp and list the files: Shell # ls -la -rw-rw-r-- 1 root root 23 Aug 14 10:33 test.txt Notice that the file permissions are preserved, but the user/group is root/root. By default, a Docker container runs as the root user which is a security concern. Try to execute cat test.txt and you will notice that the contents of the file are output. Try to edit the file by means of vi and save the file. This action is also allowed. These results are logical: the root user executes them and root can do anything. Exit the shell by typing exit. In order to ensure that the tests are executed independently from each other, remove the Docker image as follows: Shell $ sudo docker rmi dockertest 4. Container Running as User 1000 This test is similar as the first one, except that you will create a user for the Docker container. This way, the container will not run anymore as the root user, which is a more secure way for running a container. Create a directory 2-containeruser1000, navigate to the directory and create a test.txt file with some dummy contents. Create a Dockerfile in the same directory with the following contents: Dockerfile FROM alpine:3.16.2 RUN addgroup --g 1000 groupcontainer RUN adduser -u 1000 -G groupcontainer -h /home/containeruser -D containeruser USER containeruser COPY test.txt /home/containeruser/test.txt What is happening in this new Dockerfile? With RUN addgroup, a group groupcontainer is created with gid 1000; With RUN adduser, a user containeruser is created with uid 1000, belonging to group groupcontainer and home directory /home/containeruser; With USER containeruser, the container runs with user containeruser; The local test.txt file is copied to the home directory of containeruser. This Dockerfile can be made more efficient in order to reduce the number of layers. For more information about layers, read a previous post about this topic. For sake of simplicity, optimizing the Docker image is not considered here. Build and run the container just like you did before. First check which user is running the container: Shell # whoami containeruser As expected, the container runs as user containeruser. Navigate to the home directory of containeruser and list the files: Shell # ls -la -rw-rw-r-- 1 root root 23 Aug 14 10:58 test.txt This might surprise you, but the owner of the file is still root/root. Try to execute cat test.txt and you will notice that the contents of the file are output. This can be done because other has read permissions. Remember, the container runs as user containeruser now. Try to edit the file with vi and save the file. This is not possible: a warning is raised that the file is read-only. That is because other does not have write permissions. When you are still not convinced, execute the same test but with uid/gid 1024. The results are the same. 
The files are available in the repository in directory 3-containeruser1024. Below the corresponding Dockerfile: Dockerfile FROM alpine:3.16.2 RUN addgroup --g 1024 groupcontainer RUN adduser -u 1024 -G groupcontainer -h /home/containeruser -D containeruser USER containeruser COPY test.txt /home/containeruser/test.txt Remove the Docker image. 5. Container Running as User 1024 and Changed Ownership In this paragraph, you will solve the permission issue. The trick is to change the ownership of the file to the user running the Docker container. Create a directory 4-containeruser1024changedowner. The Dockerfile is: Dockerfile FROM alpine:3.16.2 RUN addgroup --g 1024 groupcontainer RUN adduser -u 1024 -G groupcontainer -h /home/containeruser -D containeruser USER containeruser COPY --chown=containeruser:groupcontainer test.txt /home/containeruser/test.txt In the line containing COPY, the ownership of the test.txt file is changed to user containeruser and group groupcontainer. Build and run the container just like you did before. Navigate to the home directory of user containeruser and list the files: Shell # ls -la -rw-rw-r-- 1 containe groupcon 23 Aug 14 10:58 test.txt Try to execute cat test.txt and you will notice that the contents of the file are output. Try to edit the file with vi and save the file. This is allowed, because this time, containeruser owns the file and has the proper write permissions. Remove the Docker image. 6. Volume Mappings With volume mappings, you will map a local directory to a directory inside the Docker container. This can be more tricky, because you must make some assumptions about the local system permissions, users, groups, etc. And often this just works fine because your local uid/gid is probably 1000/1000 and inside the container this will be similar. With volume mappings, it is important that the uid/gid of the owner is identical outside and inside the container. Let’s see how this works! Create a directory 5-volumemapping and create a directory testdir and a test.txt file with some dummy contents inside this directory. Check the uid/gid of your local user: Shell $ id -u osboxes 1000 $ id -g osboxes 1000 The permissions of the directory are: Shell $ ll drwxrwxr-x 2 osboxes osboxes 4096 Aug 14 04:19 testdir/ The permissions of the file are: Shell $ ll -rw-rw-r-- 1 osboxes osboxes 23 Aug 14 06:58 test.txt This time, you use the following Dockerfile: Dockerfile FROM alpine:3.16.2 RUN addgroup --g 1024 groupcontainer RUN adduser -u 1024 -G groupcontainer -h /home/containeruser -D containeruser USER containeruser RUN mkdir /home/containeruser/testdir Notice that for the test it is important that the uid/gid of your local user and the user inside the container are different. You do not copy the file this time to the container, but with RUN mkdir, you ensure that a directory exists where the local volume can be mapped to. Build the Docker image as before and run the container from inside directory 5-volumemapping as follows. The -v parameter will mount the local testdir directory to the testdir directory into the home directory of user containeruser. Shell $ sudo docker run -v $(pwd)/testdir:/home/containeruser/testdir --rm -it dockertest /bin/sh Navigate to directory /home/containeruser and list the contents: Shell # ls -la drwxrwxr-x 2 1000 1000 4096 Aug 14 10:23 testdir As you can see, the uid/gid has the values 1000/1000 which is the uid/gid of the local system user who has created the directory. 
Navigate to directory testdir and list the contents: Shell # ls -la -rw-rw-r-- 1 1000 1000 23 Aug 14 10:23 test.txt Again, you notice the same ownership for the file as for the directory. Try to read the contents of file test.txt, this succeeds. Try to create a new file test2.txt, this returns a Permission Denied error because other does not have write permissions in this directory. Shell # cat test.txt this is a test message # touch test2.txt touch: test2.txt: Permission denied How to solve this, is excellently explained in this blog. Change the ownership of the directory in order that group 1024 has the ownership on the local system. Shell $ sudo chown :1024 testdir/ Ensure that new files get the group ownership. Shell $ chmod g+s testdir/ Check the directory permissions from inside the container of directory testdir: Shell # ls -la drwxrwxr-x 2 1000 groupcon 4096 Aug 12 10:23 testdir Now you notice that the group groupcontainer has ownership of this directory. Navigate to directory testdir, create a file, edit it with vi and output the contents. All of this is possible now. Shell # touch test2.txt # vi test2.txt # cat test2.txt another test message Check the permissions of the files. Shell # ls -la -rw-rw-r-- 1 1000 1000 23 Aug 14 10:23 test.txt -rw-r--r-- 1 containe groupcon 0 Aug 14 10:37 test2.txt The file test.txt still has its original ownership for uid/gid 1000/1000, the new test2.txt file has ownership for containeruser/groupcontainer. From the local system, it will be possible to read the contents of test2.txt, but it will not be allowed to change its contents due to the read-only permissions for other. Depending on your use case, several solutions exist how to solve this as described in the mentioned blog post. Remove the Docker image. 7. Conclusion Permission Denied errors when copying files into Docker images can be easily solved within the Dockerfile. Just follow the provided solution described in this blog. Permission Denied errors with volume mappings between a local directory and a directory inside the container can be a bit more tricky. Hopefully, the information provided in this blog will help you understand and solve the Permission Denied errors.
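A related technique that this article does not use, shown here only as a hedged sketch: you can override the user the container runs as so that its uid/gid match your host user, which keeps files written to the mounted volume owned by you on the host.
Shell
# Run the container with the host user's uid/gid; note that this uid may not exist inside the image
$ sudo docker run --user $(id -u):$(id -g) -v $(pwd)/testdir:/home/containeruser/testdir --rm -it dockertest /bin/sh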
October 25, 2022
· 5,587 Views · 1 Like
article thumbnail
Main Benefits of a Technical Blog
This blog is a special edition because it is my 100th post! I will explain what this blog has given me in the past five years. If you are planning to start a blog yourself, you may use this list of benefits to get you motivated to get started. 1. Introduction In the beginning of September, I already celebrated the fifth anniversary of my blog. Now I am publishing my 100th post and I am pretty proud of it. It does not seem so long ago that I started my blog, but on the other hand, it also feels like I have been doing this for a long time. At least, I cannot imagine a life without my blog anymore. In the beginning, I really suffered from impostor syndrome: I posted blogs, but did not let anyone in my direct environment know that I had a blog. After a few months, I left this feeling behind and let the world know that I write technical content. In those five years, I only had one or two negative comments and many positive ones, and I really do not bother about the negative ones. In the next section, I will try to list some of the benefits of a technical blog, and if you would like to start a blog yourself, do read Why Start a Technical Blog. Enjoy this post, and here's to the next five years! 2. Benefits Gain knowledge, share knowledge, be visible. 2.1 Knowledge The main benefit is acquiring knowledge. I have noticed that experimenting with some technology gives you good insights, but it is different when you also need to write it down. When you write it down, or need to explain it, you have to dig a bit deeper into the technology. It pushes you to go one step further. 2.2 Sharing Knowledge A side benefit of acquiring knowledge is that you can share it. Although it is a consequence of acquiring knowledge, it deserves its own mention because it is equally important. Even if only one person benefits from reading your post, it was worth writing and sharing. I often refer to my own blogs when I need to explain something to colleagues. If I had not written it down, I would always have to transfer my knowledge verbally. Besides that, I would also need to remember what I investigated a few years ago. My memory is not that good, but I can remember that I wrote a blog about the topic. I even read my own blogs again now and then. 2.3 Visibility Here we have the impostor syndrome again. Do not be afraid to be visible. It will bring you only advantages. People will appreciate what you are doing and will know that you learn many new things. It is also beneficial for your career and opens doors that otherwise remain closed. Opportunities will come your way as long as you blog consistently. 3. List of Blogs As a bonus, you can find below a list of all the blogs I have written in the past five years, ordered oldest first. 
First blog post Connect with git repository in Android Studio Combine git repositories Installation Eclipse with Java 9 Versions Maven Plugin Java 9: Introducing JShell Java 9: Collections, Streams The Scrum Guide 2017 update Java lambdas revisited Java 9 Modules introduction (part 1) Java 9 Modules with IntelliJ and Maven (part 2) Java 9 Modules directives (part 3) Maven git commit id plugin Spring WebFlux: Getting started Spring WebFlux: a basic CRUD application (part 1) Spring WebFlux: a basic CRUD application (part 2) Spring Boot Actuator in Spring Boot 2.0 Project Lombok: Reduce boilerplate code Build and deploy a Spring Boot app on Minikube (part 1) Build and deploy a Spring Boot app on Minikube (part 2) J-Spring 2018 impressions Introducing Red Hat CDK How to version your software Deploy to Kubernetes with Helm Create, install, upgrade, rollback a Helm Chart (part 1) Create, install, upgrade, rollback a Helm Chart (part 2) Git LFS: Why and how to use How to Solve Your Java Performance Problems (Part 1) How to Solve Your Java Performance Problems (Part 2) Speed up Development with Docker Compose Secure Docker in Production Setup Jenkins CI in 30 Minutes Check Docker Images for Vulnerabilities with Anchore Engine Anchore Container Image Scanner Jenkins Plugin Docker Layers Explained First Steps with GCP Kubernetes Engine Deploy Spring Boot App to GCP App Engine First Steps with GCP SQL Automatic Builds at Your Fingertips With GCP Cloud Build Spring Boot and GCP Cloud Pub/Sub Book Review: The Phoenix Project Google Cloud Vision with Spring Boot Discover Your Services With Spring Eureka Kafka Messaging Explored Kafka Streams Explored Kafka Streams: Joins Explored Devoxx Belgium 2019 Impressions Hack the OWASP Goat! Introduction to Spring Kafka Create Fast and Easy Docker Images With Jib Skaffold: k8s Development Made Easy How to Use the Jira API Book Review: The Unicorn Project How to Mock a Rest API in Python What Is Your Test Quality? Mutation Testing With SonarQube Easy Database Migration With Liquibase Easy Integration Testing With Testcontainers Automated Acceptance Testing With Robot Framework How to Write Data Driven Tests With Robot Framework Create Custom Robot Framework Libraries J-Spring Digital 2020 Impressions Parallel Testing With Robot Framework Java Streams By Example Getting Started With React How to Deploy a Spring Boot App to AWS Elastic Beanstalk How to Deploy a Spring Cloud Function on AWS Lambda How to Create an AWS Continuous Deployment Pipeline How to Create an AWS Continuous Deployment Pipeline Cont’d How to Start Contributing to Open Source Getting Started With RSocket Part 1 Getting Started With RSocket Part 2 How to Monitor a Spring Boot App Why Start a Technical Blog Improve Your Robot Framework Tests With Robocop Automated Pen Testing With Zed Attack Proxy Automated Pen Testing With ZAP CLI Automate ZAP With Docker Automated Visual Testing With Robot Framework How to Create an AWS EC2 VM How to Create an AWS ALB and ASG How to Deploy a Spring Boot App on AWS ECS Cluster What’s New Between Java 11 and Java 17? 
How to Deploy a Spring Boot App on AWS Fargate How to Create an AWS CloudFormation Fargate Template J-Fall 2021 Impressions How to Use Amazon SQS in a Spring Boot App Jenkins Multibranch Pipeline and Git LFS AWS Lambda Versions and Aliases Explained By Example How to Secure AWS API Gateway With Cognito User Pool Generate Server Code Using OpenAPI Generator An Introduction to AWS Serverless Application Model How to Get Started With Vaadin Flow An Introduction to AWS Step Functions How to Manage Your JDKs With SDKMAN How to Generate Fake Test Data How to Setup an Ansible Test Environment How to Pass the AWS Certified Developer – Associate Exam An Introduction to Ansible Inventory
October 15, 2022
· 4,985 Views · 3 Likes
article thumbnail
An Introduction to Ansible Inventory
In this post, you will learn how to set up a basic Ansible Inventory. Besides that, you will learn how to encrypt sensitive information by means of Ansible Vault. Enjoy! 1. Introduction In a previous post, you learned how to set up an Ansible test environment. In this post, you will start using the test environment. Just as a reminder, the environment consists of one Controller and two Target machines. The Controller and Target machines run in a VirtualBox VM. Development of the Ansible scripts is done with IntelliJ on the host machine. The files are synchronized from the host machine to the Controller by means of a script. In this blog, you will create an inventory file. The inventory file contains information about the Target machines in order for the Controller to locate and access the machines for executing tasks. The inventory file will also contain sensitive information such as the password being used for accessing the Target machines. In a second part of this blog you will solve this security problem by means of Ansible Vault. The files being used in this blog are available in the corresponding git repository at GitHub. 2. Prerequisites The following prerequisites apply to this blog: You need an Ansible test environment, see a previous blog how to set up a test environment; If you use your own environment, you should know that Ubuntu 22.04 LTS is used for the Controller and Target machines and Ansible version 2.13.3; Basic Linux knowledge. 3. Create an Inventory File The Ansible Controller will need to know some information about the Targets in order to be able to execute tasks. This information can be easily provided by means of an inventory file. Within an inventory, you will specify the name of the Target, its IP address, how to connect to the Target, etc. Take a look at the Ansible documentation for all the details. In this section, you will experiment with some of the inventory features. By default, Ansible will search for the inventory in /etc/ansible/hosts but you can also provide a custom location for the inventory when executing Ansible. That is what you will do in this section. Create in the root of the repository a directory inventory and create an inventory.ini file. Add the following content to the file: Plain Text target1 target2 [targets] target1 target2 [target1_group] target1 [target2_group] target2 [target_groups:children] target1_group target2_group The first two lines contain the names for the Target machines. You can give this any name you would like, but in this case, you just call them target1 and target2. When you want to address several machines at once, you can create groups. A group is defined between square brackets followed by the list of machines belonging to this group. In the inventory above, you can recognize group targets which contains target1 and target2. This group is not really necessary, because by default a group all exists which is equal to the group targets in this case. The groups target1_group and target2_group are for illustrative purposes and do not make much sense because they contain only one machine. However, in real life, you can imagine to have groups for application machines, database machines, etc. or you might want to group machines by region for example. You can also define a group of groups like target_groups. You need to add :children to the definition and then you can combine several groups into a new group. The group target_groups consists of the group target1_group and target2_group. 
This actually means that group target_groups consists of machines target1 and target2. 4. Define Variables The inventory file you created just contains names of machines and groups. But this information is not enough for Ansible to be able to locate and connect to the machines. One approach is to add variables in the inventory file containing this information. A better approach is to define a directory host_vars containing subdirectories for each machine containing the variables. Ansible will scan these directories in order to find the variables for each machine. You can also define variables for the groups. In this case, you create a directory group_vars. Create in directory inventory a directory host_vars containing the directories target1 and target2. The directory tree of directory inventory looks as follows: Plain Text ├── host_vars │ ├── target1 │ └── target2 └── inventory.ini Create in directory target1 a file vars with the following contents: YAML ansible_host: 192.168.2.12 ansible_connection: ssh ansible_user: osboxes ansible_ssh_pass: osboxes.org The variables defined here are some special variables for Ansible to be able to locate and connect to the machine: ansible_host: the IP address of the target1 machine; ansible_connection: the way you want to connect to target1; ansible_user: the system user Ansible can use to execute tasks onto the machine; ansible_ssh_pass: the password of the ansible_user. Do not store passwords in plain text in real life! This is only done for testing purposes and a proper solution is provided later on this post. Note that you can also define these variables in the inventory file on the same line as where you define the name of the machine. In this case, the variables need to be defined as key=value (with an equal sign and not with a colon). Add a vars file to directory target2 with similar contents but with the connection values for target2. 5. Test Inventory Settings Now it is time to do some testing in order to verify whether it works. Start the Controller and the two Target machines. Synchronize the files you created to the Controller machine and navigate in a terminal window to the MyAnsiblePlanet directory. Connect once manually to both Target machines so that the SSH fingerprint is available onto the Controller machine, otherwise you will get an error message when Ansible tries to connect to the Target machines. Shell $ ssh osboxes@192.168.2.12 $ ssh osboxes@192.168.2.13 With the following command, you will ping the target1 machine. The command consists of the following items: ansible: The Ansible executable; target1: The name of the machine where you want to execute the task. This corresponds to the name in the inventory; -m ping: Execute the ping command; -i inventory/inventory.ini: The path of the inventory file. The command to execute: Shell $ ansible target1 -m ping -i inventory/inventory.ini target1 | SUCCESS => { "ansible_facts": { "discovered_interpreter_python": "/usr/bin/python3" }, "changed": false, "ping": "pong" } The response indicates a success. Execute the same command but for the target2 machine. The result should also be a success response. Just like you can execute a task on a single machine, you can also execute a task on a group. 
Execute the command for the targets group: Shell $ ansible targets -m ping -i inventory/inventory.ini target2 | SUCCESS => { "ansible_facts": { "discovered_interpreter_python": "/usr/bin/python3" }, "changed": false, "ping": "pong" } target1 | SUCCESS => { "ansible_facts": { "discovered_interpreter_python": "/usr/bin/python3" }, "changed": false, "ping": "pong" } As you can see, the command is executed on both Target machines, as expected. Execute the command for the other groups as well. 6. Encrypt Password Now that you know that the inventory configuration works as expected, it is time to get back to the plain text password problem. This can be solved by using Ansible Vault. Ansible Vault probably deserves its own blog, but in this section you are just going to apply one way of encrypting sensitive information. The encryption will be done for the target1 machine. Create a file vault in the directory inventory/host_vars/target1 and copy the ansible_ssh_pass variable to this vault file. Change the variable name from ansible_ssh_pass to vault_ansible_ssh_pass. YAML vault_ansible_ssh_pass: osboxes.org In the vars file, you replace the plain text password with a reference to this new vault_ansible_ssh_pass variable using Jinja2 syntax. Note that it is also required to add double quotes around the reference. YAML ansible_host: 192.168.2.12 ansible_connection: ssh ansible_user: osboxes ansible_ssh_pass: "{{ vault_ansible_ssh_pass }}" Encrypt the vault file with the password itisniceweather (or whatever password you would like). Shell $ ansible-vault encrypt inventory/host_vars/target1/vault New Vault password: Confirm New Vault password: Encryption successful The contents of the vault file are now encrypted. Plain Text $ANSIBLE_VAULT;1.1;AES256 34353662643861663663363161366239343633636561663564653030663134623266323363353433 6233383939396335343639623165306330393031383836320a616430336132643638333862363965 36303837313239386566633332326165663336363464623437383638333936613038663366343833 3737316665323230620a343163356138656535363837646566643962393366353266613462616437 32346531613637396666623864333330643261366139306162373038633636633934326165616438 6565363034333137623539643539666234386339393965663362 The password you have used for encrypting the file should be saved in a password manager; Ansible will need it to decrypt the password. Try to execute the ping command for target1 like you did before. Shell $ ansible target1 -m ping -i inventory/inventory.ini ERROR! Attempting to decrypt but no vault secrets found This fails because Ansible cannot decrypt the password field. Add the parameter --ask-vault-pass to the command so that Ansible asks you for the vault password. Shell $ ansible target1 -m ping -i inventory/inventory.ini --ask-vault-pass Vault password: target1 | SUCCESS => { "ansible_facts": { "discovered_interpreter_python": "/usr/bin/python3" }, "changed": false, "ping": "pong" } And now it works again! This is a better way of handling sensitive information in your Ansible files. There are several other ways of handling sensitive information as well; as said before, Ansible Vault deserves its own blog. In the meantime, more information can be found in the Ansible documentation. 7. Conclusion In this post, you learned the basics of an Ansible Inventory file and how to encrypt sensitive information in the inventory file. You have gained the basic skills to start setting up an inventory file for your own environment.
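A quick, hedged way to double-check the group structure described in this post is the ansible-inventory command (adding --ask-vault-pass if the vault-encrypted host variables need to be loaded):
Shell
# Print the groups and hosts as a tree to verify the inventory definitions
$ ansible-inventory -i inventory/inventory.ini --graph --ask-vault-pass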
September 27, 2022
· 3,767 Views · 1 Like

Comments

Running LLMs Locally: A Step-by-Step Guide

Dec 28, 2023 · Gunter Rotsaert

Interesting question. I have read about the existence of Ollama, but I did not use it.

A quick look at the documentation tells me that Ollama has a limited number of built-in models available, while LocalAI can be extended with a model of your choice (which you can download from HuggingFace). Another difference seems to be the REST API: LocalAI uses the same REST API as OpenAI, which makes it easier to swap between LocalAI and OpenAI. Ollama seems to have a different API specification.
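For example (a minimal sketch, assuming LocalAI runs on its default port 8080; the model name is just a placeholder), calling the OpenAI-style chat completions endpoint of LocalAI looks like this:

$ curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{"model": "your-model", "messages": [{"role": "user", "content": "Hello"}]}'

The same request, pointed at api.openai.com and with an API key added, works against OpenAI, which is exactly what makes swapping between the two easy.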

Devoxx Belgium 2023 Takeaways

Oct 30, 2023 · Gunter Rotsaert

you are welcome and thank you for the comment

How To Generate Spring Properties Documentation

Oct 24, 2023 · Gunter Rotsaert

You are welcome :-) Thank you for creating and maintaining this plugin!

How to Write Data Driven Tests With Robot Framework

Apr 30, 2023 · Gunter Rotsaert

Some people nowadays only 'scan' blog posts instead of taking the time to actually 'read' them. But... they do find the time to express their frustration in a lengthy comment. This is an example of spending time in a negative way and purely a waste of time. So, my advice to you is: if you really want to learn something, you need to invest time. Read the introduction again, click the link to the GitHub repository, and find the answers to your frustration.

How To Create a GraalVM Docker Image

Apr 13, 2023 · Gunter Rotsaert

Thank you, and what you mention is interesting. Do you have a link to an example for me?

How To Create a GraalVM Docker Image

Apr 06, 2023 · Gunter Rotsaert

Thanks for the nice comment. In my opinion, you have two options: GraalVM Native Image or SnapStart. This workshop is very insightful: https://catalog.workshops.aws/java-on-aws-lambda/en-US

How To Build an SBOM

Mar 09, 2023 · Gunter Rotsaert

Thank you very much, very much appreciated

Docker Best Practices

Dec 09, 2022 · Gunter Rotsaert

thank you :-)

Main Benefits of a Technical Blog

Oct 23, 2022 · Gunter Rotsaert

Thank you for the nice comment, I fully agree with you :-)

A Complete Guide About Scaled Agile Framework (SAFe)?

May 26, 2022 · Arvind Sarin

SAFe has nothing in common with agile; see also the opinions of leading experts: https://www.smharter.com/blog/safe-a-collection-of-comments-from-leading-experts/

An Introduction to AWS Step Functions

Apr 06, 2022 · Gunter Rotsaert

Thank you for the comment, always nice to receive feedback.

Easy Database Migration With Liquibase

Dec 25, 2021 · Gunter Rotsaert

Thank you for your comment. I would refer to the official Liquibase documentation for that. If it is not available in Liquibase by default, you still have the opportunity to use the sql change type, which allows you to execute SQL statements: https://docs.liquibase.com/change-types/community/sql.html

Code Review: A Comprehensive Checklist

Nov 13, 2021 · Alex Omeyer

The problem with a checklist is that people will eventually tend to use only the checklist and will not think outside it anymore. Therefore, a checklist for code reviews is in my opinion a bad idea. It would have been better if you had called it 'Items you at least need to address during a code review'. It might seem like a mere linguistic thing, but for many people the term checklist means: if I only check these items, it will be OK.

What’s New Between Java 11 and Java 17?

Oct 11, 2021 · Gunter Rotsaert

thank you for the nice feedback, I really appreciate this.

Java 9 Modules (Part 1): Introduction

Aug 31, 2021 · Gunter Rotsaert

I do not have a Windows machine available right now, but did you try to use backslashes in the javac command instead of forward slashes?

Java 9 Modules (Part 2): IntelliJ and Maven

Jun 09, 2021 · Gunter Rotsaert

I do not really understand your question, but in both cases the module-info file is at the same level as where the package starts. Part 1 is a simple Java project; Part 2 is a Maven project. Both projects can be opened with the IntelliJ IDE. You can also reach out to me by mail at gunter@mydeveloperplanet.com

Top 20 Dockerfile Best Practices

Mar 14, 2021 · Álvaro Iradier

Great blog! I would like to add that CIS also provides a tool for checking vulnerabilities in your Docker images. I wrote a blog about it some time ago: https://mydeveloperplanet.com/2019/01/16/secure-docker-in-production/ It can help you identify security issues.

Create Fast and Easy Docker Images With Jib

Nov 13, 2020 · Gunter Rotsaert

Thank you for your comment. But what does 'properly' mean in 'After configuring the credStore properly', more specifically?

The Five Ideals and The Unicorn Project

Nov 04, 2020 · Gene Kim

I can recommend this book, see also my review at https://mydeveloperplanet.com/2020/02/26/book-review-the-unicorn-project/

What Are the Responsibilities of a SAFe Agilist?

Jun 11, 2020 · Agilewaters Consulting

According to leading agile experts (several of whom created the Agile Manifesto), SAFe is anything but agile: https://www.smharter.com/blog/safe-a-collection-of-comments-from-leading-experts/

How to Mock a Rest API in Python

Jun 07, 2020 · Gunter Rotsaert

Sources for this blog have been moved to branch feature/blog: https://github.com/mydeveloperplanet/jiratimereport/tree/feature/blog

How to Use the Jira API

Jun 07, 2020 · Gunter Rotsaert

Sources for this blog have been moved to branch feature/blog: https://github.com/mydeveloperplanet/jiratimereport/tree/feature/blog

Easy Integration Testing With Testcontainers

May 24, 2020 · Gunter Rotsaert

2. Also in paragraph 3: it is not necessary to add the following to the unit test:

@Container
static PostgreSQLContainer postgreSQL = new PostgreSQLContainer();

@DynamicPropertySource
static void postgreSQLProperties(DynamicPropertyRegistry registry) {
registry.add("spring.datasource.url", postgreSQL::getJdbcUrl);
registry.add("spring.datasource.username", postgreSQL::getUsername);
registry.add("spring.datasource.password", postgreSQL::getPassword);
}

The errata above have been fixed in the original blog post: https://mydeveloperplanet.com/2020/05/05/easy-integration-testing-with-testcontainers/

Easy Integration Testing With Testcontainers

May 24, 2020 · Gunter Rotsaert

There are two errata in the post:

1. In paragraph 3, the properties file contains one } too many:
${embedded.postgresql.user}}

Java 9 Modules (Part 3): Directives

May 19, 2020 · Gunter Rotsaert

yes, you are right, it should be 'list of modules'. Thank you for the comment. I cannot change it here at DZone anymore, but will do so at my personal blog.

Build Docker Image From Maven

May 11, 2020 · Preetdeep Kumar

The dockerfile Maven plugin seems easier to use in this case, in my opinion. An example can be found at: https://mydeveloperplanet.com/2018/05/16/build-and-deploy-a-spring-boot-app-on-minikube-part-1/

Deploy Spring Boot App to GCP App Engine

Jan 12, 2020 · Gunter Rotsaert

Set Up Jenkins CI in 30 Minutes

Oct 27, 2019 · Gunter Rotsaert

Thank you for the feedback. I agree that it would have been better to add jenkins to the docker group.

The pros and cons of this setup are clearly described in the first part of paragraph 4.3. It is clearly stated that this setup should only be used in a playground environment. Your comment, however, implies that you did not read the article thoroughly.

It would have been better if you had provided a link to an article describing how you think it should have been done. That would have been positive feedback instead of just criticizing and giving vague advice.

Set Up Jenkins CI in 30 Minutes

Oct 02, 2019 · Gunter Rotsaert

Did you execute the command?

docker inspect myjenkins

It should show you the IP Address to use. What URL are you using?


Google Cloud Vision With Spring Boot

Sep 15, 2019 · Gunter Rotsaert

Of course, see the last line of the introduction; links to the Java and Python repositories are available there.

Java 9 Modules (Part 1): Introduction

Jun 24, 2019 · Gunter Rotsaert

First of all, you will have to look at the adoption of JDK 11. An InfoQ prediction expects 10% adoption of JDK 11 by the end of 2019 (https://www.infoq.com/news/2018/12/java-2019-predictions/). I don't think that everyone who adopts JDK 11 will immediately start with modules, so I assume module adoption will lag even further behind.

Deploy Spring Boot App to GCP App Engine

Apr 23, 2019 · Gunter Rotsaert

Hi Thiago, I am not completely sure, but as far as I know, you can define properties in your pom file and use them in the appengine-web.xml. If you can do that, then you can use Maven profiles in order to set the correct property: you define a dev profile and a prod profile. Hope this guides you in the right direction.

Spring Boot Actuator in Spring Boot 2.0

Sep 24, 2018 · Gunter Rotsaert

You can enable it in any Spring Boot app. Spring Actuator will create the endpoints for you. It is up to you to enable the different endpoints in the application.properties.

Kubernetes Local Development With Minikube on Hyper-V Windows 10

Aug 31, 2018 · Ion Mudreac

Is it working fine on Windows? According to the installation instructions of Minikube, Windows support is still experimental: https://github.com/kubernetes/minikube/releases

I had several problems in the past installing Minikube on Hyper-V, then turned to VirtualBox, but this also gave problems. Eventually, I turned to using an Ubuntu VM and installed Minikube without using a vm-driver.

Spring Webflux: Getting started

Aug 04, 2018 · Gunter Rotsaert

I have no experience with an Image Based RESTful service, therefore I cannot answer your question.

Spring Webflux: Getting started

Aug 04, 2018 · Gunter Rotsaert

The Spring example has a GreetingRouter with a method route which is annotated as a Bean. I first created my own example ExampleRouter, also with a method route annotated as a Bean. The problem is that two beans with the same name then exist, which causes the error. When you give the bean a unique name (exampleRouter in this case), the problem is solved. Hope this explains it well enough.

Java 9 Modules (Part 2): IntelliJ and Maven

Aug 04, 2018 · Gunter Rotsaert

Common libraries are placed in the parent pom. The module poms will inherit the libraries of the parent pom. If for some reason you need another version of a library in your module pom, you can override it there. It is an approach similar to inheritance in Java. Hope this helps.

Spring Webflux: A Basic CRUD Application (Part 1)

Jun 13, 2018 · Gunter Rotsaert

Hello José,

you will use Spring Webflux whenever you need asynchronous processing, and with microservices. Take a look at the following article: https://dzone.com/articles/patterns-for-microservices-sync-vs-async It is explained quite well.

Using GitHub as a Maven Repository

May 19, 2018 · Anupam Gogoi

I don't really understand why you would do something like this. It is far easier (and cheap if you run Nexus OSS) to run a Docker instance of Nexus instead of abusing a VCS for this. Besides that, a repository manager does far more than only store your own artifacts: it also serves as a proxy for your Maven libraries and provides tooling for managing your artifacts.

Maybe your solution is cheap because you don't have to run Nexus yourself, but you will spend extra effort (and therefore extra cost) on managing your artifacts manually with the GitHub solution, and you are abusing a VCS.

Java 9 Modules (Part 2): IntelliJ and Maven

May 13, 2018 · Gunter Rotsaert

Thank you for the feedback. I just tried the command again and it works for me with the semicolon. I guess you mean the semicolon in the module-path, right? I also checked the Oracle documentation and it says to use a semicolon: https://docs.oracle.com/javase/9/tools/java.htm#JSWOR624

Introducing the Maven git commit id Plugin

Feb 24, 2018 · Gunter Rotsaert

First of all, I did not create the plugin, I am a user of the plugin. No credits for me for creating it ;-)

I have read the post you mentioned, which is also interesting. It seems to me, but correct me if I am wrong, that the approach described there takes more effort and is less flexible in the information you can put into a version properties (or class) file.

The git commit id plugin is limited to git, that is correct. The approach with the buildnumber maven plugin is less SCM dependent. So, if you are using an SCM other than git, you will need to fall back to the buildnumber maven plugin.

I think a properties file is more convenient than a Version.java class:

* the properties file can be read by anyone without having to run the application. E.g. if someone quickly wants to see the versioning information, the data can be read with a text editor

* a properties file does not prevent you from having a Version.java class. You can still create your Version.java class, read the properties file, and then retrieve the versioning information within your Java application. I think this would be the most flexible approach.

To summarize, the approach in the link you provided is also a valid way. Which solution you choose depends on your application, tooling, and needs. In any case, make sure that the versioning information is generated and not manually maintained.

Java 9 Module Services

Jan 14, 2018 · Michael_Gates

You state that 'In addition, if the service is not in the application module, then the module declaration must have a requires directive that specifies the module which exports the service.' is not true. But the application module is the Consumer module, and the Consumer module must have a requires directive for the ServiceInterface module (which exports the service). According to the example you gave, this is correct. So, the statement in the JDK documentation is correct, right?

Java 9 Modules (Part 1): Introduction

Jan 13, 2018 · Gunter Rotsaert

That is an interesting question. A drawback would be that there is extra configuration to make (certainly if you are planning to do this manually). However, I believe that the stronger encapsulation and the possibility to use a ‘small’ JVM are the advantages developers will deal with the most. The cost of the extra configuration is small, as long as the IDE offers enough support for this. In my next post, I will show how modules can be used with Maven and IntelliJ. IntelliJ makes it quite easy to manage the module-info in such a way that it costs almost no extra effort. Besides that, I have not yet migrated a legacy system to Java 9 Modules; there might be issues with that. Currently, I see no reasons for not using Java Modules. Any other opinions are welcome, of course.

