Tools

Development and programming tools are used to build frameworks and to create, debug, and maintain programs — and much more. The resources in this Zone cover topics such as compilers, database management systems, code editors, and other software tools, and can help ensure engineers are writing clean code.

Latest Refcards and Trend Reports
Trend Report: Kubernetes in the Enterprise
Refcard #366: Advanced Jenkins
Refcard #378: Apache Kafka Patterns and Anti-Patterns

DZone's Featured Tools Resources

Efficient Message Distribution Using AWS SNS Fanout
By Satrajit Basu
In the world of cloud computing and event-driven applications, efficiency and flexibility are absolute necessities. A critical component of such an application is message distribution. A proper architecture ensures that there are no bottlenecks in the movement of messages, and a smooth flow of messages in an event-driven application is the key to its performance and efficiency. The volume of data generated and transmitted these days is growing at a rapid pace, and traditional methods often fall short in managing this volume and scale, leading to bottlenecks that impact the performance of the system. Simple Notification Service (SNS), a native pub/sub messaging service from AWS, can be leveraged to design a distributed messaging platform. SNS acts as the supplier of messages to various subscribers, maximizing throughput and enabling effortless scalability. In this article, I'll discuss the SNS Fanout mechanism and how it can be used to build an efficient and flexible distributed messaging system.

Understanding AWS SNS Fanout

Distributing and processing messages rapidly, reliably, and efficiently is a critical requirement of modern cloud-native applications. SNS Fanout can serve as a message distributor to multiple subscribers at once. The core component of this architecture is a message topic in SNS. Now, suppose I have several SQS queues that subscribe to this topic. Whenever a message is published to the topic, it is rapidly distributed to all the queues that are subscribed to it. In essence, SNS Fanout acts as a mediator that ensures your message gets broadcast swiftly and efficiently, without the need for individual point-to-point connections. Fanout can work with various subscribers like Firehose delivery streams, SQS queues, Lambda functions, and so on. However, I think that SQS subscribers bring out the real flavor of distributed message delivery and processing. By integrating SNS with SQS, applications can handle message bursts gracefully without losing data and maintain a smooth flow of communication, even during peak traffic times.

Let's take the example of an application that receives messages from an external system. Each message needs to be stored, transformed, and analyzed. These steps are not dependent on each other and so can run in parallel. This is a classic scenario where SNS Fanout can be used: the application would have three SQS queues subscribed to an SNS topic. Whenever a message gets published to the topic, all three queues receive the message simultaneously. The queue listeners subsequently pick up the message, and the steps can be executed in parallel. This results in a highly reliable and scalable system.

The benefits of leveraging SNS Fanout for message dissemination are many. It enables real-time notifications, which are crucial for time-sensitive applications where response time is a major KPI. Additionally, it significantly reduces latency by minimizing the time it takes for a message to travel from its origin to its destination(s), much like delivering news via a broadcast rather than mailing individual letters.

Why Choose SNS Fanout for Message Distribution?

As organizations grow, so does the volume of messages they must manage, so scalability plays an important role. The scalability of an application ensures that as data volume or event frequency within the system increases, the performance of the message distribution system is not negatively impacted.
SNS Fanout shines in its ability to handle large volumes of messages effortlessly. Whether you're sending ten messages or ten million, the service automatically scales to meet demand. This means your applications can maintain high performance and availability, regardless of workload spikes.

When it comes to cost, SNS stands out from traditional messaging systems. Traditional systems may require upfront investments in infrastructure and ongoing maintenance costs, which can ramp up quickly as scale increases. SNS, being a managed AWS service, operates on a pay-as-you-go model where you only pay for what you use. This approach leads to significant savings, especially when dealing with variable traffic patterns.

The reliability and redundancy features of SNS Fanout are also worth noting. High-traffic scenarios often expose weak links in messaging systems. However, SNS Fanout is designed to ensure message delivery even when the going gets tough. SNS supports cross-account and cross-region message delivery, thereby creating redundancy. This is like having several backup roads when the main highway is congested; traffic keeps moving, just through different paths.

Best Practices

Embarking on the journey to maximize your message distribution with AWS SNS Fanout begins with a clear, step-by-step setup. The process starts with creating an SNS topic — think of it as a broadcasting station. Once your topic is ready, you can move on to attaching one or more SQS queues as subscribers; these act as the receivers for the messages you'll be sending out. It's essential to ensure that the right permissions are in place so that the SNS topic can write to the SQS queues. Don't forget to set up dead-letter queues (DLQs) for handling message delivery failures. DLQs are your safety net, allowing you to deal with undeliverable messages without losing them.

For improved performance, configuring your SQS subscribers properly is crucial. Set appropriate visibility timeouts to prevent duplicate processing and adjust the message retention period to suit your workflow. This means not too long—avoiding clutter—and not too short—preventing premature deletion. Keep an eye on the batch size when processing messages: finding the sweet spot can lead to significant throughput improvements. Also, consider enabling long polling on your SQS queues: this reduces unnecessary network traffic and can lead to cost savings.

Even the best-laid plans sometimes encounter hurdles, and with AWS SNS Fanout, common challenges include dealing with throttling and ensuring the order of message delivery. Throttling can be mitigated by monitoring your usage and staying within the service limits, or by requesting a limit increase if necessary. As for message ordering, while SNS doesn't guarantee order, you can sequence messages on the application side using message attributes. When troubleshooting, always check the CloudWatch metrics for insights into what's happening under the hood. And remember, the AWS support community is a goldmine for tips and solutions from fellow users who might've faced similar issues.
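To make the setup steps above concrete, here is a minimal sketch of the fanout wiring using the AWS SDK for Java v2. It is my own illustration rather than code from the article, the topic and queue names are made up, and the SQS access policy that allows the topic to deliver to the queue is omitted for brevity.

Java
import java.util.Map;
import software.amazon.awssdk.services.sns.SnsClient;
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.QueueAttributeName;

public class FanoutSetup {
    public static void main(String[] args) {
        try (SnsClient sns = SnsClient.create(); SqsClient sqs = SqsClient.create()) {
            // The "broadcasting station": an SNS topic
            String topicArn = sns.createTopic(b -> b.name("orders-topic")).topicArn();

            // One receiver; repeat for the store/transform/analyze queues of the earlier example
            String queueUrl = sqs.createQueue(b -> b.queueName("orders-store-queue")).queueUrl();
            String queueArn = sqs.getQueueAttributes(b -> b
                            .queueUrl(queueUrl)
                            .attributeNames(QueueAttributeName.QUEUE_ARN))
                    .attributes().get(QueueAttributeName.QUEUE_ARN);

            // Subscribe the queue to the topic; raw delivery drops the SNS JSON envelope
            sns.subscribe(b -> b
                    .topicArn(topicArn)
                    .protocol("sqs")
                    .endpoint(queueArn)
                    .attributes(Map.of("RawMessageDelivery", "true")));

            // A single publish now fans out to every subscribed queue
            sns.publish(b -> b.topicArn(topicArn).message("{\"orderId\":42}"));
        }
    }
}

In the store/transform/analyze example described earlier, you would create three queues and subscribe each of them to the same topic.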
Conclusion

In our journey through the world of AWS SNS Fanout, we've uncovered a realm brimming with opportunities for efficiency and flexibility in message distribution. The key takeaways are clear: AWS SNS Fanout stands out as a sterling choice for broadcasting messages to numerous subscribers simultaneously, ensuring real-time notifications and reduced latency. But let's distill these advantages down to their essence one more time before we part ways.

The architecture of AWS SNS Fanout brings forth a multitude of benefits. It shines when it comes to scalability, effortlessly managing an increase in message volume without breaking a sweat. Cost-effectiveness is another feather in its cap, as it sidesteps the hefty expenses often associated with traditional messaging systems. And then there's reliability – the robust redundancy features of AWS SNS Fanout mean that even in the throes of high traffic, your messages push through unfailingly.

By integrating AWS SNS Fanout into your cloud infrastructure, you streamline operations and pave the way for a more responsive system. This translates not only into operational efficiency but also into a superior experience for end users who rely on timely information.
Top Secrets Management Tools for 2024
By Greg Bulmash
Managing your secrets well is imperative in software development. It's not just about avoiding hardcoding secrets into your code, your CI/CD configurations, and more. It's about implementing tools and practices that make good secrets management almost second nature.

A Quick Overview of Secrets Management

What is a secret? It's any bit of code, text, or binary data that provides access to a resource or data that should have restricted access. Almost every software development process involves secrets: credentials for your developers to access your version control system (VCS) like GitHub, credentials for a microservice to access a database, and credentials for your CI/CD system to push new artifacts to production.

There are three main elements to secrets management:

1. How are you making secrets available to the people and resources that need them?
2. How are you managing the lifecycle and rotation of your secrets?
3. How are you scanning to ensure that secrets are not being accidentally exposed?

We'll look at elements one and two in terms of the secrets managers in this article. For element three, well, I'm biased toward GitGuardian because I work there (disclaimer achieved). Accidentally exposed secrets don't necessarily get a hacker into the full treasure trove, but even if they only help a hacker get a foot in the door, it's more risk than you want. That's why secrets scanning should be a part of a healthy secrets management strategy.

What To Look for in a Secrets Management Tool

In the Secrets Management Maturity Model, hardcoding secrets into code in plaintext and then maybe running a manual scan for them is at the very bottom. Manually managing unencrypted secrets, whether hardcoded or in a .env file, is considered immature. To get to an intermediate level, you need to store them outside your code, encrypted, and preferably well-scoped and automatically rotated.

It's important to differentiate between a key management system and a secrets management system. Key management systems are meant to generate and manage cryptographic keys. Secrets managers will take keys, passwords, connection strings, cryptographic salts, and more, encrypt and store them, and then provide access to them for personnel and infrastructure in a secure manner. For example, AWS Key Management Service (KMS) and AWS Secrets Manager (discussed below) are related but distinct offerings from Amazon.

Besides providing a secure way to store and provide access to secrets, a solid solution will offer:

Encryption in transit and at rest: The secrets are never stored or transmitted unencrypted.
Automated secrets rotation: The tool can request changes to secrets and update them in its files in an automated manner on a set schedule.
Single source of truth: The latest version of any secret your developers/resources need will be found there, and it is updated in real time as keys are rotated.
Role/identity-scoped access: Different systems or users are granted access to only the secrets they need under the principle of least privilege. That means a microservice that accesses a MongoDB instance only gets credentials to access that specific instance and can't pull the admin credentials for your container registry.
Integrations and SDKs: The service has APIs with officially blessed software to connect common resources like CI/CD systems or implement access in your team's programming language/framework of choice.
Logging and auditing: You need to check your systems periodically for anomalous results as a standard practice; if you get hacked, the audit trail can help you track how and when each secret was accessed.
Budget and scope appropriate: If you're bootstrapping with 5 developers, your needs will differ from those of a 2,000-developer company with federal contracts. Being able to pay for what you need at the level you need it is an important business consideration.

The Secrets Manager List

CyberArk Conjur Secrets Manager Enterprise

Conjur was founded in 2011 and was acquired by CyberArk in 2017. It has grown to be one of the premier secrets management solutions thanks to its robust feature set and large number of SDKs and integrations. With Role-Based Access Controls (RBAC) and multiple authentication mechanisms, it makes it easy to get up and running using existing integrations for top developer tools like Ansible, AWS CloudFormation, Jenkins, GitHub Actions, Azure DevOps, and more. You can scope secrets access to the developers and systems that need the secrets. For example, a Developer role that accesses Conjur for a database secret might get a connection string for a test database when they're testing their app locally, while the application running in production gets the production database credentials.

The CyberArk site boasts an extensive documentation set and robust REST API documentation to help you get up to speed, while their SDKs and integrations smooth out a lot of the speed bumps. In addition, GitGuardian and CyberArk have partnered to create a bridge to integrate CyberArk Conjur and GitGuardian's Has My Secrets Leaked. This is now available as an open-source project on GitHub, providing a unique solution for security teams to detect leaks and manage secrets seamlessly.

Google Cloud Secret Manager

When it comes to choosing Amazon Web Services (AWS), Google Cloud Platform (GCP), or Microsoft Azure, it's usually going to come down to where you're already investing your time and money. In a multi-cloud architecture, you might have resources spread across the three, but if you're automatically rotating secrets and trying to create consistency for your services, you'll likely settle on one secrets manager as a single source of truth for third-party secrets rather than spreading secrets across multiple services. While Google is behind Amazon and Microsoft in market share, it sports the features you expect from a service competing for that market, including:

Encryption at rest and in transit for your secrets
CLI and SDK access to secrets
Logging and audit trails
Permissioning via IAM
CI/CD integrations with GitHub Actions, HashiCorp Terraform, and more
Client libraries for eight popular programming languages

Again, whether to choose it is more about where you're investing your time and money than about a killer function in most cases.

AWS Secrets Manager

Everyone with an AWS certification, whether developer or architect, has heard of or used AWS Secrets Manager. It's easy to get it mixed up with AWS Key Management Service (KMS), but Secrets Manager is simpler: KMS creates, stores, and manages cryptographic keys, while Secrets Manager lets you put stuff in a vault and retrieve it when needed. A nice feature of AWS Secrets Manager is that it can connect with a CI/CD tool like GitHub Actions through OpenID Connect (OIDC), and you can create different IAM roles with tightly scoped permissions, assigning them not only to individual repositories but to specific branches.
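To illustrate the SDK side mentioned above, here is a minimal sketch of my own (not from the article; the secret name is hypothetical) of reading a secret with the AWS SDK for Java v2:

Java
import software.amazon.awssdk.services.secretsmanager.SecretsManagerClient;

public class ReadSecret {
    public static void main(String[] args) {
        try (SecretsManagerClient client = SecretsManagerClient.create()) {
            // Credentials come from the usual AWS provider chain; the secret ID is made up
            String secretJson = client
                    .getSecretValue(b -> b.secretId("prod/catalog/db-credentials"))
                    .secretString();
            // Parse and use the value, but avoid printing or logging the secret itself
            System.out.println("Retrieved a secret of " + secretJson.length() + " characters");
        }
    }
}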
AWS Secrets Manager can store and retrieve non-AWS secrets as well as use the roles to provide access to AWS services to a CI/CD tool like GitHub Actions. Using AWS Lambda, key rotation can be automated, which is probably the most efficient way, as the key is updated in the secrets manager milliseconds after it's changed, producing the minimum amount of disruption. As with any AWS solution, it's a good idea to create multi-region or multi-availability-zone replicas of your secrets, so if your secrets are destroyed by a fire or taken offline by an absent-minded backhoe operator, you can fail over to a secondary source automatically. At $0.40 per secret per month, it's not a huge cost for added resiliency.

Azure Key Vault

Azure is the #2 player in the cloud space after AWS. Their promotional literature touts compatibility with FIPS 140-2 standards and Hardware Security Modules (HSMs), showing a focus on customers who are either government agencies or do business with government agencies. This is not to say that their competitors are not suitable for government or government-adjacent solutions, but Microsoft pushes that out of the gate as a key feature. Identity-managed access, auditability, differentiated vaults, and encryption at rest and in transit are all features they share with competitors.

As with most Microsoft products, it tries to be very Microsoft and will more than likely appeal to .NET developers who already use Microsoft tools and services. While it does offer a REST API, the selection of officially blessed client libraries (Java, .NET, Spring, Python, and JavaScript) is thinner than you'll find with AWS or GCP. As noted in the AWS and GCP entries, a big factor in your decision will be which cloud provider is getting your dominant investment of time and money. And if you're using Azure because you're a Microsoft shop with a strong investment in .NET, then the choice will be obvious.

Doppler

While CyberArk's Conjur (discussed above) started as a solo product that was acquired and integrated into a larger suite, Doppler currently remains a standalone key vault solution. That might be attractive for some because it's cloud-provider agnostic, coding-language agnostic, and has to compete on its merits instead of being the default secrets manager for a larger package of services. It offers logging, auditing, and encryption at rest and in transit. Besides selling its abilities, it sells its SOC compliance and remediation functionalities on its front page, and when you dig deeper, a list of integrations as long as your arm testifies to its usefulness with a wide variety of services; its list of SDKs is also more robust than Azure's.

Doppler seems to rely strongly on injecting environment variables, which can make a lot of your coding easier at the cost of the environment variables potentially ending up in run logs or crash dumps. Understanding how the systems you use it with treat environment variables, how to scope them, and the best ways to implement it with them will be part of the learning curve in adopting it.

Infisical

Like Doppler, Infisical uses environment variable injection. Similar to the Dotenv package for Node, when used in Node, it injects them at run time into the process object of the running app so they're not readable by any other processes or users. They can still be revealed by a crash dump or logging, so that is a caveat to consider in your code and build scripts.
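To make the environment-variable injection model concrete, here is a generic sketch of my own (not specific to Doppler or Infisical, and the variable name is made up): the secrets manager's CLI or agent injects the value at run time, and the application simply reads it, taking care never to echo it.

Java
public class DbConfig {
    public static void main(String[] args) {
        // Injected by the secrets manager's CLI/agent at run time; the name is hypothetical
        String dbUrl = System.getenv("DATABASE_URL");
        if (dbUrl == null || dbUrl.isBlank()) {
            throw new IllegalStateException("DATABASE_URL is not set");
        }
        // Use the value, but keep it out of logs, error messages, and crash reports
        System.out.println("Database URL loaded (" + dbUrl.length() + " characters)");
    }
}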
Infisical offers other features besides a secrets vault, such as configuration sharing for developer teams and secrets scanning for your codebase, git history, and as a pre-commit hook. You might ask why someone writing for GitGuardian would mention a product with a competing feature. Aside from the scanning, their secrets and configuration vault/sharing model offers virtual secrets, over 20 cloud integrations, nine CI/CD integrations, over a dozen framework integrations, and SDKs for four programming languages. Their software is mostly open source, and there is a free tier, but features like audit logs, RBAC, and secrets rotation are only available to paid subscribers.

Akeyless

Akeyless goes all out on features, providing a wide variety of authentication and authorization methods for how the keys and secrets it manages can be accessed. It supports standards like RBAC and OIDC as well as third-party services like AWS IAM and Microsoft Active Directory. It keeps up with the competition in providing encryption at rest and in transit, real-time access to secrets, short-lived secrets and keys, automated rotation, and auditing. It also provides features like just-in-time zero-trust access, a password manager for browser-based access control, and password sharing with short-lived, auto-expiring passwords for third parties that can be tracked and audited. In addition to 14 different authentication options, it offers seven different SDKs and dozens of integrations for platforms ranging from Azure to MongoDB to Remote Desktop Protocol. They offer a reasonable free tier that includes three days of log retention (as opposed to other platforms where it's a paid feature only).

1Password

You might be asking, "Isn't that just a password manager for my browser?" If you think that's all they offer, think again. They offer consumer, developer, and enterprise solutions, and what we're going to look at is their developer-focused offering. Aside from zero-trust models, access control models, integrations, and even secrets scanning, one of their claims that stands out on the developer page is "Go ahead – commit your .env files with confidence." This stands out because .env files committed to source control are a serious source of secret sprawl. So, how are they making that safe? You're not putting secrets into your .env files. Instead, you're putting references to your secrets that allow them to be loaded from 1Password using their services and access controls. This is somewhat ingenious, as it combines a format a lot of developers know well with 1Password's access controls. It's not plug-and-play and requires a bit of a learning curve, but familiarity doesn't always breed contempt. Sometimes it breeds confidence.

While it has a limited number of integrations, it covers some of the biggest Kubernetes and CI/CD options. On top of that, it has dozens and dozens of "shell plugins" that help you secure local CLI access without having to store plaintext credentials in ~/.aws or another "hidden" directory. And yes, we mentioned they offer secrets scanning as part of their offering; again, you might ask why someone writing for GitGuardian would mention a product with a competing feature.

HashiCorp Vault

HashiCorp Vault offers secrets management, key management, and more. It's a big solution with a lot of features and a lot of options. Besides encryption, role/identity-based secrets access, dynamic secrets, and secrets rotation, it offers data encryption and tokenization to protect data outside the vault.
It can act as an OIDC provider for back-end connections and sports a whopping seventy-five integrations in its catalog for the biggest cloud and identity providers. It's also one of the few to offer its own training and certification path if you want to add being HashiCorp Vault certified to your resume. It has a free tier for up to 25 secrets and limited features. Once you get past that, it can get pricey, with monthly fees of $1,100 or more to rent a cloud server at an hourly rate.

In Summary

Whether it's one of the solutions we recommended or another solution that meets the recommendations of what to look for above, we strongly recommend integrating a secrets management tool into your development processes. If you still need more convincing, we'll leave you with this video featuring GitGuardian's own Mackenzie Jackson.
Establishing a Highly Available Kubernetes Cluster on AWS With Kops
By Raghava Dittakavi
Logging and Monitoring in AWS
By Aditya Bhuyan
IntelliJ and Java Spring Microservices: Productivity Tips With GitHub Copilot
By Amol Gote
Exploring the Horizon of Microservices With KubeMQ's New Control Center
The software development landscape is rapidly evolving. New tools, technologies, and trends are always bubbling to the top of our workflows and conversations. One of those paradigm shifts that has become more pronounced in recent years is the adoption of microservices architecture by countless organizations. Managing microservices communication has been a sticky challenge for many developers. As a microservices developer, I want to focus my efforts on the core business problems and functionality that my microservices need to achieve. I'd prefer to offload the inter-service communication concerns—just like I do with authentication or API security. So, that brings me to the KubeMQ Control Center (KCC). It's a service for managing microservices communication that's quick to set up and designed with an easy-to-use UI. In this article, I wanted to unpack some of the functionality I explored as I tested it in a real-world scenario.

Setting the Scene

Microservices communication presents a complex challenge, akin to orchestrating a symphony with numerous distinct instruments. It demands precision and a deep understanding of the underlying architecture. Fortunately, KCC—with its no-code setup and Kubernetes-native integration—aims to abstract away this complexity. Let's explore how it simplifies microservices messaging.

Initial Setup and Deployment

Deploy KubeMQ Using Docker

The journey with KCC starts with a Docker-based deployment. This process is straightforward:

Shell
$ docker run -d \
  -p 8080:8080 \
  -p 50000:50000 \
  -p 9090:9090 \
  -e KUBEMQ_TOKEN=(add token here) kubemq/kubemq

This command sets up KubeMQ, aligning the necessary ports and establishing secure access.

Send a "Hello World" Message

After deployment, you can access the KubeMQ dashboard in your browser at http://localhost:8080/. Here, you have a clean, intuitive UI to help you manage your microservices. We can send a "Hello World" message to test the waters. In the Dashboard, click Send Message and select Queues. We set a channel name (q1) and enter "hello world!" in the body. Then, we click Send. Just like that, we successfully created our first message! And it's only been one minute since we deployed KubeMQ and started using KCC.

Pulling a Message

Retrieving messages is a critical aspect of any messaging platform. From the Dashboard, select your channel to open the Queues page. Under the Pull tab, click Pull to retrieve the message that you just sent. The process is pretty smooth and efficient. We can review the message details for insights into its delivery and content.

Send "Hello World" With Code

Moving beyond the UI, we can send a "Hello World" message programmatically too, for example, using C#. KCC integrates with most of the popular programming languages, which is essential for diverse development environments. Here are the supported languages, with links to code samples and SDKs: C# and .NET, Java, Go, Node.js, and Python.

Deploying KubeMQ in Kubernetes

Transitioning to Kubernetes with KCC is pretty seamless, too. KubeMQ is designed with scalability and the developer in mind. Here's a quick guide to getting started.

Download KCC

Download KCC from KubeMQ's account area. They offer a 30-day free trial so you can do a comprehensive evaluation.

Unpack the Zip File

Shell
$ unzip kcc_mac_apple.zip -d /kubemq/kcc

Launch the Application

Shell
$ ./kcc

The above step integrates you into the KubeMQ ecosystem, which is optimized for Kubernetes.
Add a KubeMQ Cluster

Adding a KubeMQ cluster is crucial for scaling and managing your microservices architecture effectively.

Monitor Cluster Status

The dashboard provides an overview of your KubeMQ components, essential for real-time system monitoring.

Explore Bridges, Targets, and Sources

KCC has advanced features like Bridges, Targets, and Sources, which serve as different types of connectors between KubeMQ clusters, external messaging systems, and external cloud services. These tools will come in handy when you have complex data flows and system integrations, as many microservices architectures do.

Conclusion

That wraps up our journey through KubeMQ's Control Center. Dealing with the complexities of microservices communication can be a burden, taking the developer away from core business development. Developers can offload that burden to KCC. With its intuitive UI and suite of features, KCC helps developers be more efficient as they build their applications on microservice architectures. Of course, we've only scratched the surface here. Unlocking the true potential of any tool requires deeper exploration and continued use. For that, you can check out KubeMQ's docs site. Or you can build on what we've shown above, continuing to play around on your own. With the right tools in your toolbox, you'll quickly be up and running with a fleet of smoothly communicating microservices! Have a really great day!

By John Vester
Automate DNS Records Creation With ExternalDNS on AWS Elastic Kubernetes Service
ExternalDNS is a handy tool in the Kubernetes world, making it easy to coordinate Services and Ingresses with different DNS providers. This tool automates the process, allowing users to manage DNS records dynamically using Kubernetes resources. Instead of being tied to a specific provider, ExternalDNS works seamlessly with various providers. ExternalDNS intelligently determines the desired DNS records, paving the way for effortless DNS management. In this article, we'll explore what ExternalDNS is all about and why it's useful. Focusing on a specific situation — imagine a Kubernetes cluster getting updated and using Route 53 in AWS — we'll walk you through how ExternalDNS can automatically create DNS records in Route 53 whenever Ingresses are added. Come along for a simplified journey into DNS management and automation with ExternalDNS.

A high-level illustration of the creation of DNS records in Route 53 using ExternalDNS on EKS

The Steps to Deploy ExternalDNS and Ingress

Deploying ExternalDNS and Ingress involves several steps. Below are the general steps to deploy ExternalDNS in a Kubernetes (EKS) cluster.

1. Create IAM Policy and Role

Create an IAM policy and role with the necessary permissions for ExternalDNS to interact with Route 53.

YAML
# External DNS policy to allow interaction with Route 53
ExternalDnsPolicy:
  Type: AWS::IAM::ManagedPolicy
  Properties:
    Description: External DNS controller policy
    PolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Sid: PermitR53Listings
          Action:
            - route53:ListResourceRecordSets
            - route53:ListHostedZones
          Resource: '*'
        - Effect: Allow
          Sid: PermitR53Changes
          Action:
            - route53:ChangeResourceRecordSets
          Resource: arn:aws:route53:::hostedzone/*

# IAM Role for External DNS
rExternalDnsRole:
  Type: AWS::IAM::Role
  Properties:
    RoleName: "ExternalDns-Role"
    AssumeRolePolicyDocument:
      Fn::Sub:
        - |
          {
            "Version": "2012-10-17",
            "Statement": [
              {
                "Effect": "Allow",
                "Principal": {
                  "Federated": "arn:aws:iam::<ACCOUNT_NUMBER>:oidc-provider/<OIDC_PROVIDER>"
                },
                "Action": "sts:AssumeRoleWithWebIdentity",
                "Condition": {
                  "StringEquals": {
                    "<<EKS Cluster Id>>": "system:serviceaccount:kube-system:external-dns"
                  }
                }
              }
            ]
          }
        - clusterid: !Sub "<<EKS Issuer>>:sub"
          providerarn:
    Path: /
    ManagedPolicyArns:
      - !Ref ExternalDnsPolicy

2. Deploy ExternalDNS

Deploy a service account that is mapped to the IAM role created in the previous step. Use kubectl apply -f service_account.yaml to deploy the service account.

service_account.yaml:

YAML
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-addon: external-dns.addons.k8s.io
    k8s-app: external-dns
  name: external-dns
  namespace: kube-system
  annotations:
    eks.amazonaws.com/role-arn: <<provide the IAM Role ARN created in the above step>>

To check the name of your service account, run the following command:

Plain Text
kubectl get sa

Example output:

Plain Text
NAME           SECRETS   AGE
default        1         1h
external-dns   1         1h

In the example output above, 'external-dns' is the name assigned to the service account during its creation.
Run the following command:

Plain Text
kubectl apply -f external_dns.yaml

external_dns.yaml file:

YAML
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
  labels:
    app.kubernetes.io/name: external-dns
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints", "pods", "nodes"]
    verbs: ["get", "watch", "list"]
  - apiGroups: ["extensions", "networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
  - kind: ServiceAccount
    name: external-dns
    namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
  namespace: kube-system
  labels:
    app: external-dns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
        - name: external-dns
          image: registry.k8s.io/external-dns/external-dns:v0.13.5
          args:
            - --source=service
            - --source=ingress
            - --provider=aws
            - --aws-zone-type=private
            - --registry=txt
            - --txt-owner-id=external-dns-addon
            - --domain-filter=<< provide host zone id >> # will make ExternalDNS see only the hosted zones matching the provided domain
            - --policy=upsert-only
          env:
            - name: AWS_REGION
              value: us-east-1
          resources:
            limits:
              cpu: 300m
              memory: 400Mi
            requests:
              cpu: 200m
              memory: 200Mi
          imagePullPolicy: "Always"

Verify that the deployment was successful:

Plain Text
kubectl get deployments

Example output:

Plain Text
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
external-dns   1/1     1            1           15m

Check the logs to verify that the records are up to date:

Plain Text
kubectl logs external-dns-7f34d6d1b-sx4fx

Plain Text
time="2024-02-15T20:22:02Z" level=info msg="Instantiating new Kubernetes client"
time="2024-02-15T20:22:02Z" level=info msg="Using inCluster-config based on serviceaccount-token"
time="2024-02-15T20:22:02Z" level=info msg="Created Kubernetes client https://10.100.0.1:443"
time="2024-02-15T20:22:09Z" level=info msg="Applying provider record filter for domains: [<yourdomainname>.com. .<yourdomainname>.com.]"
time="2024-02-15T20:22:09Z" level=info msg="All records are already up to date"

Deploying an Ingress

Creating an Ingress template for AWS load balancers involves several key components to ensure an effective configuration:

Rules: Define routing rules specifying how traffic is directed based on paths or hosts.
Backend services: Specify backend services to handle the traffic, including service names and ports.
Health checks: Implement health checks to ensure the availability and reliability of backend services.

We'll walk through each component, detailing its significance and providing examples to create a comprehensive Ingress template for AWS load balancers. This step-by-step approach ensures a well-structured and functional configuration tailored to your specific application needs.
YAML
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sample-ingress
  annotations:
    kubernetes.io/ingress.class: "alb"
    alb.ingress.kubernetes.io/scheme: "internet-facing or internal"
    alb.ingress.kubernetes.io/certificate-arn: "arn:aws:acm:your-region:your-account-id:certificate/your-acm-cert-arn"
spec:
  rules:
    - host: "app.external.dns.test.com"
      http:
        paths:
          - path: /*
            pathType: Prefix
            backend:
              service:
                name: default-service
                port:
                  number: 80
          - path: /products
            pathType: Prefix
            backend:
              service:
                name: products-service
                port:
                  number: 80
          - path: /accounts
            pathType: Prefix
            backend:
              service:
                name: accounts-service
                port:
                  number: 80

metadata: Specifies the name of the Ingress and includes annotations for AWS-specific settings.
kubernetes.io/ingress.class: "alb": Specifies the Ingress class to be used, indicating that the Ingress should be managed by the AWS ALB Ingress Controller.
alb.ingress.kubernetes.io/scheme: Determines whether the ALB should be internet-facing or internal. Options: "internet-facing" (the ALB is accessible from the internet) and "internal" (the ALB is internal and not accessible from the internet).
alb.ingress.kubernetes.io/certificate-arn: Specifies the ARN (Amazon Resource Name) of the ACM (AWS Certificate Manager) certificate to be associated with the ALB.
spec.rules: Defines routing rules based on the host. The /* rule directs traffic to the default service, while /products and /accounts have specific rules for the products and accounts services.
pathType: Specifies the type of matching for the path.
backend.service.name and backend.service.port: Specify the backend service and port for each rule.

ExternalDNS simplifies DNS management in Kubernetes by automating the creation and updating of DNS records based on changes to Ingress resources. For instance, when creating an Ingress with the hostname 'app.external.dns.test.com', ExternalDNS actively monitors these changes and dynamically creates the corresponding DNS records in Amazon Route 53 (R53). This automation ensures that DNS entries align seamlessly with the evolving environment, eliminating manual interventions. After successfully deploying the ExternalDNS and Ingress template mentioned above, the corresponding hosted zone and records are automatically created.

Conclusion

ExternalDNS emerges as a pivotal solution for simplifying and automating DNS management within Kubernetes environments. By seamlessly connecting Ingress resources with DNS providers like Amazon Route 53, ExternalDNS eliminates the complexities of manual record management. Its dynamic approach ensures that DNS entries stay synchronized with the evolving Kubernetes landscape, providing a hassle-free experience for users. The tool's versatility and ease of integration make it an invaluable asset for streamlining operations and maintaining a consistent and up-to-date DNS infrastructure. As organizations embrace cloud-native architectures, ExternalDNS stands out as an essential component for achieving efficient and automated DNS management.

By KONDALA RAO PATIBANDLA
Building gdocweb With Java 21, Spring Boot 3.x, and Beyond
Starting a new project is always a mix of excitement and tough decisions, especially when you're stitching together familiar tools like Google Docs with powerhouses like GitHub Pages. This is the story of building gdocweb, a tool that I hoped would make life easier for many. I'll be diving into why I chose Java 21 and Spring Boot 3.x, ditched GraalVM after some trial and error, and why a simple VPS with Docker Compose won out over more complex options. I also went with Postgres and JPA, but steered clear of migration tools like Flyway. It's a no-frills, honest recount of the choices, changes, and the occasional "aha" moments of an engineer trying to make something useful and efficient.

Introducing gdocweb

Before we dive into the technical intricacies and the decision-making labyrinth of building gdocweb, let's set the stage by understanding what gdocweb is and the problem it solves. In simple terms, gdocweb connects Google Docs to GitHub Pages. It's a simple web builder that generates free sites with all the raw power of GitHub behind it, and all the usability of Google Docs. I decided to build gdocweb to eliminate the complexities typically associated with website building and documentation. It's for users who seek a hassle-free way to publish and maintain their content, but also for savvy users who enjoy the power of GitHub but don't want to deal with Markdown nuances. Here's a short video explaining gdocweb for the general public.

Java 21 and Spring Boot 3.x: Innovation and Maturity

When you're spearheading a project on your own like I was with gdocweb, you have the liberty to make technology choices that might be more challenging in a team or corporate environment. This freedom led me to choose Java 21 and Spring Boot 3.x for this project. The decision to go with the current Long-Term Support (LTS) version of Java was a no-brainer. It's always tempting to use the latest and greatest, but with Java 21 it wasn't just about using something new; it was about leveraging a platform that has stood the test of time and has evolved to meet modern development needs. Virtual threads were a major part of the decision to go with Java 21. Cost is a huge factor in such projects, and squeezing the maximum throughput from a server is crucial in these situations.

Java, being a mature technology, offered a sense of reliability even in its latest iteration. Similarly, Spring Boot 3.x, despite being a newer version, comes from a lineage of robust and well-tested frameworks. It's a conservative choice in the sense of its long-standing reputation, but innovative in its features and capabilities.

However, this decision wasn't without its hiccups. During the process of integrating Google API access, I had to go through a CASA Tier 2 security review. Here's where the choice of Java 21 threw a curveball: the review tool was tailored for JDK 11, and although it worked with JDK 21, it still added a bit of stress to the process. It was a reminder that when you're working with cutting-edge versions of technologies, there can be unexpected bumps along the road, even when they are as mature as Java.

The transition to Spring Boot 3.x had its own set of challenges, particularly with the changes in security configurations. These modifications rendered most online samples and guides obsolete, breaking a lot of what I had initially set up. It was a learning curve, adjusting to these changes and figuring out the new way of doing things. However, most other aspects were relatively simple, and the best compliment I can give Spring Boot 3.x is that it's very similar to Spring Boot 2.x.
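As a brief aside on the virtual-threads point above, here is a minimal sketch of my own (not code from gdocweb) of the Java 21 feature in question: each submitted task gets its own cheap virtual thread, so blocking calls no longer tie up scarce platform threads, which is what makes squeezing more throughput out of a single small server plausible.

Java
import java.time.Duration;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class VirtualThreadsDemo {
    public static void main(String[] args) {
        // One virtual thread per task; the executor is closed (and awaited) by try-with-resources
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, 10_000).forEach(i -> executor.submit(() -> {
                Thread.sleep(Duration.ofMillis(100)); // stands in for a blocking I/O call
                return i;
            }));
        }
    }
}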
GraalVM Native Image for Efficiency

My interest in GraalVM native image for gdocweb was primarily driven by its promise of reduced memory usage and faster startup times. The idea was that with lower memory requirements, I could run more server instances, leading to better scalability and resilience. Faster startup times also meant quicker recovery from failures, a crucial aspect of maintaining a reliable service.

Implementing GraalVM

Getting GraalVM to work was nontrivial but not too hard. After some trial and error, I managed to set up a Continuous Integration (CI) process that built the GraalVM project and uploaded it to Docker. This was particularly necessary because I'm using an M2 Mac, while my server runs on Intel architecture. This setup meant I had to deal with an 18-minute wait time for each update – a significant delay for any development cycle.

Facing the Production Challenges

Things started getting rocky when I began testing the project in the production and staging environments. It became a "whack-a-mole" scenario with missing library code from the native image. Each issue fixed seemed to only lead to another, and the 18-minute cycle for each update added to the frustration. The final straw was realizing the incompatibility issues with the Google API libraries. Solving these issues would require extensive testing on a GraalVM build, which was already burdened by slow build times. For a small project like mine, this became a bottleneck too cumbersome to justify the benefits.

The Decision To Move On

While GraalVM seemed ideal on paper for saving resources, the reality was different. It consumed my limited GitHub Actions minutes and required extensive testing, which was impractical for a project of this scale. Ultimately, I decided to abandon the GraalVM route. If you do choose to use GraalVM, this is the GitHub Actions script I used; I hope it can help you with your journey:

YAML
name: Java CI with Maven

on:
  push:
    branches: [ "master" ]
  pull_request:
    branches: [ "master" ]

jobs:
  build:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:latest
        env:
          POSTGRES_PASSWORD: yourpassword
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v3
      - uses: graalvm/setup-graalvm@v1
        with:
          java-version: '21'
          version: '22.3.2'
          distribution: 'graalvm'
          cache: 'maven'
          components: 'native-image'
          native-image-job-reports: 'true'
          github-token: ${{ secrets.GITHUB_TOKEN }}
      - name: Wait for PostgreSQL
        run: sleep 10
      - name: Build with Maven
        run: mvn -Pnative native:compile
      - name: Build Docker Image
        run: docker build -t autosite:latest .
      - name: Log in to Docker Hub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Push Docker Image
        run: |
          docker tag autosite:latest mydockeruser/autosite:latest
          docker push mydockeruser/autosite:latest

This configuration was a crucial part of my attempt to leverage GraalVM's benefits, but as the project evolved, so did my understanding of the trade-offs between idealism in technology choice and practicality in deployment and maintenance.

Deployment: VPS and Docker Compose

When it came to deploying gdocweb, I had a few paths to consider.
Each option came with its pros and cons, but after careful evaluation, I settled on using a Virtual Private Server (VPS) with Docker Compose. Here's a breakdown of my thought process and why this choice made the most sense for my needs.

Avoiding Raw VPS Deployment

I immediately ruled out the straightforward approach of installing the application directly on a VPS. This method fell short in terms of migration ease, testing, and flexibility. Containers offer a more streamlined and efficient approach. They provide a level of abstraction and consistency across different environments, which is invaluable.

Steering Clear of Managed Containers and Orchestration

Managed containers and orchestration (e.g., Kubernetes) were another option, and while they offer scalability and ease of management, they introduce complexity in other areas. For instance, using a managed Kubernetes service would often mean relying on cloud storage for databases, which can get expensive quickly. My philosophy was to focus on cost before horizontal scale, especially in the early stages of a project. If we don't optimize and stabilize when we're small, the problems will only get worse as we grow.

Scaling should ideally start with vertical scaling before moving to horizontal. Vertical scaling means more CPU/RAM, while horizontal scaling adds additional machines. Vertical scaling is not only more cost-effective but also crucial from a technical standpoint. It makes it easier to identify performance bottlenecks using simple profiling tools. In contrast, horizontal scaling can often mask these issues by adding more instances, which could lead to higher costs and hidden performance problems.

The Choice of Docker Compose

Docker Compose emerged as the clear winner for several reasons. It allowed me to seamlessly integrate the database and the application container. Their communication is contained within a closed network, adding an extra layer of security with no externally open ports. Moreover, the cost is fixed and predictable, with no surprises based on usage. This setup offered me the flexibility and ease of containerization without the overhead and complexity of more extensive container orchestration systems. It was the perfect middle ground, providing the necessary features without overcomplicating the deployment process. By using Docker Compose, I maintained control over the environment and kept the deployment process straightforward and manageable. This decision aligned perfectly with the overall ethos of gdocweb – simplicity, efficiency, and practicality.

Front-End: Thymeleaf Over Modern Alternatives

The front-end development of gdocweb presented a bit of a challenge for me. In an era where React and similar frameworks are dominating the scene, opting for Thymeleaf might seem like a step back. However, this decision was based on practical considerations and a clear understanding of the project's requirements and my strengths as a developer.

React: Modern but Not a One-Size-Fits-All Solution

React is undeniably modern and powerful, but it comes with its own set of complexities. My experience with React is akin to that of many developers dabbling outside their comfort zone: functional but not exactly proficient. I've seen the kind of perplexed expressions from seasoned React developers when they look at my code, much like the ones I have when I'm reading complex Java code written by others.
React's learning curve, coupled with its slower performance in certain scenarios and the risk of not achieving an aesthetically pleasing result without deep expertise, made me reconsider its suitability for gdocweb.

The Appeal of Thymeleaf

Thymeleaf, on the other hand, offered a more straightforward approach, aligning well with the project's ethos of simplicity and efficiency. Its HTML-based interfaces, while perhaps seen as antiquated next to frameworks like React, come with substantial advantages:

Simplicity in page flow: Thymeleaf provides an easy-to-understand and easy-to-debug flow, making it a practical choice for a project like this.
Performance and speed: It's known for its fast performance, which is a significant factor in providing a good user experience.
No need for NPM: Thymeleaf eliminates the need for additional package management, reducing complexity and potential vulnerabilities.
Lower risk of client-side vulnerabilities: The server-side nature of Thymeleaf inherently reduces the risk of client-side issues.

Considering HTMX for Dynamic Functionality

The idea of incorporating HTMX for some dynamic behavior in the front end did cross my mind. HTMX has been on my radar for a while, promising to add dynamic functionalities easily. However, I had to ask myself if it was truly necessary for a tool like gdocweb, which is essentially a straightforward wizard. My conclusion was that opting for HTMX might be more of Resume Driven Design (RDD) on my part, rather than a technical necessity.

In summary, the choice of Thymeleaf was a blend of practicality, familiarity, and efficiency. It allowed me to build a fast, simple, and effective front end without the overhead and complexity of more modern frameworks, which, while powerful, weren't necessary for the scope of this project.

Final Word

The key takeaway in this post is the importance of practicality in technology choices. When we're building our own projects, it's much easier to experiment with newer technologies, but this is a slippery slope. We need to keep our feet grounded in familiar territories while experimenting. My experience with GraalVM highlights the importance of aligning technology choices with project needs and being flexible in adapting to challenges. It's a reminder that in technology, sometimes the simpler, tried-and-tested paths can be the most effective.

By Shai Almog
My ModelMapper Cheat Sheet
As the title says, this article will list my cheat sheet for ModelMapper. It will not provide any deep-dive tutorials or fancy descriptions, just some use cases.

Models Used for This Article

Any time you see User, UserDTO, or LocationDTO in the code, refer to this section.

Java
@Data
@NoArgsConstructor
@AllArgsConstructor
public class User {
    private String firstName;
    private String lastName;
    private List<String> subscriptions;
    private String country;
    private String city;
}

Java
@Data
@NoArgsConstructor
@AllArgsConstructor
public class UserDTO {
    private String firstName;
    private String secondName;
    private String subscriptions;
    private LocationDTO location;
}

Java
@Data
@NoArgsConstructor
@AllArgsConstructor
public class LocationDTO {
    private String country;
    private String city;
}

Basic typeMap Usage and addMapping

Use typeMap instead of createTypeMap when custom mapping is needed. Here is an example of how to map the lastName field to the secondName field:

Java
public UserDTO convert(User user) {
    return modelMapper.typeMap(User.class, UserDTO.class)
            .addMapping(User::getLastName, UserDTO::setSecondName)
            .map(user);
}

The Actual createTypeMap and getTypeMap Usage

If you use createTypeMap, ensure you also use getTypeMap. It is easy to forget, as everyone provides examples with createTypeMap, but calling it twice will throw an exception.

Java
public class CreateTypeMapConverter {

    private final ModelMapper modelMapper;

    public CreateTypeMapConverter(ModelMapper modelMapper) {
        this.modelMapper = modelMapper;
        // can be moved to a configuration class
        var typeMap = modelMapper.createTypeMap(User.class, UserDTO.class);
        typeMap.addMapping(User::getLastName, UserDTO::setSecondName);
    }

    public UserDTO convert(User user) {
        return modelMapper.getTypeMap(User.class, UserDTO.class).map(user);
    }
}

And it is always possible to do it lazily:

Java
public UserDTO convert(User user) {
    var typeMap = modelMapper.getTypeMap(User.class, UserDTO.class);
    if (typeMap == null) {
        typeMap = modelMapper.createTypeMap(User.class, UserDTO.class);
        typeMap.addMapping(User::getLastName, UserDTO::setSecondName);
    }
    return typeMap.map(user);
}

Adding Mapping for a Setter With a Converter, or Using using

Here is an example of how to convert a List<String> to a String with typeMap before the value is passed to the DTO's setter. For this, we call using with our converter logic:

Java
public UserDTO convertWithUsing(User user) {
    return modelMapper.typeMap(User.class, UserDTO.class)
            .addMappings(mapper -> {
                mapper.map(User::getLastName, UserDTO::setSecondName);
                mapper.using((MappingContext<List<String>, String> ctx) ->
                                ctx.getSource() == null ? null : String.join(",", ctx.getSource()))
                        .map(User::getSubscriptions, UserDTO::setSubscriptions);
            })
            .map(user);
}

The same will work with any entity that requires custom conversion before the setter.

ModelMapper Does Not Support Converting in addMapping

In the code sample below, the value in the setter will be null, so we should avoid using addMapping to convert anything.

Java
// will throw NPE as o is null
public UserDTO addMappingWithConversion(User user) {
    return modelMapper.typeMap(User.class, UserDTO.class)
            .addMapping(User::getLastName, UserDTO::setSecondName)
            // o is null
            .addMapping(User::getSubscriptions,
                    (UserDTO dest, List<String> o) -> dest.setSubscriptions(String.join(",", o)))
            .map(user);
}

But ModelMapper Supports Nested Setters

Here is an example of mapping the country and city fields to the nested object LocationDTO.
Java
public UserDTO convertNestedSetter(User user) {
    return modelMapper.typeMap(User.class, UserDTO.class)
            .addMapping(User::getLastName, UserDTO::setSecondName)
            .addMapping(User::getCountry, (UserDTO dest, String v) -> dest.getLocation().setCountry(v))
            .addMapping(User::getCity, (UserDTO dest, String v) -> dest.getLocation().setCity(v))
            .map(user);
}

Using preConverter and postConverter

Use these when you are left with no choice, or when you need to conditionally change the source or destination and it's hard or not possible with using. Here is a super-simplified example for preConverter:

Java
public UserDTO convertWithPreConverter(User user) {
    return modelMapper.typeMap(User.class, UserDTO.class)
            .setPreConverter(context -> {
                context.getSource().setFirstName("Joe");
                return context.getDestination();
            })
            .addMapping(User::getLastName, UserDTO::setSecondName)
            .map(user);
}

The same logic applies to postConverter:

Java
public UserDTO convertWithPostConverter(User user) {
    return modelMapper.typeMap(User.class, UserDTO.class)
            .setPostConverter(context -> {
                var location = new LocationDTO(context.getSource().getCountry(), context.getSource().getCity());
                context.getDestination().setLocation(location);
                return context.getDestination();
            })
            .addMapping(User::getLastName, UserDTO::setSecondName)
            .map(user);
}

Using With Builder

For immutable entities that use the builder pattern.

Model:

Java
@Data
@Builder
public class UserWithBuilder {
    private final String firstName;
    private final String lastName;
}

Java
@Data
@Builder
public final class UserWithBuilderDTO {
    private final String firstName;
    private final String secondName;
}

Converter:

Java
public class BuilderConverter {

    private final ModelMapper modelMapper;

    public BuilderConverter(ModelMapper modelMapper) {
        this.modelMapper = modelMapper;
        var config = modelMapper.getConfiguration().copy()
                .setDestinationNameTransformer(NameTransformers.builder())
                .setDestinationNamingConvention(NamingConventions.builder());
        var typeMap = modelMapper.createTypeMap(UserWithBuilder.class,
                UserWithBuilderDTO.UserWithBuilderDTOBuilder.class, config);
        typeMap.addMapping(UserWithBuilder::getLastName, UserWithBuilderDTO.UserWithBuilderDTOBuilder::secondName);
    }

    public UserWithBuilderDTO convert(UserWithBuilder user) {
        return modelMapper.getTypeMap(UserWithBuilder.class, UserWithBuilderDTO.UserWithBuilderDTOBuilder.class)
                .map(user)
                .build();
    }
}

But if all entities use the builder pattern, the configuration can be set up globally. The full source code for this article can be found on GitHub.
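As a closing aside, here is a quick usage sketch of the basic converter above. It is my own example rather than part of the original cheat sheet, and the field values are made up (subscriptions is left null to keep the focus on the name mapping):

Java
import org.modelmapper.ModelMapper;

public class CheatSheetDemo {
    public static void main(String[] args) {
        var modelMapper = new ModelMapper();
        var user = new User("Jane", "Doe", null, "Ireland", "Dublin");

        UserDTO dto = modelMapper.typeMap(User.class, UserDTO.class)
                .addMapping(User::getLastName, UserDTO::setSecondName)
                .map(user);

        System.out.println(dto.getFirstName() + " " + dto.getSecondName()); // "Jane Doe"
    }
}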

By Max Stepovyi
Kicking the Tires of Docker Scout

I never moved away from Docker Desktop. For some time now, after you use it to build an image, it has printed a message:

Plain Text
What's Next?
  View a summary of image vulnerabilities and recommendations → docker scout quickview

I decided to give it a try. I'll use the root commit of my OpenTelemetry tracing demo. Let's execute the proposed command:

Shell
docker scout quickview otel-catalog:1.0

Here's the result:

Plain Text
    ✓ Image stored for indexing
    ✓ Indexed 272 packages

  Target               │  otel-catalog:1.0       │    0C     2H    15M    23L
    digest             │  7adfce68062e           │
  Base image           │  eclipse-temurin:21-jre │    0C     0H    15M    23L
  Refreshed base image │  eclipse-temurin:21-jre │    0C     0H    15M    23L

What's Next?
  View vulnerabilities → docker scout cves otel-catalog:1.0
  View base image update recommendations → docker scout recommendations otel-catalog:1.0
  Include policy results in your quickview by supplying an organization → docker scout quickview otel-catalog:1.0 --org <organization>

Docker gives out exciting bits of information:

The base image contains 15 medium-severity vulnerabilities and 23 low-severity ones.
The final image has two additional high-severity vulnerabilities. Ergo, our code introduced them!

Following Scout's suggestion, we can drill down into the CVEs:

Shell
docker scout cves otel-catalog:1.0

This is the result:

Plain Text
    ✓ SBOM of image already cached, 272 packages indexed
    ✗ Detected 18 vulnerable packages with a total of 39 vulnerabilities

## Overview

                    │       Analyzed Image
  ──────────────────┼──────────────────────────────
  Target            │  otel-catalog:1.0
    digest          │  7adfce68062e
    platform        │  linux/arm64
    vulnerabilities │    0C     2H    15M    23L
    size            │  160 MB
    packages        │  272

## Packages and Vulnerabilities

   0C     1H     0M     0L  org.yaml/snakeyaml 1.33
pkg:maven/org.yaml/snakeyaml@1.33

    ✗ HIGH CVE-2022-1471 [Improper Input Validation]
      https://scout.docker.com/v/CVE-2022-1471
      Affected range : <=1.33
      Fixed version  : 2.0
      CVSS Score     : 8.3
      CVSS Vector    : CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:L

   0C     1H     0M     0L  io.netty/netty-handler 4.1.100.Final
pkg:maven/io.netty/netty-handler@4.1.100.Final

    ✗ HIGH CVE-2023-4586 [OWASP Top Ten 2017 Category A9 - Using Components with Known Vulnerabilities]
      https://scout.docker.com/v/CVE-2023-4586
      Affected range : >=4.1.0
                     : <5.0.0
      Fixed version  : not fixed
      CVSS Score     : 7.4
      CVSS Vector    : CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:H/A:N

The original output is much longer, but I stopped at the exciting bit: the two high-severity CVEs. The one coming from Netty has not been fixed yet — tough luck. However, Snake YAML fixed its CVE from version 2.0 onward. I'm not using Snake YAML directly; it's a transitive dependency brought in by Spring. Because of this, there is no guarantee that a major version upgrade will be compatible. But we can surely try. Let's bump the dependency to the latest version:

XML
<dependency>
    <groupId>org.yaml</groupId>
    <artifactId>snakeyaml</artifactId>
    <version>2.2</version>
</dependency>

We can build the image again and check that it still works. Fortunately, it does. We can execute the process again:

Shell
docker scout quickview otel-catalog:1.0

Lo and behold, the high-severity CVE is no more!

Plain Text
    ✓ Image stored for indexing
    ✓ Indexed 273 packages

  Target     │  local://otel-catalog:1.0-1  │    0C     1H    15M    23L
    digest   │  9ddc31cdd304                │
  Base image │  eclipse-temurin:21-jre      │    0C     0H    15M    23L

Conclusion

In this short post, we tried Docker Scout, Docker's image vulnerability detection tool. Thanks to it, we removed one high-severity CVE that we had introduced in the code.
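One note on the version bump above: if the project inherits versions from the Spring Boot parent (an assumption on my part, since the post does not show the full POM), the transitive version can also be pinned without declaring Snake YAML as a direct dependency, either through dependencyManagement or the snakeyaml.version property.

XML
<!-- Option 1: pin the transitive version in dependencyManagement -->
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.yaml</groupId>
            <artifactId>snakeyaml</artifactId>
            <version>2.2</version>
        </dependency>
    </dependencies>
</dependencyManagement>

<!-- Option 2: with the Spring Boot parent, override the managed version property -->
<properties>
    <snakeyaml.version>2.2</snakeyaml.version>
</properties>

Either way, docker scout quickview can be rerun after rebuilding the image to confirm that the CVE count drops, exactly as shown above.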
To Go Further

Docker Scout
4 Free, Easy-To-Use Tools For Docker Vulnerability Scanning

By Nicolas Fränkel DZone Core CORE
Build a Serverless GenAI Solution With Lambda, DynamoDB, LangChain, and Amazon Bedrock

In a previous blog, I demonstrated how to use Redis (ElastiCache Serverless as an example) as a chat history backend for a Streamlit app using LangChain. It was deployed to EKS and also made use of EKS Pod Identity to manage the application Pod's permissions for invoking Amazon Bedrock. The use case here is similar: a chat application. I will switch back to implementing things in Go using langchaingo (I used Python for the previous one) and continue to use Amazon Bedrock. But there are a few unique things you can explore in this blog post:

The chat application is deployed as an AWS Lambda function along with a Function URL.
It uses DynamoDB as the chat history store (aka Memory) for each conversation - I extended langchaingo to include this feature.
Thanks to the AWS Lambda Web Adapter, the application was built as a (good old) REST/HTTP API using a familiar library (in this case, Gin).
Another nice add-on was being able to combine the Lambda Web Adapter's streaming response feature with the Amazon Bedrock streaming inference API.

Deploy Using SAM CLI (Serverless Application Model)

Make sure you have the Amazon Bedrock prerequisites taken care of and the SAM CLI installed.

git clone https://github.com/abhirockzz/chatbot-bedrock-dynamodb-lambda-langchain
cd chatbot-bedrock-dynamodb-lambda-langchain

Run the following commands to build the function and deploy the entire app infrastructure (including the Lambda function, DynamoDB table, etc.):

sam build
sam deploy -g

Once deployed, you should see the Lambda Function URL in your terminal. Open it in a web browser and start conversing with the chatbot!

Inspect the DynamoDB table to verify that the conversations are being stored (each conversation ends up as a new item in the table with a unique chat_id):

aws dynamodb scan --table-name langchain_chat_history

The Scan operation is used for demonstration purposes only; using Scan in production is not recommended (a Query-based alternative is sketched at the end of this post).

Quick Peek at the Good Stuff

Using DynamoDB as the backing chat history store: Refer to the GitHub repository if you are interested in the implementation. To summarize, I implemented the required functions of the schema.ChatMessageHistory interface.

Lambda Web Adapter streaming response + LangChain streaming: I used the chains.WithStreamingFunc option with the chains.Call invocation and then let Gin Stream do the heavy lifting of handling the streaming response. Here is a sneak peek of the implementation (refer to the complete code here):

_, err = chains.Call(c.Request.Context(), chain, map[string]any{"human_input": message},
    chains.WithMaxTokens(8191),
    chains.WithStreamingFunc(func(ctx context.Context, chunk []byte) error {
        c.Stream(func(w io.Writer) bool {
            fmt.Fprint(w, string(chunk)) // Fprint avoids treating the chunk as a format string
            return false
        })
        return nil
    }))

Closing Thoughts

I really like the extensibility of LangChain. While I understand that langchaingo may not be as popular as the original Python version (I hope it will get there in due time), it's nice to be able to use it as a foundation and build extensions as required.

Previously, I had written about how to use the AWS Lambda Go Proxy API to run existing Go applications on AWS Lambda. The AWS Lambda Web Adapter offers similar functionality, but it has lots of other benefits, including response streaming and the fact that it is language agnostic.

Oh, and one more thing - I also tried a different approach to building this solution using the API Gateway WebSocket. Let me know if you're interested, and I would be happy to write it up!
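As a side note to the Scan caveat above, a single conversation can be fetched more cheaply with Query. The sketch below assumes chat_id is the table's partition key, which the post implies but does not state explicitly:

Shell
aws dynamodb query \
  --table-name langchain_chat_history \
  --key-condition-expression "chat_id = :c" \
  --expression-attribute-values '{":c": {"S": "<chat-id-from-the-ui>"}}'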
If you want to explore how to use Go for Generative AI solutions, you can read up on some of my earlier blogs:

Building LangChain applications with Amazon Bedrock and Go - An introduction
Serverless Image Generation Application Using Generative AI on AWS
Generative AI Apps With Amazon Bedrock: Getting Started for Go Developers
Use Amazon Bedrock and LangChain to build an application to chat with web pages

Happy building!

By Abhishek Gupta DZone Core CORE
Mastering Scalability and Performance: A Deep Dive Into Azure Load Balancing Options

As organizations increasingly migrate their applications to the cloud, efficient and scalable load balancing becomes pivotal for ensuring optimal performance and high availability. This article provides an overview of Azure's load balancing options, encompassing Azure Load Balancer, Azure Application Gateway, Azure Front Door Service, and Azure Traffic Manager. Each of these services addresses specific use cases, offering diverse functionalities to meet the demands of modern applications. Understanding the strengths and applications of these load-balancing services is crucial for architects and administrators seeking to design resilient and responsive solutions in the Azure cloud environment.

What Is Load Balancing?

Load balancing is a critical component in cloud architectures for various reasons. Firstly, it ensures optimized resource utilization by evenly distributing workloads across multiple servers or resources, preventing any single server from becoming a performance bottleneck. Secondly, load balancing facilitates scalability in cloud environments, allowing resources to be scaled based on demand by evenly distributing incoming traffic among available resources. Additionally, load balancers enhance high availability and reliability by redirecting traffic to healthy servers in the event of a server failure, minimizing downtime and ensuring accessibility. From a security perspective, load balancers implement features like SSL termination, protect backend servers from direct exposure to the internet, and aid in mitigating DDoS attacks and in threat detection and protection using Web Application Firewalls. Furthermore, efficient load balancing promotes cost efficiency by optimizing resource allocation, preventing the need for excessive server capacity during peak loads. Finally, dynamic traffic management across regions and geographic locations allows load balancers to adapt to changing traffic patterns, intelligently distributing traffic during high-demand periods and scaling down resources during low-demand periods, leading to overall cost savings.

Overview of Azure's Load Balancing Options

Azure Load Balancer: Unleashing Layer 4 Power

Azure Load Balancer is a Layer 4 (TCP, UDP) load balancer that distributes incoming network traffic across multiple virtual machines or Virtual Machine Scale Sets to ensure no single server is overwhelmed with too much traffic. There are two options: a Public Load Balancer, primarily used for internet-facing traffic, which also supports outbound connections, and an Internal (Private) Load Balancer, which balances traffic within a virtual network. The load balancer uses a five-tuple hash (source IP, source port, destination IP, destination port, protocol) to map incoming flows to backend instances.

Features

High availability and redundancy: Azure Load Balancer efficiently distributes incoming traffic across multiple virtual machines or instances in a web application deployment, ensuring high availability, redundancy, and even distribution, thereby preventing any single server from becoming a bottleneck. In the event of a server failure, the load balancer redirects traffic to healthy servers.
Outbound connectivity: The frontend IPs of a public load balancer can be used to provide outbound connectivity to the internet for backend servers and VMs. This configuration uses source network address translation (SNAT) to translate the virtual machine's private IP into the load balancer's public IP address, thus preventing outside sources from having a direct address to the backend instances.
Internal load balancing: Distribute traffic across internal servers within a Virtual Network (VNet); this ensures that services receive an optimal share of resources.
Cross-region load balancing: Azure Load Balancer facilitates the distribution of traffic among virtual machines deployed in different Azure regions, optimizing performance and ensuring low-latency access for users of global applications or services with a user base spanning multiple geographic regions.
Health probing and failover: Azure Load Balancer monitors the health of backend instances continuously, automatically redirecting traffic away from unhealthy instances, such as those experiencing application errors or server failures, to ensure seamless failover.
Port-level load balancing: For services running on different ports within the same server, Azure Load Balancer can distribute traffic based on the specified port numbers. This is useful for applications with multiple services running on the same set of servers.
Multiple front ends: Azure Load Balancer allows you to load balance services on multiple ports, multiple IP addresses, or both. You can use a public or internal load balancer to load balance traffic across a set of services like virtual machine scale sets or virtual machines (VMs).

High Availability (HA) ports in Azure Load Balancer play a crucial role in ensuring resilient and reliable network traffic management. These ports are designed to enhance the availability and redundancy of applications by providing failover capabilities and optimal performance. Azure Load Balancer achieves this by distributing incoming network traffic across multiple virtual machines to prevent a single point of failure.

Configuration and Optimization Strategies

Define a well-organized backend pool, incorporating healthy and properly configured virtual machines (VMs) or instances, and consider leveraging availability sets or availability zones to enhance fault tolerance and availability.
Define load balancing rules to specify how incoming traffic should be distributed. Consider factors such as protocol, port, and backend pool association.
Use session persistence settings when necessary to ensure that requests from the same client are directed to the same backend instance.
Configure health probes to regularly check the status of backend instances. Adjust probe settings, such as probing intervals and thresholds, based on the application's characteristics.
Choose between the Standard SKU and the Basic SKU based on the feature set required for your application.
Implement frontend IP configurations to define how the load balancer should handle incoming network traffic.
Implement Azure Monitor to collect and analyze telemetry data, set up alerts based on performance thresholds for proactive issue resolution, and enable diagnostics logging to capture detailed information about the load balancer's operations.
Adjust the idle timeout settings to optimize the connection timeout for your application. This is especially important for applications with long-lived connections.
Enable accelerated networking on virtual machines to take advantage of high-performance networking features, which can enhance the overall efficiency of the load-balanced application.

Azure Application Gateway: Elevating To Layer 7

Azure Application Gateway is a Layer 7 load balancer that provides advanced traffic distribution and web application firewall (WAF) capabilities for web applications.
Features

Web application routing: Azure Application Gateway allows for the routing of requests to different backend pools based on specific URL paths or host headers. This is beneficial for hosting multiple applications on the same set of servers.
SSL termination and offloading: Improve the performance of backend servers by transferring the resource-intensive task of SSL decryption to the Application Gateway, relieving backend servers of the decryption workload.
Session affinity: For applications that rely on session state, Azure Application Gateway supports session affinity, ensuring that subsequent requests from a client are directed to the same backend server for a consistent user experience.
Web Application Firewall (WAF): Implement a robust security layer by integrating the Azure Web Application Firewall with the Application Gateway. This helps safeguard applications from threats such as SQL injection, cross-site scripting (XSS), and other OWASP Top Ten vulnerabilities. You can define your own custom WAF firewall rules as well.
Auto-scaling: Application Gateway can automatically scale the number of instances to handle increased traffic and scale down during periods of lower demand, optimizing resource utilization.
Rewriting HTTP headers: Modify HTTP headers for requests and responses; adjusting these headers is essential for reasons including adding security measures, altering caching behavior, or tailoring responses to meet client-specific requirements.
Ingress Controller for AKS: The Application Gateway Ingress Controller (AGIC) enables the use of Application Gateway as the ingress for an Azure Kubernetes Service (AKS) cluster.
WebSocket and HTTP/2 traffic: Application Gateway provides native support for the WebSocket and HTTP/2 protocols.
Connection draining: This pivotal feature ensures the smooth and graceful removal of backend pool members during planned service updates or instances of backend health issues. It promotes seamless operations and mitigates potential disruptions by allowing the system to handle ongoing connections gracefully, maintaining optimal performance and user experience during transitional periods.

Configuration and Optimization Strategies

Deploy the instances in a zone-aware configuration, where available.
Use Application Gateway with Web Application Firewall (WAF) within a virtual network to protect inbound HTTP/S traffic from the internet.
Review the impact of the interval and threshold settings on health probes. Setting a lower interval puts a higher load on your service, and each Application Gateway instance sends its own health probes, so 100 instances probing every 30 seconds means 100 requests per 30 seconds.
Use Application Gateway for TLS termination. This improves the utilization of backend servers, because they don't have to perform TLS processing, and simplifies certificate management, because the certificate only needs to be installed on the Application Gateway.
When WAF is enabled, every request gets buffered until it fully arrives and is then validated against the ruleset. For large file uploads or large requests, this can result in significant latency, so enable WAF only after proper testing and validation.
Having appropriate DNS and certificate management for backend pools is crucial for improved performance.
Application Gateway is not billed while it is in a stopped state, so turn it off for dev/test environments.
Take advantage of autoscaling for performance benefits, and make sure instances scale in and scale out based on the workload to reduce cost.
Use Azure Monitor Network Insights to get a comprehensive view of health and metrics, which is crucial when troubleshooting issues.

Azure Front Door Service: Global-Scale Entry Management

Azure Front Door is a comprehensive content delivery network (CDN) and global application accelerator service that provides a range of use cases to enhance the performance, security, and availability of web applications. Azure Front Door supports four different traffic routing methods (latency, priority, weighted, and session affinity) to determine how your HTTP/HTTPS traffic is distributed between different origins.

Features

Global content delivery and acceleration: Azure Front Door leverages a global network of edge locations, employing caching mechanisms, compressing data, and utilizing smart routing algorithms to deliver content closer to end users, thereby reducing latency and enhancing overall responsiveness for an improved user experience.
Web Application Firewall (WAF): Azure Front Door integrates with Azure Web Application Firewall, providing a robust security layer to safeguard applications from common web vulnerabilities, such as SQL injection and cross-site scripting (XSS).
Geo filtering: In the Azure Front Door WAF, you can define a policy by using custom access rules for a specific path on your endpoint to allow or block access from specified countries or regions.
Caching: In Azure Front Door, caching plays a pivotal role in optimizing content delivery and enhancing overall performance. By strategically storing frequently requested content closer to end users at the edge locations, Azure Front Door reduces latency, accelerates the delivery of web applications, and promotes resource conservation across the entire content delivery network.
Web application routing: Azure Front Door supports path-based routing, URL redirect/rewrite, and rule sets. These help to intelligently direct user requests to the most suitable backend based on various factors such as geographic location, health of backend servers, and application-defined routing rules.
Custom domain and SSL support: Front Door supports custom domain configurations, allowing organizations to use their own domain names and SSL certificates for secure and branded application access.

Configuration and Optimization Strategies

Use WAF policies to provide global protection across Azure regions for inbound HTTP/S connections to a landing zone. Create a rule to block access to the health endpoint from the internet.
Ensure that the connection to the back end is re-encrypted, as Front Door does not support SSL passthrough.
Consider using geo-filtering in Azure Front Door.
Avoid combining Traffic Manager and Front Door, as they are used for different use cases.
Configure logs and metrics in Azure Front Door and enable WAF logs for debugging issues.
Leverage managed TLS certificates to streamline the costs and renewal process associated with certificates. The Azure Front Door service issues and rotates these managed certificates, ensuring a seamless and automated approach to certificate management, thereby enhancing security while minimizing operational overhead.
Use the same domain name on Front Door and your origin to avoid any issues related to request cookies or URL redirections.
Disable health probes when there is only one origin in an origin group.
It's recommended to monitor a webpage or location that you specifically designed for health monitoring.
Regularly monitor and adjust the instance count and scaling settings to align with actual demand, preventing overprovisioning and optimizing costs.

Azure Traffic Manager: DNS-Based Traffic Distribution

Azure Traffic Manager is a global DNS-based traffic load balancer that enhances the availability and performance of applications by directing user traffic to the most optimal endpoint.

Features

Global load balancing: Distribute user traffic across multiple global endpoints to enhance application responsiveness and fault tolerance.
Fault tolerance and high availability: Ensure continuous availability of applications by automatically rerouting traffic to healthy endpoints in the event of failures.
Routing: Supports various routing methods globally, including performance-based routing (which optimizes application responsiveness by directing traffic to the endpoint with the lowest latency), geographic routing (based on the geographic location of end users), priority-based routing, weighted routing, and more.
Endpoint monitoring: Regularly check the health of endpoints using configurable health probes, ensuring traffic is directed only to operational and healthy endpoints.
Service maintenance: You can carry out planned maintenance on your applications without downtime. Traffic Manager can direct traffic to alternative endpoints while the maintenance is in progress.
Subnet traffic routing: Define custom routing policies based on IP address ranges, providing flexibility in directing traffic according to specific network configurations.

Configuration and Optimization Strategies

Enable automatic failover to healthy endpoints in case of endpoint failures, ensuring continuous availability and minimizing disruptions.
Utilize appropriate traffic routing methods, such as Priority, Weighted, Performance, Geographic, and Multi-value, to tailor traffic distribution to specific application requirements.
Implement a custom page to use as a health check for your Traffic Manager.
If the Time to Live (TTL) interval of the DNS record is too long, consider adjusting the health probe timing or the DNS record TTL.
Consider nested Traffic Manager profiles. Nested profiles allow you to override the default Traffic Manager behavior to support larger, more complex application deployments.
Integrate with Azure Monitor for real-time monitoring and logging, gaining insights into the performance and health of Traffic Manager and its endpoints. (A short Azure CLI sketch of a Traffic Manager profile appears at the end of this article.)

How To Choose

When selecting a load balancing option in Azure, it is crucial to first understand the specific requirements of your application, including whether it necessitates layer 4 or layer 7 load balancing, SSL termination, and web application firewall capabilities. For applications requiring global distribution, options like Azure Traffic Manager or Azure Front Door are worth considering to efficiently achieve global load balancing. Additionally, it's essential to evaluate the advanced features provided by each load balancing option, such as SSL termination, URL-based routing, and application acceleration. Scalability and performance considerations should also be taken into account, as different load balancing options may vary in terms of throughput, latency, and scaling capabilities. Cost is a key factor, and it's important to compare pricing models to align with budget constraints.
Lastly, assess how well the chosen load balancing option integrates with other Azure services and tools within your overall application architecture. This comprehensive approach ensures that the selected load balancing solution aligns with the unique needs and constraints of your application.

Service | Global/Regional | Recommended traffic
Azure Front Door | Global | HTTP(S)
Azure Traffic Manager | Global | Non-HTTP(S) and HTTPS
Azure Application Gateway | Regional | HTTP(S)
Azure Load Balancer | Regional or Global | Non-HTTP(S) and HTTPS

Here is the decision tree for load balancing from Azure. Source: Azure
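To make the comparison concrete, here is a hedged Azure CLI sketch that creates the DNS-based option from the table: a performance-routed Traffic Manager profile with an HTTPS health probe and one Azure endpoint. The resource names are placeholders, and the flags should be double-checked against the current az reference.

Shell
# Performance-based routing with an HTTPS health probe on /health
az network traffic-manager profile create \
  --name demo-tm-profile \
  --resource-group demo-rg \
  --routing-method Performance \
  --unique-dns-name demo-tm-example \
  --protocol HTTPS --port 443 --path "/health"

# Attach an Azure endpoint (for example, an App Service or a public IP) to the profile
az network traffic-manager endpoint create \
  --name primary-endpoint \
  --profile-name demo-tm-profile \
  --resource-group demo-rg \
  --type azureEndpoints \
  --target-resource-id "<resource-id-of-the-endpoint>"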

By Shivaprasad Sankesha Narayana
How To Use AzureSignTool to Sign Executables With Azure DevOps

AzureSignTool is a code-signing utility that organizations use to secure their software. The signing tool is compatible with all major executable files and works with all OV and EV code signing certificates. It is mostly used with Azure DevOps because of its integration with Azure Key Vault, and that is exactly what this guide covers. Here, you will walk through the complete procedure for signing an executable using AzureSignTool in Azure DevOps.

Prerequisites for Signing With Azure DevOps

To use AzureSignTool to sign with Azure DevOps, you will need the following components and mechanisms configured:

A code signing certificate (prefer purchasing an EV cloud code signing certificate or an Azure Key Vault code signing certificate)
An Azure platform subscription
Azure Key Vault
An app registration in Azure Active Directory
Azure DevOps
AzureSignTool

Once you fulfill all the requirements, you can move forward with the signing procedure.

The Complete Process To Use AzureSignTool With Azure DevOps

To ease the process, we have divided it into six parts, each with sub-steps for quick completion. So, let's start with the procedure.

Part 1: Configuring the Azure Platform

Step 1: Sign in to your Azure platform account and create a resource group to better manage all associated resources.
Step 2: In your resource group, add the Azure Key Vault and write down its URL, which will be used later in this process. Click "+ Add," then search for "Key Vault" and click "Create."
Step 3: Enter the Key Vault details and click "Review + Create."
Step 4: Now, note the URL for further processing.

Part 2: Importing the Certificate

The code signing certificate must be available on your machine, as you'll import it into the Azure Key Vault.

Step 1: Under the settings, choose "Certificates" > "+ Generate/Import."
Step 2: Enter the details of your certificate. As we are importing it into the Azure Key Vault, the method of certificate creation should be "Import."
Step 3: After the import, your certificate details will look similar to the following snippet.

Part 3: Application Principal Configuration

The application principal configuration establishes a secure way of accessing the certificate. It helps eliminate the direct use of hard-coded credentials.

Step 1: From your Azure portal, navigate to Azure Active Directory (AD).
Step 2: Go to "App registrations" > "+ New registration."
Step 3: Register the application by entering a name and selecting an option from the supported account types section. For this tutorial, the "Accounts in this organizational directory only" option is selected.
Step 4: Click on "Register." After the app is created, note the application ID. This ID will be used as the client ID.

Part 4: Pairing the Client ID With a Secret

Step 1: Navigate to the app registration page and choose the "Certificates & secrets" option in the left panel. Then click "+ New client secret."
Step 2: Generate your secret and give it a descriptive name. In addition, copy and note the secret.

Part 5: Configuring Key Vault Access for the Principal

Step 1: Go to the Key Vault settings > "Access Policies" > "Add Access Policy."
Step 2: Define a new access policy for the registered application. To define the access policy, grant the following permissions.

Parameter | Permission
Key | Verify, Sign, Get, List
Secret | Get, List
Certificate | Get, List

Step 3: While configuring the policies, your interface will look similar to the following.
Step 4: Save the access policy settings.
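If you prefer to script this part, the same access policy can be granted with a single Azure CLI command; this is a hedged sketch, with the vault name and client ID as placeholders:

Shell
az keyvault set-policy \
  --name "<your-key-vault-name>" \
  --spn "<application-client-id>" \
  --key-permissions get list sign verify \
  --secret-permissions get list \
  --certificate-permissions get list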
So far, you have provided the application principal (client ID + secret) with access to Key Vault.

Part 6: Configuring Azure DevOps and Signing the Executable

To start signing executables with AzureSignTool in Azure DevOps, you first need the .NET Core global tooling; AzureSignTool is distributed as a .NET global tool. To install it, add the tool installation command to your Azure DevOps build (a hedged sketch of the install command and the signing script appears after the conclusion).

You'll need the following information to set up the signing process:

The Key Vault URL
The application ID (the client ID)
The secret associated with the app registration
The name of the imported certificate (or the certificate already available in the Azure Key Vault)
The list of executable files that you want to sign

Then, follow the steps below for signing.

Step 1: Open Azure DevOps and access the pipeline.
Step 2: Go to the "Library" menu.
Step 3: Click "+ Variable group" and add variables for the client secret, code signing certificate name, client ID, and Key Vault URL.
Step 4: While hovering over a variable name, you will see a lock icon. Click that lock icon to mark the variable as sensitive.
Step 5: Save all the defined variables.
Step 6: Use a signing script that references the variable names instead of the original values of the certificate, client ID, secret, and other parameters (see the sketch after the conclusion). Using variables also provides an added security advantage: the logs containing signing data will only disclose the variable names instead of the original client ID, secret, and certificate name, so integrity and confidentiality are retained.

As a result, whenever your build runs, it will run the script, access the certificate and key, and use AzureSignTool with Azure DevOps to sign the executables.

Conclusion

To sign executable files with AzureSignTool while using Azure DevOps, you need a code signing certificate that is compatible with the platform; an EV code signing certificate is recommended. In addition, a Key Vault, a platform subscription, and Active Directory configuration are also needed. Once you fulfill all the requirements, you can proceed with configuring the signing script. The process begins by setting up the Azure Key Vault and then importing the code signing certificate into it. Following that, an application is registered and an associated secret is generated. Additionally, the application and the Key Vault are securely connected. Lastly, variables are defined for every component, and the script to sign the executables is added to the Azure DevOps pipeline.
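The original command and script blocks did not survive formatting, so here is a hedged sketch of what they could look like in a pipeline script step. The variable names ($(KeyVaultUrl), $(ClientId), $(ClientSecret), $(CertName)), the timestamp server, and the file path are illustrative assumptions, not values from the article; check the AzureSignTool README for the options supported by your version (newer releases may also require --azure-key-vault-tenant-id).

Shell
# Install AzureSignTool as a .NET global tool on the build agent
dotnet tool install --global AzureSignTool

# Sign the executable using the variables defined in the pipeline variable group
azuresigntool sign \
  --azure-key-vault-url "$(KeyVaultUrl)" \
  --azure-key-vault-client-id "$(ClientId)" \
  --azure-key-vault-client-secret "$(ClientSecret)" \
  --azure-key-vault-certificate "$(CertName)" \
  --timestamp-rfc3161 "http://timestamp.digicert.com" \
  --file-digest sha256 \
  "path/to/your-app.exe"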

By Anna Shipman
Getting To Know You: Speeding Up Developer Onboarding With LLMs and Unblocked

As anyone who has hired new developers onto an existing software team can tell you, onboarding new developers is one of the most expensive things you can do. One of the most difficult things about onboarding junior developers is that it takes your senior developers away from their work. Even the best hires might get Imposter Syndrome since they feel like they need to know more than they do and need to depend on their peers. You might have the best documentation, but it can be difficult to figure out where to start with onboarding. Onboarding senior developers takes time and resources as well. With the rise of LLMs, it seems like putting one on your code, documentation, chats, and ticketing systems would make sense. The ability to converse with an LLM trained on the right dataset would be like adding a team member who can make sure no one gets bogged down with sharing something that’s already documented. I thought I’d check out a new service called Unblocked that does just this. In this article, we will take a spin through a code base I was completely unfamiliar with and see what it would be like to get going on a new team with this tool. Data Sources If you’ve been following conversations around LLM development, then you know that they are only as good as the data they have access to. Fortunately, Unblocked allows you to connect a bunch of data sources to train your LLM. Additionally, because this LLM will be working on your specific code base and documentation, it wouldn’t even be possible to train it on another organization’s data. Unblocked isn’t trying to build a generic code advice bot. It’s personalized to your environment, so you don’t need to worry about data leaking to someone else. Setting up is pretty straightforward, thanks to lots of integrations with developer tools. After signing up for an account, you’ll be prompted to connect to the sources Unblocked supports. You'll need to wait a few minutes or longer depending on the size of your team while Unblocked ingests your content and trains the model. Getting Started I tried exploring some of the features of Unblocked. While there’s a web dashboard that you’ll interact with most of the time, I recommend you install the Unblocked Mac app, also. The app will run in your menu bar and allow you to ask Unblocked a question from anywhere. There are a bunch of other features for teammates interacting with Unblocked. I may write about those later, but for now, I just like that it gives me a universal shortcut (Command+Shift+U) to access Unblocked at any time. Another feature of the macOS menu bar app is that it provides a quick way to install the IDE Plugins based on what I have installed on my machine. Of course, you don’t have to install them this way (Unblocked does this install for you), but it takes some of the thinking out of it. Asking Questions Since I am working on a codebase that is already in Unblocked, I don’t need to wait for anything after getting my account set up on the platform. If you set up your code and documentation, then you won’t need your new developers to wait either. Let’s take this for a spin and look at what questions a new developer might ask the bot. I started by asking a question about setting up the front end. This answer looks pretty good! It’s enough to get me going in a local environment without contacting anyone else on my team. Unblocked kept everyone else “unblocked” on their work and pointed me in the right direction all on its own. 
I decided to ask about how to get a development environment set up locally. Let’s see what Unblocked says if I ask about that. This answer isn’t what I was hoping for, but I can click on the README link and find that this is not really Unblocked’s fault. My team just hasn’t updated the README for the backend app and Unblocked found the incorrect boilerplate setup instructions. Now that I know where to go to get the code, I’ll just update it after I have finished setting up the backend on my own. In the meantime, though, I will let Unblocked know that it didn’t give me the answer I hoped for. Since it isn’t really the bot’s fault that it’s wrong, I made sure to explain that in my feedback. I had a good start, but I wanted some more answers to my architectural questions. Let’s try something a little more complicated than reading the setup instructions from a README. This is a pretty good high-level overview, especially considering that I didn’t have to do anything, other than type them in. Unblocked generated these answers with links to the relevant resources for me to investigate more as needed. Browse the Code I actually cloned the repos for the front end and back end of my app to my machine and opened them in VS Code. Let’s take a look at how Unblocked works with the repos there. As soon as I open the Unblocked plugin while viewing the backend repository, I’m presented with recommended insights asked by other members of my team. There are also some references to pull requests, Slack conversations, and Jira tasks that the bot thinks are relevant before I open a single file. This is useful. As I open various files, the suggestions change with the context, too. Browse Components The VS Code plugin also called out some topics that it discovered about the app I’m trying out. I clicked on the Backend topic, and it took me to the following page: All of this is automatically generated, as Unblocked determines the experts for each particular part of the codebase. However, experts can also update their expertise when they configure their profiles in our organization. Now, in addition to having many questions I can look at about the backend application, I also know which of my colleagues to go to for questions. If I go to the Components page on the Web Dashboard, I can see a list of everything Unblocked thinks is important about this app. It also gives me a quick view of who I can talk to about these topics. Clicking on any one of them provides me with a little overview, and the experts on the system can manage these as needed. Again, all of this was automatically generated. Conclusion This was a great start with Unblocked. I’m looking forward to next trying this out on some of the things that I’ve been actively working on. Since the platform is not going to be leaking any of my secrets to other teams, I’m not very concerned at all about putting it on even the most secret of my projects and expect to have more to say about other use cases later. Unblocked is in public beta and free and worth checking out!

By Michael Bogan DZone Core CORE
Building Robust Real-Time Data Pipelines With Python, Apache Kafka, and the Cloud

In today's highly competitive landscape, businesses must be able to gather, process, and react to data in real-time in order to survive and thrive. Whether it's detecting fraud, personalizing user experiences, or monitoring systems, near-instant data is now a need, not a nice-to-have. However, building and running mission-critical, real-time data pipelines is challenging. The infrastructure must be fault-tolerant, infinitely scalable, and integrated with various data sources and applications. This is where leveraging Apache Kafka, Python, and cloud platforms comes in handy.

In this comprehensive guide, we will cover:

An overview of Apache Kafka architecture
Running Kafka clusters on the cloud
Building real-time data pipelines with Python
Scaling processing using PySpark
Real-world examples like user activity tracking, IoT data pipeline, and support chat analysis

We will include plenty of code snippets, configuration examples, and links to documentation along the way for you to get hands-on experience with these incredibly useful technologies. Let's get started!

Apache Kafka Architecture 101

Apache Kafka is a distributed, partitioned, replicated commit log for storing streams of data reliably and at scale. At its core, Kafka provides the following capabilities:

Publish-subscribe messaging: Kafka lets you broadcast streams of data like page views, transactions, user events, etc., from producers and consume them in real-time using consumers.
Message storage: Kafka durably persists messages on disk as they arrive and retains them for specified periods. Messages are stored and indexed by an offset indicating the position in the log.
Fault tolerance: Data is replicated across configurable numbers of servers. If a server goes down, another can ensure continuous operations.
Horizontal scalability: Kafka clusters can be elastically scaled by simply adding more servers. This allows for unlimited storage and processing capacity.

Kafka architecture consists of the following main components:

Topics
Messages are published to categories called topics. Each topic acts as a feed or queue of messages. A common scenario is a topic per message type or data stream. Each message in a Kafka topic has a unique identifier called an offset, which represents its position in the topic. A topic can be divided into multiple partitions, which are segments of the topic that can be stored on different brokers. Partitioning allows Kafka to scale and parallelize the data processing by distributing the load among multiple consumers.

Producers
These are applications that publish messages to Kafka topics. They connect to the Kafka cluster, serialize data (say, to JSON or Avro), assign a key, and send it to the appropriate topic. For example, a web app can produce clickstream events, or a mobile app can produce usage stats.

Consumers
Consumers read messages from Kafka topics and process them. Processing may involve parsing data, validation, aggregation, filtering, storing to databases, etc. Consumers connect to the Kafka cluster and subscribe to one or more topics to get feeds of messages, which they then handle as per the use case requirements.

Brokers
This is the Kafka server that receives messages from producers, assigns offsets, commits messages to storage, and serves data to consumers. Kafka clusters consist of multiple brokers for scalability and fault tolerance.

ZooKeeper
ZooKeeper handles coordination and consensus between brokers like controller election and topic configuration.
It maintains cluster state and configuration info required for Kafka operations.

This covers the Kafka basics. For an in-depth understanding, refer to the excellent Kafka documentation. Now, let's look at simplifying management by running Kafka in the cloud.

Kafka in the Cloud

While Kafka is highly scalable and reliable, operating it involves significant effort related to deployment, infrastructure management, monitoring, security, failure handling, upgrades, etc. Thankfully, Kafka is now available as a fully managed service from all major cloud providers:

Service | Description | Pricing
AWS MSK | Fully managed, highly available Apache Kafka clusters on AWS. Handles infrastructure, scaling, security, failure handling, etc. | Based on the number of brokers
Google Cloud Pub/Sub | Serverless, real-time messaging service comparable to Kafka. Auto-scaling, at-least-once delivery guarantees. | Based on usage metrics
Confluent Cloud | Fully managed event streaming platform powered by Apache Kafka. Free tier available. | Tiered pricing based on features
Azure Event Hubs | High-throughput event ingestion service with an Apache Kafka-compatible endpoint. Integrations with Azure data services. | Based on throughput units

The managed services abstract away the complexities of Kafka operations and let you focus on your data pipelines. Next, we will build a real-time pipeline with Python, Kafka, and the cloud. You can also refer to the following guide as another example.

Building Real-Time Data Pipelines

A basic real-time pipeline with Kafka has two main components: a producer that publishes messages to Kafka and a consumer that subscribes to topics and processes the messages. The architecture follows a simple flow from producer to Kafka topic to consumer.

We will use the Confluent Kafka Python client library for simplicity.

1. Python Producer

The producer application gathers data from sources and publishes it to Kafka topics. As an example, let's say we have a Python service collecting user clickstream events from a web application. When a user performs an action like a page view or a product rating, we can capture these events and send them to Kafka. We can abstract away the implementation details of how the web app collects the data.

Python
from confluent_kafka import Producer
import json

# User event data
event = {
    "timestamp": "2022-01-01T12:22:25",
    "userid": "user123",
    "page": "/product123",
    "action": "view"
}

# Convert to JSON
event_json = json.dumps(event)

# Kafka producer configuration
conf = {
    'bootstrap.servers': 'my_kafka_cluster-xyz.cloud.provider.com:9092',
    'client.id': 'clickstream-producer'
}

# Create producer instance
producer = Producer(conf)

# Publish event
producer.produce(topic='clickstream', value=event_json)

# Flush outstanding messages; the confluent_kafka Producer has no close() method,
# so flush() is all that is needed before the process exits
producer.flush()

This publishes the event to the clickstream topic on our cloud-hosted Kafka cluster.

The confluent_kafka Python client uses an internal buffer to batch messages before sending them to Kafka. This improves efficiency compared to sending each message individually. By default, messages accumulate in the buffer until either:

The buffer size limit is reached (configurable through the client's buffering settings).
The flush() method is called.

When flush() is called, any messages in the buffer are immediately sent to the Kafka broker. If we did not call flush() and instead relied on the buffer size limit, there would be a risk of losing events if a failure occurred before the next automatic flush. Calling flush() gives us greater control to minimize potential message loss. However, calling flush() after every produce() call introduces additional overhead.
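A common middle ground between flushing after every message and relying purely on the buffer is to register a delivery callback and poll the producer as you go; this is a minimal sketch using the same confluent_kafka client, where the events iterable and the logging are illustrative:

Python
from confluent_kafka import Producer
import json

def delivery_report(err, msg):
    # Invoked once per message to confirm delivery or surface an error
    if err is not None:
        print(f"Delivery failed: {err}")
    else:
        print(f"Delivered to {msg.topic()} [{msg.partition()}] at offset {msg.offset()}")

producer = Producer({'bootstrap.servers': 'my_kafka_cluster-xyz.cloud.provider.com:9092'})

for event in events:  # assumes an iterable of event dicts like the one shown above
    producer.produce(topic='clickstream',
                     value=json.dumps(event),
                     on_delivery=delivery_report)
    producer.poll(0)  # serve delivery callbacks without blocking

producer.flush()  # block once at shutdown until all outstanding messages are delivered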
Finding the right buffering configuration depends on our specific reliability needs and throughput requirements. We can keep adding events as they occur to build a live stream. This gives downstream data consumers a continual feed of events.

2. Python Consumer

Next, we have a consumer application to ingest events from Kafka and process them. For example, we may want to parse events, filter for a certain subtype, and validate the schema.

Python
from confluent_kafka import Consumer
import json

# Kafka consumer configuration
conf = {
    'bootstrap.servers': 'my_kafka_cluster-xyz.cloud.provider.com:9092',
    'group.id': 'clickstream-processor',
    'auto.offset.reset': 'earliest'
}

# Create consumer instance
consumer = Consumer(conf)

# Subscribe to 'clickstream' topic
consumer.subscribe(['clickstream'])

# Poll Kafka for messages indefinitely
while True:
    msg = consumer.poll(1.0)
    if msg is None:
        continue
    if msg.error():
        # Surface broker or partition errors instead of trying to parse them
        print(f"Consumer error: {msg.error()}")
        continue

    # Parse JSON from message value
    event = json.loads(msg.value())

    # Process event based on business logic
    if event['action'] == 'view':
        print('User viewed product page')
    elif event['action'] == 'rating':
        # Validate rating, insert into DB, etc.
        pass

    print(event)  # Print event

# Close consumer (unreachable in this infinite loop; call it on shutdown)
consumer.close()

This polls the clickstream topic for new messages, consumes them, and takes action based on the event type: printing, updating a database, and so on.

For a simple pipeline, this works well. But what if we get 100x more events per second? A single consumer will not be able to keep up. This is where a tool like PySpark helps scale out processing.

3. Scaling With PySpark

PySpark provides a Python API for Apache Spark, a distributed computing framework optimized for large-scale data processing. With PySpark, we can leverage Spark's in-memory computing and parallel execution to consume Kafka streams faster.

First, we load the Kafka data into a DataFrame, which can be manipulated using Spark SQL or Python.

Python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json

# Initialize Spark session
spark = SparkSession.builder \
    .appName('clickstream-consumer') \
    .getOrCreate()

# Read stream from the Kafka 'clickstream' topic
df = spark.readStream \
    .format("kafka") \
    .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092") \
    .option("subscribe", "clickstream") \
    .load()

# Parse JSON from the value column (schema must be defined to match the event structure)
df = df.selectExpr("CAST(value AS STRING)")
df = df.select(from_json(col("value"), schema).alias("data"))

Next, we can express whatever processing logic we need using DataFrame transformations:

Python
from pyspark.sql.functions import *

# Filter for 'page view' events
views = df.filter(col("data.action") == "view")

# Count views per page URL
counts = views.groupBy(col("data.page")) \
    .count() \
    .orderBy("count")

# Print the stream
query = counts.writeStream \
    .outputMode("complete") \
    .format("console") \
    .start()

query.awaitTermination()

This applies operations like filter, aggregate, and sort on the stream in real-time, leveraging Spark's distributed runtime. We can also parallelize consumption using multiple consumer groups and write the output sink to databases, cloud storage, etc.

This allows us to build scalable stream processing on data from Kafka. Now that we've covered the end-to-end pipeline, let's look at some real-world examples of applying it.

Real-World Use Cases

Let's explore some practical use cases where these technologies can help process huge amounts of real-time data at scale.

User Activity Tracking

Many modern web and mobile applications track user actions like page views, button clicks, transactions, etc., to gather usage analytics.
Problem

Data volumes can scale massively with millions of active users.
Need insights in real-time to detect issues and personalize content.
Want to store aggregate data for historical reporting.

Solution

Ingest clickstream events into Kafka topics using Python or any language.
Process using PySpark for cleansing, aggregations, and analytics.
Save output to databases like Cassandra for dashboards.
Detect anomalies using Spark ML for real-time alerting.

IoT Data Pipeline

IoT sensors generate massive volumes of real-time telemetry like temperature, pressure, location, etc.

Problem

Millions of sensor events per second
Requires cleaning, transforming, and enriching
Need real-time monitoring and historical storage

Solution

Collect sensor data in Kafka topics using language SDKs.
Use PySpark for data wrangling and joining external data (a minimal windowed-aggregation sketch appears after the summary below).
Feed the stream into ML models for real-time predictions.
Store aggregate data in a time series database for visualization.

Customer Support Chat Analysis

Chat platforms like Zendesk capture huge amounts of customer support conversations.

Problem

Millions of chat messages per month
Need to understand customer pain points and agent performance
Must detect negative sentiment and urgent issues

Solution

Ingest chat transcripts into Kafka topics using a connector.
Aggregate and process using PySpark SQL and DataFrames.
Feed data into NLP models to classify sentiment and intent.
Store insights into the database for historical reporting.
Present real-time dashboards for contact center ops.

This demonstrates applying the technologies to real business problems involving massive, fast-moving data.

Learn More

To summarize, we looked at how Python, Kafka, and the cloud provide a great combination for building robust, scalable real-time data pipelines.
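To make the IoT pipeline above concrete, here is a minimal hedged PySpark example of the "cleaning and aggregating sensor events" step: a five-minute average temperature per sensor with a watermark for late data. The topic name, schema, and field names are assumptions for illustration.

Python
from pyspark.sql import SparkSession
from pyspark.sql.functions import avg, col, from_json, window
from pyspark.sql.types import DoubleType, StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("iot-aggregator").getOrCreate()

# Hypothetical sensor event schema
schema = StructType([
    StructField("sensor_id", StringType()),
    StructField("temperature", DoubleType()),
    StructField("event_time", TimestampType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker1:9092")
       .option("subscribe", "iot-telemetry")
       .load())

events = (raw.select(from_json(col("value").cast("string"), schema).alias("e"))
             .select("e.*"))

# Five-minute average temperature per sensor, tolerating 10 minutes of late data
agg = (events
       .withWatermark("event_time", "10 minutes")
       .groupBy(window(col("event_time"), "5 minutes"), col("sensor_id"))
       .agg(avg("temperature").alias("avg_temperature")))

query = (agg.writeStream
         .outputMode("update")
         .format("console")
         .start())
query.awaitTermination()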

By Dmitrii Mitiaev

Top Tools Experts


Bartłomiej Żyliński

Software Engineer,
SoftwareMill

I'm a Software Engineer with industry experience in designing and implementing complex applications and systems, mostly where it's not visible to users - at the backend. I'm a self-taught developer and a hands-on learner, constantly working towards expanding my knowledge further. I contribute to several open source projects, my main focus being sttp (where you can see my contributions on the project's GitHub). I appreciate the exchange of technical know-how, which is expressed in my various publications found on Medium and DZone and in appearances at top tech conferences and meetups, including Devoxx Belgium. I enjoy exploring topics that combine software engineering and mathematics. In my free time, I like to read a good book.

Abhishek Gupta

Principal Developer Advocate,
AWS

I mostly work on open-source technologies including distributed data systems, Kubernetes and Go

Yitaek Hwang

Software Engineer,
NYDIG

The Latest Tools Topics

Maximize Kubernetes Security: Automate TLS Certificate Management With Cert-Manager on KIND Clusters
Effortlessly manage TLS certificates in Kubernetes with cert-manager. Enhance security and streamline deployments with automated certificate issuance and renewal.
March 30, 2024
by Rajesh Gheware
· 97 Views · 1 Like
How To Run OWASP ZAP Security Tests in Azure DevOps Pipeline
In this article, learn how configuring OWASP ZAP security tests for webpage UI or API helps to identify the security risks.
Updated March 29, 2024
by Ganesh Hegde DZone Core CORE
· 28,382 Views · 2 Likes
Secure and Scalable CI/CD Pipeline With AWS
Amazon and DevOps go hand-in-hand with a number of tools and processes that enable an efficient CI/CD pipeline.
Updated March 29, 2024
by Chandani Patel
· 37,001 Views · 10 Likes
How to Implement Jenkins CI/CD With Git Crypt
Take a look at this tutorial that demonstrates how to implement Git secrets with a gpg private key and how to connect it with a Jenkins CI/CD pipeline.
Updated March 29, 2024
by Aditya C S
· 43,585 Views · 6 Likes
How To Implement CI/CD for Multibranch Pipeline in Jenkins
In this guide, explore the process of creating a Jenkins multibranch pipeline project and configuring it with the Git repo.
Updated March 29, 2024
by Krishna Prasad Kalakodimi
· 90,837 Views · 16 Likes
Easily Automate Your CI/CD Pipeline With Kubernetes, Helm, and Jenkins
Learn how to set up a workflow to automate your CI/CD pipeline for quick and easy deployments using Jenkins, Helm, and Kubernetes.
Updated March 29, 2024
by Eldad Assis
· 159,043 Views · 34 Likes
Building a Fortified Foundation: The Essential Guide to Secure Landing Zones in the Cloud
Explore Secure Landing Zones (SLZ), a foundational architecture in the cloud that provides a secure environment for hosting workloads.
March 29, 2024
by Josephine Eskaline Joyce DZone Core CORE
· 150 Views · 1 Like
Using Identity-Based Policies With Amazon DynamoDB
This article is designed to guide you through the benefits and implementation of these policies, supplemented with practical examples.
March 29, 2024
by Jagadish Nimmagadda
· 364 Views · 2 Likes
New Web Management Console Simplifies High Availability and Disaster Recovery for Linux Environments
SIOS LifeKeeper's new web console simplifies HA/DR management for Linux, reducing complexity across multi-cloud environments and empowering IT generalists.
March 28, 2024
by Tom Smith DZone Core CORE
· 861 Views · 1 Like
Advanced-Data Processing With AWS Glue
Discover how AWS Glue, a serverless data integration service, addresses challenges surrounding unstructured data with custom crawlers and built-in classifiers.
March 27, 2024
by Raghava Dittakavi DZone Core CORE
· 297 Views · 1 Like
Implementing Disaster Backup for a Kubernetes Cluster: A Comprehensive Guide
Implementing a catastrophe backup strategy is necessary to limit the risk of data loss and downtime. Learn how to set up a catastrophe backup for a Kubernetes cluster.
March 27, 2024
by Aditya Bhuyan
· 297 Views · 3 Likes
S3 Cost Savings
Amazon S3 is a fantastically versatile and scalable storage solution, but keeping costs under control is crucial. This blog post dives into key strategies to lower costs.
March 27, 2024
by Rohit Bordia
· 456 Views · 3 Likes
Long Tests: Saving All App’s Debug Logs and Writing Your Own Logs
Approach for software QA to troubleshoot bugs using detailed logging — custom and app's debug logs, using Python and Bash scripts for logging and debugging.
March 27, 2024
by Konstantin Sakhchinskiy
· 1,227 Views · 2 Likes
Integrating Salesforce APEX REST
In this post, walk through the process of invoking an APEX REST method in MuleSoft and fetching account information from Salesforce using APEX code snippets.
March 26, 2024
by Karan Gupta
· 1,107 Views · 2 Likes
Automating AWS Infrastructure: Creating API Gateway, NLB, Security Group, and VPC With CloudFormation
In modern cloud environments, Infrastructure as Code (IaC) has become a cornerstone for managing and provisioning resources efficiently
March 26, 2024
by Vijay Panwar
· 906 Views · 1 Like
Harnessing the Power of Observability in Kubernetes With OpenTelemetry
This article provides a thorough guide on implementing OpenTelemetry in a Kubernetes environment, showcasing how to enhance observability.
March 26, 2024
by Rajesh Gheware
· 1,079 Views · 1 Like
Mastering Daily Kubernetes Operations: A Guide To Useful kubectl Commands for Software Engineers
Learn about a variety of commands in kubectl, a critical tool for software engineers for managing and troubleshooting Kubernetes environments.
March 26, 2024
by Elias Naduvath Varghese
· 1,433 Views · 1 Like
Deploying Heroku Apps To Staging and Production Environments With GitLab CI/CD
Today we'll learn how to automatically deploy your app to the correct environment any time code is merged into your dev or main branches.
March 25, 2024
by Tyler Hawkins DZone Core CORE
· 198 Views · 2 Likes
Service Mesh Unleashed: A Riveting Dive Into the Istio Framework
This article presents an in-depth analysis of the service mesh landscape, focusing specifically on Istio, one of the most popular service mesh frameworks.
March 25, 2024
by Pramodkumar Nedumpilli Ramakrishnan
· 1,061 Views · 2 Likes
Performance Optimization in Agile IoT Cloud Applications: Leveraging Grafana and Similar Tools
Grafana is a vital open-source tool for real-time monitoring, offering customizable dashboards, proactive alerts, and seamless integration for IoT applications.
March 25, 2024
by Deep Manishkumar Dave
· 338 Views · 1 Like
