The popularity of Kubernetes (K8s) as the de facto orchestration platform for the cloud shows no sign of slowing down. The 2023 Kubernetes Security Report by the security company Wiz clearly illustrates the trend: as adoption continues to soar, so do the security risks and, most importantly, the attacks threatening K8s clusters. One such threat comes in the form of long-lived service account tokens. In this blog, we are going to dive deep into what these tokens are, their uses, the risks they pose, and how they can be exploited. We will also advocate for the use of short-lived tokens for a better security posture.

Service account tokens are bearer tokens (a type of token mostly used for authentication in web applications and APIs) used by service accounts to authenticate to the Kubernetes API. Service accounts provide an identity for processes (applications) that run in a Pod, enabling them to interact with the Kubernetes API securely. Crucially, these tokens are long-lived: historically, when a service account was created, Kubernetes automatically generated a token and stored it indefinitely as a Secret, which could be mounted into pods and used by applications to authenticate API requests. Note: in more recent versions, including Kubernetes v1.29, API credentials are obtained directly by using the TokenRequest API and are mounted into Pods using a projected volume. The tokens obtained using this method have bounded lifetimes and are automatically invalidated when the Pod they are mounted into is deleted. As a reminder, the Kubelet on each node is responsible for mounting service account tokens into pods so they can be used by applications within those pods to authenticate to the Kubernetes API when needed. If you need a refresher on K8s components, look here.

The Utility of Service Account Tokens

Service account tokens are essential for enabling applications running on Kubernetes to interact with the Kubernetes API. They are used to deploy applications, manage workloads, and perform administrative tasks programmatically. For instance, a Continuous Integration/Continuous Deployment (CI/CD) tool like Jenkins would use a service account token to deploy new versions of an application or roll back a release.
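Before looking at the risks, it helps to see the two flavors of token side by side. The sketch below assumes a service account named build-robot (the name is only illustrative): the first snippet recreates the legacy, non-expiring Secret-based token described above, while the second requests a bounded token through the TokenRequest API using kubectl create token (available since Kubernetes 1.24).

Shell
# Legacy pattern: a non-expiring token that the token controller materializes into a Secret
kubectl create serviceaccount build-robot
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: build-robot-long-lived-token
  annotations:
    kubernetes.io/service-account.name: build-robot
type: kubernetes.io/service-account-token
EOF

# Modern pattern: a short-lived token minted on demand and never stored in the cluster
kubectl create token build-robot --duration=10m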
The Risks of Longevity

While service account tokens are indispensable for automation within Kubernetes, their longevity can be a significant risk factor. Long-lived tokens, if compromised, give attackers ample time to explore and exploit a cluster. Once in the hands of an attacker, these tokens can be used to gain unauthorized access, elevate privileges, exfiltrate data, or even disrupt the entire cluster's operations. Here are a few leak scenarios that could lead to some serious damage:

Misconfigured access rights: A pod or container may be misconfigured to have broader file system access than necessary. If a token is stored on a shared volume, other containers or malicious pods that have been compromised could potentially access it.
Insecure transmission: If the token is transmitted over the network without proper encryption (like sending it over HTTP instead of HTTPS), it could be intercepted by network sniffing tools.
Code repositories: Developers might inadvertently commit a token to a public or private source code repository. If the repository is public or becomes exposed, the token is readily available to anyone who accesses it.
Logging and monitoring systems: Tokens might get logged by applications or monitoring systems and could be exposed if logs are not properly secured or if verbose logging is accidentally enabled.
Insider threat: A malicious insider with access to the Kubernetes environment could extract the token and use it or leak it intentionally.
Application vulnerabilities: If an application running within the cluster has vulnerabilities (e.g., a Remote Code Execution flaw), an attacker could exploit this to gain access to the pod and extract the token.

How Could an Attacker Exploit Long-Lived Tokens?

Attackers can collect long-lived tokens through network eavesdropping, exploiting vulnerable applications, or leveraging social engineering tactics. With these tokens, they can manipulate Kubernetes resources at will. Here is a non-exhaustive list of potential abuses:

Abuse the cluster's (often barely limited) infra resources for cryptocurrency mining or as part of a botnet.
With API access, attackers could deploy malicious containers, alter running workloads, exfiltrate sensitive data, or even take down the entire cluster.
If the token has broad permissions, it can be used to modify roles and bindings to elevate privileges within the cluster.
The attacker could create additional resources that provide them with persistent access (backdoor) to the cluster, making it harder to remove their presence.
Access to sensitive data stored in the cluster or accessible through it could lead to data theft or leakage.
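To make the threat concrete, the sketch below shows how little an attacker needs once a token has leaked: network access to the API server and nothing else. The server address is a placeholder, and the commands only succeed to the extent that the role bound to the token allows.

Shell
# Enumerate what the stolen token is allowed to do
kubectl --server=https://API_SERVER:6443 --token="$STOLEN_TOKEN" \
  --insecure-skip-tls-verify auth can-i --list

# Read Secrets straight from the REST API if the bound role permits it
curl -sk -H "Authorization: Bearer $STOLEN_TOKEN" \
  https://API_SERVER:6443/api/v1/namespaces/default/secrets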
Why Aren’t Service Account Tokens Short-Lived by Default?

Short-lived tokens are a security best practice in general, particularly for managing access to very sensitive resources like the Kubernetes API. They reduce the window of opportunity for attackers to exploit a token and facilitate better management of permissions as application access requirements change. Automating token rotation limits the impact of a potential compromise and aligns with the principle of least privilege: granting only the access necessary for a service to operate.

The problem is that implementing short-lived tokens comes with some overhead. First, it typically requires a more complex setup. You need an automated process to handle token renewal before expiry, which may involve additional scripts or Kubernetes operators that watch for token expiration and request new tokens as necessary. This often means integrating a secret management system that can securely store and automatically rotate the tokens, adding a new dependency for system configuration and maintenance. Note: it goes without saying that using a secrets manager with Kubernetes is highly recommended, even for non-production workloads. But the overhead should not be underestimated.

Second, software teams running their CI/CD workers on top of the cluster will need adjustments to support dynamic retrieval and injection of these tokens into the deployment process. This could require changes in the pipeline configuration and additional error handling to manage potential token expiration during a pipeline run, which can be a true headache. And secrets management is just the tip of the iceberg. You will also need monitoring and alerts if you want to troubleshoot renewal failures. Fine-tuning token expiry time could break the deployment process, requiring immediate attention to prevent downtime or deployment failures.

Finally, there could also be performance considerations, as many more API calls are needed to retrieve new tokens and update the relevant Secrets.

By default, Kubernetes opts for a straightforward setup by issuing service account tokens without a built-in expiration. This approach simplifies initial configuration but lacks the security benefits of token rotation. It is the Kubernetes admin’s responsibility to configure more secure practices by implementing short-lived tokens and the necessary infrastructure for their rotation, thereby enhancing the cluster's security posture.

Mitigation Best Practices

For many organizations, the additional overhead is justified by the security improvements. Tools like service mesh implementations (e.g., Istio), secret managers (e.g., CyberArk Conjur), or cloud provider services can manage the lifecycle of short-lived certificates and tokens, helping to reduce the overhead. Additionally, recent versions of Kubernetes offer features like the TokenRequest API, which can automatically rotate tokens and project them into the running pods.

Even without any additional tool, you can mitigate the risks by limiting the Service Account auto-mount feature. To do so, you can opt out of the default API credential automounting with a single flag in the service account or pod configuration. Here are two examples. For a Service Account:

YAML
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
automountServiceAccountToken: false
...

And for a specific Pod:

YAML
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: build-robot
  automountServiceAccountToken: false
...

The bottom line is that if an application does not need to access the K8s API, it should not have a token mounted. This also limits the number of service account tokens an attacker can access if the attacker manages to compromise any of the Kubernetes hosts. Okay, you might say, but how do we enforce this policy everywhere? Enter Kyverno, a policy engine designed for K8s.

Enforcement With Kyverno

Kyverno allows cluster administrators to manage, validate, mutate, and generate Kubernetes resources based on custom policies. To prevent the creation of long-lived service account tokens, one can define the following Kyverno policy:

YAML
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: deny-secret-service-account-token
spec:
  validationFailureAction: Enforce
  background: false
  rules:
    - name: check-service-account-token
      match:
        any:
          - resources:
              kinds:
                - Secret
      validate:
        cel:
          expressions:
            - message: "Long lived API tokens are not allowed"
              expression: >
                object.type != "kubernetes.io/service-account-token"

This policy ensures that only Secrets that are not of type kubernetes.io/service-account-token can be created, effectively blocking the creation of long-lived service account tokens!

Applying the Kyverno Policy

To apply this policy, you need to have Kyverno installed on your Kubernetes cluster (tutorial). Once Kyverno is running, you can apply the policy by saving the above YAML to a file and using kubectl to apply it:

Shell
kubectl apply -f deny-secret-service-account-token.yaml

After applying this policy, any attempt to create a Secret that is a service account token of the prohibited type will be denied, enforcing a safer token lifecycle management practice.

Wrap Up

In Kubernetes, managing the lifecycle and access of service account tokens is a critical aspect of cluster security.
By preferring short-lived tokens over long-lived ones and enforcing policies with tools like Kyverno, organizations can significantly reduce the risk of token-based security incidents. Stay vigilant, automate security practices, and ensure your Kubernetes environment remains robust against threats.
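As a closing check on your own clusters, you can verify that the policy behaves as intended and audit for token Secrets that predate it. A minimal sketch, assuming you saved the legacy Secret manifest shown earlier to a file named long-lived-token.yaml (the file name is illustrative, and the exact denial message depends on your Kyverno version):

Shell
# This request should now be rejected by the deny-secret-service-account-token policy,
# quoting the message "Long lived API tokens are not allowed"
kubectl apply -f long-lived-token.yaml

# Audit for long-lived service account tokens created before the policy was enforced
kubectl get secrets --all-namespaces --field-selector type=kubernetes.io/service-account-token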
In this tutorial, we’ll learn how to build a website for collecting digital collectibles (or NFTs) on the blockchain Flow. We'll use the smart contract language Cadence along with React to make it all happen. We'll also learn about Flow, its advantages, and the fun tools we can use. By the end of this article, you’ll have the tools and knowledge you need to create your own decentralized application on the Flow blockchain. Let’s dive right in! What Are We Building? We're building an application for digital collectibles. Each collectible is a Non-Fungible Token (NFT). (If you are new and don’t understand NFT, then take a look here.) Our app will allow you to collect NFTs, and each item will be unique from the others. To make all this work, we’ll use Flow's NonFungibleToken Standard, which is a set of rules that helps us manage these special digital items (similar to ERC-721 in Ethereum). Prerequisites Before you begin, be sure to install the Flow CLI on your system. If you haven't done so, follow these installation instructions. Setting Up If you're ready to kickstart your project, first, type in the command flow setup. This command does some magic behind the scenes to set up the foundation of your project. It creates a folder system and sets up a file called flow.json to configure your project, making sure everything is organized and ready to go! Project Structure The project will contain a cadence folder and flow.json file. (A flow.json file is a configuration file for your project, automatically maintained.)The Cadence folder contains the following: /contracts: Contains all Cadence contracts. /scripts: Holds all Cadence scripts. /transactions: Stores all Cadence transactions. Follow the steps below to use Flow NFT Standard. Step 1: Create a File First, go to the flow-collectibles-portal folder and find the cadence folder. Then, open the contracts folder. Make a new file and name it NonFungibleToken.cdc. Step 2: Copy and Paste Now, open the link named NonFungibleToken, which contains the NFT standard. Copy all the content from that file and paste it into the new file you just created ("NonFungibleToken.cdc"). That's it! You've successfully set up the standards for your project.Now, let’s write some code! However, before we dive into coding, it's important for developers to establish a mental model of how to structure their code. At the top level, our codebase consists of three main components: NFT: Each collectible is represented as an NFT. Collection: A collection refers to a group of NFTs owned by a specific user. Global Functions and Variables: These are functions and variables defined at the global level for the smart contract and are not associated with any particular resource. Smart Contract Structure Smart Contract Basic Structure Create a new file named Collectibles.cdc inside cadence/contracts. This is where we will write the code. Contract Structure JavaScript import NonFungibleToken from "./NonFungibleToken.cdc" pub contract Collectibles: NonFungibleToken{ pub var totalSupply: UInt64 // other code will come here init(){ self.totalSupply = 0 } } Let's break down the code line by line: First, we'll need to standardize that we are building an NFT by including the so-called "NonFungibleToken." This is an NFT standard built by Flow which defines the following set of functionality that must be included by each NFT smart contract. After importing, let's create our contract. To do that, we use pub contract [contract name]. Use the same syntax each time you create a new contract. 
You can fill in the contract name with whatever you’d like to call your contract. In our case, let’s call it Collectibles. Next, we want to make sure our contract follows a certain set of functionality and rules of NonFungibleToken. To do that, we add NonFungibleToken interface with the help of `:`.Like this (`pub contract Collectibles: NonFungibleToken{}`) Every single contract MUST have the init() function. It is called when the contract is initially deployed. This is similar to what Solidity calls a Constructor. Now let’s create a global variable called totalSupply with a data type UInt64. This variable will keep track of your total Collectibles. Now initialize totalSupply with value 0. That's it! We set up the foundation for our Collectibles contract. Now, we can start adding more features and functionalities to make it even more exciting. Before moving forward, please check out the code snippet to understand how we define variables in Cadence: Resource NFT Add the following code to your smart contract: JavaScript import NonFungibleToken from "./NonFungibleToken.cdc" pub contract Collectibles: NonFungibleToken{ // above code… pub resource NFT: NonFungibleToken.INFT{ pub let id: UInt64 pub var name: String pub var image: String init(_id:UInt64, _name:String, _image:String){ self.id = _id self.name = _name self.image = _image } } // init()... } As you have seen before, the contract implements the NFT standard interface, represented by pub contract Collectibles: NonFungibleToken. Similarly, resources can also implement various resource interfaces. So let’s add NonFungibleToken.INFT interface to the NFT Resource, which mandates the existence of a public property called id within the resource.Here are the variables we will use in the NFT resource: id: Maintains the ID of NFT name: Name of the NFT. image: Image URL of NFT. After defining the variable, be sure to initialize the variable in the init() function. Let’s move forward and create another resource called Collection Resource. Collection Resource First, you need to understand how Collection Resources work. If you need to store a music file and several photos on your laptop, what would you do? Typically, you’d navigate to a local drive (let’s say your D-Drive) create a music folder, and photos folder. You’d then copy and paste the music and photo files into your destination folders.Similarly, this is how your digital collectibles on Flow work. Imagine your laptop as a Flow Blockchain Account, your D-Drive as Account Storage, and Folder as a Collection. So when interacting with any project to buy NFTs, the project creates its collection in your account storage, similar to creating a folder on your D-Drive. When you interact with 10 different NFT projects, you’ll end up with 10 different collections in your account. It's like having a personal space to store and organize your unique digital treasures! JavaScript import NonFungibleToken from "./NonFungibleToken.cdc" pub contract Collectibles: NonFungibleToken{ //Above code NFT Resource… // Collection Resource pub resource Collection{ } // Below code… } Each collection has a ownedNFTs variable to hold the NFT Resources. JavaScript pub resource Collection { pub var ownedNFTs: @{UInt64: NonFungibleToken.NFT} init(){ self.ownedNFTs <- {} } } Resource Interfaces A resource interface in Flow is similar to interfaces in other programming languages. It sits on top of a resource and ensures that the resource that implements it has the required functionality as defined by the interface. 
It can also be used to restrict access to the whole resource and be more restrictive in terms of access modifiers than the resource itself. In the NonFungibleToken standard, there are several resource interfaces like INFT, Provider, Receiver, and CollectionPublic. Each of these interfaces has specific functions and fields that need to be implemented by the resource that uses them. In this contract, we’ll use these three interfaces from NonFungibleToken: Provider, Receiver, and CollectionPublic. These interfaces define functions such as deposit, withdraw, borrowNFT, and getIDs. We’ll explain each of these in detail as we go. We will also add some events that we’ll emit from these functions, as well as declare some variables we’ll use further along in the tutorial. JavaScript pub contract Collectibles:NonFungibleToken{ // rest of the code… pub event ContractInitialized() pub event Withdraw(id: UInt64, from: Address?) pub event Deposit(id: UInt64, to: Address?) pub let CollectionStoragePath: StoragePath pub let CollectionPublicPath: PublicPath pub resource interface CollectionPublic{ pub fun deposit(token: @NonFungibleToken.NFT) pub fun getIDs(): [UInt64] pub fun borrowNFT(id: UInt64): &NonFungibleToken.NFT } pub resource Collection: CollectionPublic, NonFungibleToken.Provider, NonFungibleToken.Receiver, NonFungibleToken.CollectionPublic{ pub var ownedNFTs: @{UInt64: NonFungibleToken.NFT} init(){ self.ownedNFTs <- {} } } } Withdraw Now, let's create the withdraw() function required by the interface. JavaScript pub resource Collection: CollectionPublic, NonFungibleToken.Provider, NonFungibleToken.Receiver, NonFungibleToken.CollectionPublic{ // other code pub fun withdraw(withdrawID: UInt64): @NonFungibleToken.NFT { let token <- self.ownedNFTs.remove(key: withdrawID) ?? panic("missing NFT") emit Withdraw(id: token.id, from: self.owner?.address) return <- token } init()... } With the help of this function, you can move the NFT resource out of the collection. If it: Fails: Panic and throws an error. Successful: It emits a withdraw event and returns the resource to the caller. The caller can then use this resource and save it within their account storage. Deposit Now it’s time for the deposit() function required by NonFungibleToken.Receiver. JavaScript pub resource Collection: CollectionPublic, NonFungibleToken.Provider, NonFungibleToken.Receiver, NonFungibleToken.CollectionPublic{ // other code pub fun withdraw(withdrawID: UInt64): @NonFungibleToken.NFT { let token <- self.ownedNFTs.remove(key: withdrawID) ?? panic("missing NFT") emit Withdraw(id: token.id, from: self.owner?.address) return <- token } pub fun deposit(token: @NonFungibleToken.NFT) { let id = token.id let oldToken <- self.ownedNFTs[id] <-token destroy oldToken emit Deposit(id: id, to: self.owner?.address) } init()... } Borrow and GetID Now, let’s focus on the two functions required by NonFungibleToken.CollectionPublic: borrowNFT() and getID(). JavaScript pub resource Collection: CollectionPublic, NonFungibleToken.Provider, NonFungibleToken.Receiver, NonFungibleToken.CollectionPublic{ // other code pub fun withdraw(withdrawID: UInt64): @NonFungibleToken.NFT { let token <- self.ownedNFTs.remove(key: withdrawID) ?? 
panic("missing NFT") emit Withdraw(id: token.id, from: self.owner?.address) return <- token } pub fun deposit(token: @NonFungibleToken.NFT) { let id = token.id let oldToken <- self.ownedNFTs[id] <-token destroy oldToken emit Deposit(id: id, to: self.owner?.address) } pub fun borrowNFT(id: UInt64): &NonFungibleToken.NFT { if self.ownedNFTs[id] != nil { return (&self.ownedNFTs[id] as &NonFungibleToken.NFT?)! } panic("NFT not found in collection.") } pub fun getIDs(): [UInt64]{ return self.ownedNFTs.keys } init()... } Destructor The last thing we need for the Collection Resource is a destructor. JavaScript destroy (){ destroy self.ownedNFTs } Since the Collection resource contains other resources (NFT resources), we need to specify a destructor. A destructor runs when the object is destroyed. This ensures that resources are not left "homeless" when their parent resource is destroyed. We don't need a destructor for the NFT resource as it doesn’t contain any other resources. Let’s look at the complete collection resource source code: JavaScript import NonFungibleToken from "./NonFungibleToken.cdc" pub contract Collectibles: NonFungibleToken{ pub var totalSupply: UInt64 pub resource NFT: NonFungibleToken.INFT{ pub let id: UInt64 pub var name: String pub var image: String init(_id:UInt64, _name:String, _image:String){ self.id = _id self.name = _name self.image = _image } } pub resource interface CollectionPublic{ pub fun deposit(token: @NonFungibleToken.NFT) pub fun getIDs(): [UInt64] pub fun borrowNFT(id: UInt64): &NonFungibleToken.NFT } pub event ContractInitialized() pub event Withdraw(id: UInt64, from: Address?) pub event Deposit(id: UInt64, to: Address?) pub let CollectionStoragePath: StoragePath pub let CollectionPublicPath: PublicPath pub resource Collection: CollectionPublic, NonFungibleToken.Provider, NonFungibleToken.Receiver, NonFungibleToken.CollectionPublic{ pub var ownedNFTs: @{UInt64: NonFungibleToken.NFT} init(){ self.ownedNFTs <- {} } destroy (){ destroy self.ownedNFTs } pub fun withdraw(withdrawID: UInt64): @NonFungibleToken.NFT { let token <- self.ownedNFTs.remove(key: withdrawID) ?? panic("missing NFT") emit Withdraw(id: token.id, from: self.owner?.address) return <- token } pub fun deposit(token: @NonFungibleToken.NFT) { let id = token.id let oldToken <- self.ownedNFTs[id] <-token destroy oldToken emit Deposit(id: id, to: self.owner?.address) } pub fun borrowNFT(id: UInt64): &NonFungibleToken.NFT { if self.ownedNFTs[id] != nil { return (&self.ownedNFTs[id] as &NonFungibleToken.NFT?)! } panic("NFT not found in collection.") } pub fun getIDs(): [UInt64]{ return self.ownedNFTs.keys } } init(){ self.CollectionPublicPath = /public/NFTCollection self.CollectionStoragePath = /storage/NFTCollection self.totalSupply = 0 emit ContractInitialized() } } Now we have finished all the resources. Next, we’ll look at the global function. Global Function Global Functions are functions that are defined on the global level of the smart contract, meaning they are not part of any resource. These are accessible and called by the public and expose the core functionality of the smart contract to the public. createEmptyCollection: This function initializes an empty Collectibles.Collection into caller account storage. checkCollection: This handy function helps you discover whether or not your account already has a collection resource. mintNFT: This function is super cool because it allows anyone to create an NFT. 
JavaScript
// pub resource Collection…

pub fun createEmptyCollection(): @Collection{
    return <- create Collection()
}

pub fun checkCollection(_addr: Address): Bool{
    return getAccount(_addr)
        .capabilities.get<&{Collectibles.CollectionPublic}>(Collectibles.CollectionPublicPath)!
        .check()
}

pub fun mintNFT(name:String, image:String): @NFT{
    Collectibles.totalSupply = Collectibles.totalSupply + 1
    let nftId = Collectibles.totalSupply
    var newNFT <- create NFT(_id:nftId, _name:name, _image:image)
    return <- newNFT
}

init()...

Wrapping up the Smart Contract

And now, FINALLY, with everything in place, we’re done writing our smart contract. Take a look at the final code here. Now, let’s look at how a user interacts with smart contracts deployed on the Flow blockchain. There are two steps to interact with the Flow blockchain:

Mutate the state by running transactions.
Query the blockchain by running a script.

Mutate the State by Running Transactions

Transactions are cryptographically signed data that contain a set of instructions that interact with the smart contract to update the Flow state. In simple terms, this is like a function call that changes the data on the blockchain. Transactions usually involve some cost, which can vary depending on the blockchain you are on. A transaction includes multiple optional phases: prepare, pre, execute, and post. You can read more about this in the Cadence reference document on transactions. Each phase has a purpose; the two most important phases are prepare and execute.

Prepare Phase: This phase is used to access data and information inside the signer's account (allowed by the AuthAccount type).
Execute Phase: This phase is used to execute actions.

Now, let’s create a transaction for our project. Follow the steps below to create a transaction in your project folder.

Step 1: Create a File

First, go to the project folder and open the cadence folder. Inside it, open the transactions folder and make two new files named Create_Collection.cdc and mint_nft.cdc.

Step 2: Add the Create Collection Transaction Code

JavaScript
import Collectibles from "../contracts/Collectibles.cdc"

transaction {
    prepare(signer: AuthAccount) {
        if signer.borrow<&Collectibles.Collection>(from: Collectibles.CollectionStoragePath) == nil {
            let collection <- Collectibles.createEmptyCollection()
            signer.save(<-collection, to: Collectibles.CollectionStoragePath)
            let cap = signer.capabilities.storage.issue<&{Collectibles.CollectionPublic}>(Collectibles.CollectionStoragePath)
            signer.capabilities.publish(cap, at: Collectibles.CollectionPublicPath)
        }
    }
}

Let's break down this code line by line: This transaction interacts with the Collectibles smart contract. Then, it checks if the sender (signer) has a Collection resource stored in their account by borrowing a reference to the Collection resource from the specified storage path Collectibles.CollectionStoragePath. If the reference is nil, it means the signer does not yet have a collection. If the signer does not have a collection, then it creates an empty collection by calling the createEmptyCollection() function. After creating the empty collection, it places the collection into the signer's account under the specified storage path Collectibles.CollectionStoragePath. Finally, it issues a capability to the stored collection and publishes it at Collectibles.CollectionPublicPath, so that other accounts can interact with the collection through its public interface.
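With the transaction file in place, you can already try it out from the command line. A minimal sketch, assuming the testnet-account alias that we configure later in flow.json (for purely local testing, the Flow emulator and its default account work just as well):

Shell
# Sign and send the create-collection transaction with the Flow CLI
flow transactions send ./cadence/transactions/Create_Collection.cdc \
  --network testnet --signer testnet-account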
Step 3: Add the Mint NFT Transaction Code JavaScript import NonFungibleToken from "../contracts/NonFungibleToken.cdc" import Collectibles from "../contracts/Collectibles.cdc" transaction(name:String, image:String){ let receiverCollectionRef: &{NonFungibleToken.CollectionPublic} prepare(signer:AuthAccount){ self.receiverCollectionRef = signer.borrow<&Collectibles.Collection>(from: Collectibles.CollectionStoragePath) ?? panic("could not borrow Collection reference") } execute{ let nft <- Collectibles.mintNFT(name:name, image:image) self.receiverCollectionRef.deposit(token: <-nft) } } Let's break down this code line by line: We first import the NonFungibleToken and Collectibles contract. transaction(name: String, image: String) This line defines a new transaction. It takes two arguments, name, and image, both of type String. These arguments are used to pass the name and image of the NFT being minted. let receiverCollectionRef: &{NonFungibleToken.CollectionPublic} This line declares a new variable receiverCollectionRef. It is a reference to a public collection of NFTs of type NonFungibleToken.CollectionPublic. This reference will be used to interact with the collection where we will deposit the newly minted NFT. prepare(signer: AuthAccount) This line starts the prepare block, which is executed before the transaction. It takes an argument signer of type AuthAccount. AuthAccount represents the account of the transaction's signer. It borrows a reference to the Collectibles.Collection from the signer's storage inside the prepare block. It uses the borrow function to access the reference to the collection and store it in the receiverCollectionRef variable. If the reference is not found (if the collection doesn't exist in the signer's storage, for example), it will throw the error message “could not borrow Collection reference.” The execute block contains the main execution logic for the transaction. The code inside this block will be executed after the prepare block has successfully completed. nft <- Collectibles.mintNFT(_name: name, image: image) Inside the execute block, this line calls the mintNFT function from the Collectibles contract with the provided name and image arguments. This function is expected to create a new NFT with the given name and image. The <- symbol indicates that the NFT is being received as an object that can be moved (a resource). self.receiverCollectionRef.deposit(token: <-nft) This line deposits the newly minted NFT into the specified collection. It uses the deposit function on the receiverCollectionRef to transfer ownership of the NFT from the transaction's executing account to the collection. The <- symbol here also indicates that the NFT is being moved as a resource during the deposit process. Query the Blockchain by Running a Script We use a script to view or read data from the blockchain. Scripts are free and don’t need signing. Follow the steps below to create a script in your project folder. Step 1: Create a File First, go to the project folder and open the cadence folder. Inside it, open the script folder and make a new file with the name view_nft.cdc. Step 2: View the NFT Script JavaScript import NonFungibleToken from "../contracts/NonFungibleToken.cdc" import Collectibles from "../contracts/Collectibles.cdc" pub fun main(user: Address, id: UInt64): &NonFungibleToken.NFT? { let collectionCap= getAccount(user).capabilities .get<&{Collectibles.CollectionPublic}>(/public/NFTCollection) ?? 
panic("This public capability does not exist.") let collectionRef = collectionCap.borrow()! return collectionRef.borrowNFT(id: id) } Let's break down this code line by line: First, we import the NonFungibleToken and Collectibles contract. pub fun main(acctAddress: Address, id: UInt64): &NonFungibleToken.NFT? This line defines the entry point of the script, which is a public function named main. The function takes two parameters: acctAddress: An Address type parameter representing the address of an account on the Flow blockchain. id: A UInt64 type parameter representing the unique identifier of the NFT within the collection. Then we use getCapability to fetch the Collectibles.Collection capability for the specified acctAddress. A capability is a reference to a resource that allows access to its functions and data. In this case, it is fetching the capability for the Collectibles.Collection resource type. Then, we borrow an NFT from the collectionRef using the borrowNFT function. The borrowNFT function takes the id parameter, which is the unique identifier of the NFT within the collection. The borrow function of a capability allows reading the resource data. Finally, we return the NFT from the function. Step 3: Testnet Deployment Now, it's time to deploy our smart contract to the Flow testnet. 1. Set up a Flow account Run the following command in the terminal to generate a Flow account: Shell flow keys generate Be sure to write down your public key and private key. Next, we’ll head over to the Flow Faucet, create a new address based on our keys, and fund our account with some test tokens. Complete the following steps to create your account: Paste in your public key in the specified input field. Keep the Signature and Hash Algorithms set to default. Complete the Captcha. Click on Create Account. After setting up an account, we receive a dialogue with our new Flow address containing 1,000 test Flow tokens. Copy the address so we can use it going forward. 2. Configure the project. Now, let’s configure our project. Initially, when we set up the project, it created a flow.json file. This is the configuration file for the Flow CLI and defines the configuration for actions that the Flow CLI can perform for you. Think of this as roughly equivalent to hardhat.config.js on Ethereum. Now open your code editor and copy and paste the below code into your flow.json file. JavaScript { "contracts": { "Collectibles": "./cadence/contracts/Collectibles.cdc", "NonFungibleToken": { "source": "./cadence/contracts/NonFungibleToken.cdc", "aliases": { "testnet": "0x631e88ae7f1d7c20" } } }, "networks": { "testnet": "access.devnet.nodes.onflow.org:9000" }, "accounts": { "testnet-account": { "address": "ENTER YOUR ADDRESS FROM FAUCET HERE", "key": "ENTER YOUR GENERATED PRIVATE KEY HERE" } }, "deployments": { "testnet": { "testnet-account": [ "Collectibles" ] } } } 3. Copy and paste Paste your generated private key at the place (key: “ENTER YOUR GENERATED PRIVATE KEY HERE”) in the code. 4. Execute Now execute the code on the testnet. Go to the terminal and run the following code: Shell flow project deploy --network testnet 5. Wait for confirmation After submitting the transaction, you'll receive a transaction ID. Wait for the transaction to be confirmed on the testnet, indicating that the smart contract has been successfully deployed. Check your deployed contract here. Check the full code on GitHub. Final Thoughts and Congratulations! Congratulations! 
You have now built a collectibles portal on the Flow blockchain and deployed it to the testnet. What’s next? Now you can work on building the frontend, which we will cover in part 2 of this series. Have a really great day!
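If you would like to exercise the deployed contract from the CLI before the frontend exists, the hedged sketch below mints an NFT and reads it back; the JSON argument encoding is one of several argument formats the Flow CLI accepts, and the address placeholder must be replaced with your own testnet account.

Shell
# Mint an NFT into your account's collection
flow transactions send ./cadence/transactions/mint_nft.cdc \
  --network testnet --signer testnet-account \
  --args-json '[{"type":"String","value":"First Collectible"},{"type":"String","value":"https://example.com/collectible.png"}]'

# Read it back with the view script
flow scripts execute ./cadence/scripts/view_nft.cdc \
  --network testnet \
  --args-json '[{"type":"Address","value":"0xYOUR_TESTNET_ADDRESS"},{"type":"UInt64","value":"1"}]'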
In the paradigm of zero trust architecture, Privileged Access Management (PAM) is emerging as a key component in a cybersecurity strategy designed to control and monitor privileged access within an organization. This article delves into the pivotal role of PAM in modern cybersecurity, exploring its principles, implementation strategies, and the evolving landscape of privileged access. What Is a Privileged User and a Privileged Account? A privileged user is someone who has been granted elevated permissions to access certain data, applications, or systems within an organization. These users are typically IT admins who require these privileges to perform their job duties, such as system administrators, database administrators, and network engineers. A privileged account refers to the actual set of login credentials that provides an elevated level of access. These accounts can perform actions that can have far-reaching implications within an IT environment. Examples of privileged accounts include: Interactive login accounts: These are standard accounts used for logging into systems and performing administrative tasks. Non-interactive accounts: These accounts don't interact directly with the user interface but are used for automated tasks like running batch jobs or scripts. Generic/shared/default accounts: Such as the "root" account in Unix systems or the "Administrator" account in Windows systems, these are often shared among multiple users and have significant privileges across systems. Service accounts: Used by applications or services to interact with the operating system or other applications, these accounts often have elevated privileges to perform specific tasks and are not tied to a personal user's credentials. Popular Data/Security Breaches: What’s the Common Link? The common link between popular data and security breaches is often the exploitation of privileged accounts. Whether the perpetrator is a script kiddie or a seasoned cybercriminal, gaining control of privileged accounts is typically a key objective. This is because privileged accounts have elevated access rights and permissions that allow wide-reaching control of IT systems, potentially allowing attackers to steal sensitive data, install malicious software, and create new accounts to maintain access for future exploitation. Anatomy of a Data Breach: Privileged Access Is the Key to the Kingdom In the journey of a security or data breach, it all starts with an initial breach point: an exploited vulnerability, a phishing email, or a compromised password. This serves as the entryway for threat actors. However, their ultimate target transcends this initial breach: privileged access. This access isn't just any key; it's the master key, unlocking access to critical systems and data. Imagine someone gaining control of a domain admin account — it's as if they've been given unrestricted access to explore every corner of an organization's digital domain. This stealthy movement and the exploitation of privileged accounts highlight the significant risks and underscore the importance of vigilant security measures in safeguarding an organization's digital assets. Cost of a Data Breach In 2023, businesses worldwide felt a significant financial hit from data breaches, with costs averaging $4.45 million. This trend highlights the increasing expenses linked to cybersecurity issues. The U.S. saw the highest costs at $9.48 million per breach, reflecting its complex digital and regulatory landscape. 
These figures emphasize the crucial need for strong cybersecurity investments to reduce the financial and operational impacts of data breaches. Integrating Privileged Access Management (PAM) solutions can substantially enhance cybersecurity defenses, minimizing the likelihood and impact of breaches (Source). Common Challenges With Privileged Identities and How a Pam Solution Can Prevent Cyberattacks Just-in-time admin access: In any organization, admins possess broad access to sensitive data, including financial, employee, and customer information, due to their role. This access makes privileged admin accounts a focal point for security breaches, whether through deliberate misuse or accidental exposure. Just-in-time admin access within the realm of Privileged Access Management (PAM) refers to granting privileged access on an as-needed basis. A PAM solution facilitates this by enabling root and admin-level access for a limited timeframe, significantly reducing the risk of a compromised account being used to infiltrate critical systems. Securing admin access further through multi-factor authentication and user behavior analytics enhances protection against unauthorized use. Compliance visibility: As organizations continuously integrate new IT devices to enable business operations, tracking, securing, and auditing privileged access becomes increasingly challenging. This complexity escalates with multiple vendors and contractors accessing these critical systems using personal devices, leading to substantial compliance costs. A PAM solution provides organizations with control over privileged accounts through continuous auto-discovery and reporting. Acting as a central repository for all infrastructure devices, it simplifies compliance and allows data owners to gain comprehensive visibility over privileged access across the network. Cyber risk with privileged identities: Cyberattacks often correlate directly with the misuse of privileged identities. Leading cybersecurity firms like Mandiant have linked 100% of data breaches to stolen credentials. These breaches typically involve the escalation from low-privileged accounts, such as those of sales representatives, to high-privileged accounts, like Windows or Unix administrators, in a phenomenon known as vertical privilege escalation. The risk is not limited to external hackers: disgruntled employees with admin access pose a significant threat. The increasing prevalence of security breaches via privileged identities underscores the importance of understanding who possesses critical access within an organization. A PAM solution addresses this by enabling frequent rotation of privileged account passwords each time they are checked out by a user. Integrating multi-factor authentication with PAM solutions can further minimize cyber risks, including those from social engineering and brute-force attacks. Stagnant/less complex passwords: Various factors can contribute to vulnerable or compromised passwords, including the lack of centralized password management, weak encryption across devices, the use of embedded and service accounts without password expiration, and the practice of using identical passwords across corporate and external sites. Furthermore, overly complex enterprise password policies may lead to insecure practices, such as writing passwords on sticky notes. A PAM solution effectively secures passwords in a vault and automates their rotation on endpoint devices, offering a robust defense against hacking tools like Mimikatz. 
It promotes secure access by allowing admins to use multi-factor authentication to connect to PAM and subsequently to devices without direct exposure to passwords, thus significantly reducing risk. Uncontrollable SSH keys: SSH keys, which utilize public-key cryptography for authentication, pose a challenge due to their perpetual validity and the ability to link multiple keys to a single account. Managing these keys is crucial, as their misuse can allow unauthorized root access to critical systems, bypassing security controls. A survey by Ponemon on SSH security vulnerabilities highlighted that three-quarters of enterprises lack security controls for SSH, over half have experienced SSH key-related compromises, and nearly half do not rotate or change SSH keys. Additionally, a significant number of organizations lack automated processes for SSH key policy enforcement and cannot detect new SSH keys, underscoring the ongoing vulnerability and the need for effective management solutions. Implementing RBAC (Role Based Access Control) in PAM: By assigning specific roles to users and linking these roles with appropriate access rights, RBAC ensures that individuals have only the access they need for their job tasks. This method adheres to the least privilege principle, effectively shrinking the attack surface by limiting high-level access. Such a controlled access strategy reduces the likelihood of cyber attackers exploiting privileged accounts. RBAC also aids in tightly managing access to critical systems and data, offering access strictly on a need-to-know basis. This targeted approach significantly lowers the risk of both internal and external threats, enhancing the security framework of an organization against potential cyber intrusions. Core Components of a PAM Solution Credential vault: This is a secure repository for storing and managing passwords, certificates, and keys. The vault typically includes functionality for automated password rotation and controlled access to credentials, enhancing security by preventing password misuse or theft. Access manager: This component is responsible for maintaining a centralized directory of users, groups, devices, and policies. It enables the administration of access rights, ensuring that only authorized individuals can access sensitive systems and data. Session management and monitoring: This provides the ability to monitor, record, and control active sessions involving privileged accounts. This also includes the capture of screen recordings and keystrokes for audits and reviews. Configuration management: Configuration Management within PAM maintains the system's health by managing integrations, updates, and security configurations, ensuring the PAM aligns with the broader IT policies and infrastructure. Key Considerations for Selecting the Right PAM Solution Integration capabilities: Look for a solution that seamlessly integrates with your existing IT infrastructure, including other IAM solutions, directories, databases, applications, and cloud services. Compliance requirements: Ensure the PAM solution aligns with your organization's regulatory requirements and industry standards, such as HIPAA, PCI DSS, SOX, etc. Security features: Look for solutions with robust security features such as privileged session management, password vaulting, multi-factor authentication (MFA), and granular access controls to ensure comprehensive protection of sensitive assets. 
Scalability: Evaluate whether the chosen deployment model (cloud or on-premise) can scale to accommodate your organization's growth, supporting an increasing number of privileged accounts, users, and devices while maintaining performance and security. High availability and disaster recovery: PAM can be a single point of failure. Look for features that ensure the PAM system remains available even in the face of an outage. This includes options for redundancy, failover, and backup capabilities to prevent downtime or data loss. Implementing a PAM Solution Implementation involves several stages, from initiation and design to deployment and continuous compliance, with an emphasis on stakeholder buy-in, policy development, and ongoing monitoring. Initiate Needs assessment: Evaluate the organization's current privileged access landscape, including existing controls and gaps. Project planning: Define the project's scope, objectives, and resources. Establish a cross-functional team with clear roles and responsibilities. Stakeholder buy-in: Secure commitment from management and key stakeholders by demonstrating the importance of PAM for security and compliance. Design and Develop Solution architecture: Design the PAM solution's infrastructure, considering integration with existing systems and future scalability. Policy definition: Develop clear policies for privileged account management, including credential storage, access controls, and monitoring. Configuration: Customize and configure the PAM software to fit organizational needs, including the development of any required integrations or custom workflows. Implement Deployment: Roll out the PAM solution in phases, starting with a pilot phase to test its effectiveness and make adjustments. Training and communication: Provide comprehensive training for users and IT staff and communicate changes organization-wide. Transition: Migrate privileged accounts to the PAM system, enforce new access policies, and decommission legacy practices. Continuous Compliance Monitoring and auditing: Use the PAM solution to continuously monitor privileged access and conduct regular audits for irregular activities or policy violations. Policy review and updating: Regularly review policies to ensure they remain effective and compliant with evolving regulations and business needs. Continuous improvement: Leverage feedback and insights gained from monitoring and audits to improve PAM practices and technologies. Considerations/Challenges While Implementing a PAM Solution Developing a clear and concise business case: Articulate the benefits and necessities of a PAM solution to gain buy-in from stakeholders. This should outline the risks mitigated and the value added in terms of security and compliance. Resistance to change: Admins and users may view the PAM system as an additional, unnecessary burden. Overcoming this requires change management strategies, training, and clear communication on the importance of the PAM system. Password vault as a single point of failure: The centralized nature of a password vault means it could become a single point of failure if not properly secured and managed. Implementing robust security measures and disaster recovery plans is essential. Load-balancing and clustering: To ensure high availability and scalability, the PAM system should be designed with load-balancing and clustering capabilities, which can add complexity to the implementation. 
Maintaining an up-to-date CMDB (Configuration Management Database): An accurate CMDB is crucial for the PAM solution to manage resources effectively. Risk-based approach to implementation: Prioritize the deployment of the PAM solution based on a risk assessment. Identify and protect the "crown jewels" of the organization first, ensuring that the most critical assets have the strongest controls in place. Final Thoughts Privileged Access Management is integral to safeguarding organizations against cyber threats by effectively managing and controlling privileged access. Implementation requires a comprehensive approach, addressing challenges while emphasizing stakeholder buy-in and continuous improvement to uphold robust cybersecurity measures. DISCLAIMER: The opinions and viewpoints are solely mine in this article and do not reflect my employer's official position or views. The information is provided "as is" without any representations or warranties, express or implied. Readers are encouraged to consult with professional advisors for advice concerning specific matters before making any decision. The use of information contained in this article is at the reader's own risk.
I started research for an article on how to add a honeytrap to a GitHub repo. The idea behind such a honeypot is to plant a weakness that a hacker will follow through on, revealing his/her presence in the process. My plan was to place a GitHub personal access token in an Ansible vault protected by a weak password. Should an attacker crack the password and use the token to clone the private repository, a webhook would have fired and mailed a notification that the honeypot repo had been cloned and the password cracked. Unfortunately, GitHub does not seem to allow webhooks to be triggered after cloning, as it does for some of its higher-level actions. This set me thinking that platforms as standalone systems are not designed with Dev(Sec)Ops integration in mind. DevOps engineers have to bite the bullet and always find ways to secure pipelines end-to-end. I, therefore, instead decided to investigate how to prevent code theft using tokens or private keys gained by nefarious means.

Prevention Is Better Than Detection

It is not best practice to keep secret material on hard drives in the belief that root-only access is sufficient security. Any system administrator or hacker who is elevated to root can view the secret in the open. Secrets should, rather, be kept inside Hardware Security Modules (HSMs) or a secret manager, at the very least. Furthermore, tokens and private keys should never be passed in as command line arguments since they might be written to a log file.

A way to solve this problem is to make use of a super-secret master key to initiate proceedings and finalize using short-lived lesser keys. This is similar to the problem of sharing the first key in applied cryptography. Once the first key has been agreed upon, successive transactions can be secured using session keys. It goes without saying that the first key has to be stored in a Hardware Security Module, and all operations against it have to happen inside the HSM. I decided to try out something similar for when Ansible clones private Git repositories. Although I will illustrate this using GitHub, I am pretty sure something similar can be set up for other Git platforms as well.

First Key

GitHub personal access tokens can be used to perform a wide range of actions on your GitHub account and its repositories. A token can authenticate and authorize requests from both the command line and the GitHub API. It clearly can serve as the first key. Personal access tokens are created by clicking your avatar in the top right and selecting Settings. A left nav panel should appear, from where you select Developer settings. The menu for personal access tokens will display, where you can create the token. I created a classic token and gave it the following scopes/permissions: repo, admin:public_key, user, and admin:gpg_key.

Take care to store the token in a reputable secret manager from where it can be copied and pasted when the Ansible play prompts for it at the start of a run. This secret manager should clear the copy buffer after a few seconds to prevent attacks utilizing attention diversion.

vars_prompt:
  - name: github_token
    prompt: "Enter your github personal access token?"
    private: true

Establishing the Session

GitHub deployment keys give access to private repositories. They can be created by an API call or from the repo's top menu by clicking on Settings. With the personal access token as the first key, a deployment key can finish the operation as the session key.
Specifically, Ansible authenticates itself using the token, creates the deployment key, authorizes the clone, and deletes it immediately afterward. The code from my previous post relied on adding Git URLs that contain the tokens to the Ansible vault. This has now been improved to use temporary keys as envisioned in this post. An Ansible role provided by Asif Mahmud has been amended for this as can be seen in the usual GitHub repo. The critical snippets are:

- name: Add SSH public key to GitHub account
  ansible.builtin.uri:
    url: "https://api.{{ git_server_fqdn }}/repos/{{ github_account_id }}/{{ repo }}/keys"
    validate_certs: yes
    method: POST
    force_basic_auth: true
    body:
      title: "{{ key_title }}"
      key: "{{ key_content.stdout }}"
      read_only: true
    body_format: json
    headers:
      Accept: application/vnd.github+json
      X-GitHub-Api-Version: 2022-11-28
      Authorization: "Bearer {{ github_access_token }}"
    status_code:
      - 201
      - 422
  register: create_result

The GitHub API is used to add the deploy key to the private repository. Note the use of the access token typed in at the start of the play to authenticate and authorize the request.

- name: Clone the repository
  shell: |
    GIT_SSH_COMMAND="ssh -i {{ key_path }} -v -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null" {{ git_executable }} clone git@{{ git_server_fqdn }}:{{ github_account_id }}/{{ repo }}.git {{ clone_dest }}

- name: Switch branch
  shell: "{{ git_executable }} checkout {{ branch }}"
  args:
    chdir: "{{ clone_dest }}"

The repo is cloned, followed by a switch to the required branch.

- name: Delete SSH public key
  ansible.builtin.uri:
    url: "https://api.{{ git_server_fqdn }}/repos/{{ github_account_id }}/{{ repo }}/keys/{{ create_result.json.id }}"
    validate_certs: yes
    method: DELETE
    force_basic_auth: true
    headers:
      Accept: application/vnd.github+json
      X-GitHub-Api-Version: 2022-11-28
      Authorization: "Bearer {{ github_access_token }}"
    status_code:
      - 204

Deletion of the deployment key happens directly after the clone and switch, again via the API.

Conclusion

The short life of the deployment key enhances the security of the DevOps pipeline tremendously. Only the token has to be kept secured at all times, as is the case for any first key. Ideally, you should integrate Ansible with a compatible HSM platform. I thank Asif Mahmud, whose code I amended to illustrate the concept of using temporary session keys when cloning private Git repositories.
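As a usage note, a run of the play might look like the sketch below. The playbook name and key path are illustrative, and the ssh-keygen step simply stands in for however your role produces the key pair whose public half is registered above.

Shell
# Generate a throwaway deployment key pair for the run
ssh-keygen -t ed25519 -f ./tmp_deploy_key -N "" -C "temporary deploy key"

# Run the play; it prompts for the GitHub personal access token at startup
ansible-playbook clone_private_repo.yml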
What Is Patch Management? Patch management is a proactive approach to mitigate already-identified security gaps in software. Most of the time, these patches are provided by third-party vendors to proactively close the security gaps and secure the platform, for example. RedHat provides security advisories and patches for various RedHat products such as RHEL, OpenShift, OpenStack, etc. Microsoft provides patches in the form of updates for Windows OS. These patches include updates to third-party libraries, modules, packages, or utilities. Patches are prioritized and, in most organizations, patching of systems is done at a specific cadence and handled through a change control process. These patches are deployed through lower environments first to understand the impact and then applied in higher environments, such as production. Various tools such as Ansible and Puppet can handle patch management seamlessly for enterprise infrastructures. These tools can automate the patch management process, ensuring that security patches and updates are promptly applied to minimize application disruptions and security risks. Coordination for patching and testing with various stakeholders using infrastructure is a big deal to minimize interruptions. What Is a Container? A container is the smallest unit of software that runs in the container platform. Unlike traditional software that, in most cases, includes application-specific components such as application files, executables, or binaries, containers include the operating system required to run the application and all other dependencies for the application. Containers include everything needed to run the application; hence, they are self-contained and provide greater isolation. With all necessary components packaged together, containers provide inherent security and control, but at the same time, are more vulnerable to threats. Containers are created using a container image, and a container image is created using a Dockerfile/Containerfile that includes instructions for building an image. Most of the container images use open-source components. Therefore, organizations have to make efforts to design and develop recommended methods to secure containers and container platforms. The traditional security strategies and tools would not work for securing containers. DZone’ previously covered how to health check Docker containers. For infrastructure using physical machines or virtual machines for hosting applications, the operations team would SSH to servers (manually or with automation) and then upgrade the system to the latest version or latest patch on a specific cadence. If the application team needs to make any changes such as updating configurations or libraries, they would do the same thing by logging in to the server and making changes. If you know what this means, in various cases, the servers are configured for running specific applications. In this case, the server becomes a pet that needs to be cared for as it creates a dependency for the application, and keeping such servers updated with the latest patches sometimes becomes challenging due to dependency issues. If the server is shared with multiple applications, then updating or patching such servers consumes a lot of effort from everyone involved to make sure applications run smoothly post-upgrade. However, containers are meant to be immutable once created and expected to be short-lived. 
As mentioned earlier, containers are created from container images, so it is really the container image that needs to be patched. Every image contains one or more file system layers, which are built based on the instructions in the Containerfile/Dockerfile. Let's delve further into how patch management and vulnerability management are done for containers. What Is Vulnerability Management? While patch management is proactive, vulnerability management is a reactive approach to managing and maintaining the security posture within an organization. Platforms and systems are scanned in real time, on specific schedules, or on an ad hoc basis to identify common vulnerabilities, known as CVEs (Common Vulnerabilities and Exposures). The tools used to discover CVEs rely on various vulnerability databases, such as the U.S. National Vulnerability Database (NVD) and the CERT/CC Vulnerability Notes Database. Most vendors that provide scanning tools also maintain their own databases to match CVEs and score them based on impact. Every CVE gets a unique identifier (e.g., CVE-2023-52136) along with a severity score (CVSS) and a resolution, if one exists. Once CVEs are discovered, they are categorized by severity and prioritized based on impact. Not every CVE has a resolution available. Therefore, organizations must continuously monitor such CVEs to understand their impact and implement measures to mitigate them. This could involve steps such as temporarily removing the system from the network or shutting it down until a suitable solution is found. High-severity and critical vulnerabilities should be remediated so that they can no longer be exploited. As is evident, patch management and vulnerability management are intrinsically linked: their shared objective is to safeguard an organization's infrastructure and data from cyber threats. Container Security Container security entails safeguarding containerized workloads and the broader ecosystem through a mix of security tools and technologies, and patch management and vulnerability management are integral parts of this process. The container ecosystem is often referred to as the container supply chain, which includes various components. When we talk about securing containers, it is essentially about monitoring and securing the components listed below. Containers A container is a runtime instance of a container image. It uses the instructions provided in the container image to run itself and has lifecycle stages such as create, start, run, stop, and delete. It is the smallest unit that exists on the container platform; you can log in to it, execute commands, monitor it, and so on. Container Orchestration Platform Orchestration platforms provide capabilities such as high availability, scalability, self-healing, logging, monitoring, and visibility for container workloads. Container Registry A container registry includes one or more repositories where container images are stored, version-controlled, and made available to container platforms. Container Images A container image is sometimes called a build-time instance of a container. It is a read-only template or artifact that includes everything needed to start and run the container (e.g., a minimal operating system, libraries, packages, software) along with instructions on how to run and configure the container.
Development Workspaces Development workspaces reside on developer workstations and are used for writing code, packaging applications, and creating and testing containers. Container Images: The Most Dynamic Component Considering patch management and vulnerability management for containers, let's focus on container images, the most dynamic component of the supply chain. In the container management workflow, most exploits are encountered due to security gaps in container images. Let's categorize the container images used in an organization based on their hierarchy. 1. Base Images This is the first level in the image hierarchy. As the name indicates, base images are used as parent images for most of the custom images built within the organization. These images are pulled from external public and private image registries such as DockerHub, the RedHat Ecosystem Catalog, and the IBM Cloud. 2. Enterprise Images Custom images are created and built from base images and include enterprise-specific components, standard packages, or structures as part of enterprise security and governance. These images are modified to meet the organization's standards and are published in private container registries for consumption by various application teams. Each image has an assigned owner responsible for managing the image's lifecycle. 3. Application Images These images are built using enterprise custom images as a base, with applications added on top. Application images are then deployed as containers to container platforms. 4. Builder Images These images are primarily used in the CI/CD pipeline for compiling, building, and deploying application images. They are based on enterprise custom images and include the software required to build applications, create container images, perform testing, and finally deploy images as part of the pipeline. 5. COTS Images These are vendor-provided images for vendor products, also called commercial off-the-shelf (COTS) products, and their lifecycle is owned by the vendors. For simplicity, the image hierarchy is represented in the diagram below. Now that we understand the components of the container supply chain and the container image hierarchy, let's look at how patching and vulnerability management are done for containers. Patching Container Images Most base images are provided by community members or vendors. Similar to traditional patches provided by vendors, image owners proactively patch these base images to mitigate security issues and regularly make new versions available in the container registries. Let's take the example of the Python 3.11 image from RedHat. RedHat patches this image regularly and also provides a Health Index based on scan results. RedHat proactively fixes vulnerabilities and publishes new versions after testing. The image below indicates that the Python image is patched every 2-3 months, and the corresponding CVEs are published by RedHat. This patching involves modifying the Containerfile to update the required packages and fix vulnerabilities, then building and publishing a new version (tag) of the image to the registry. Let's move to the second level in the image hierarchy: enterprise custom images. These images are created by organizations using base images (e.g., Python 3.11) to add enterprise-specific components to the image and harden it further for use within the organization.
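As an illustration, a minimal sketch of such an enterprise image build might look like the following. The RedHat base image path and tag are assumptions based on the example above, and the corporate CA bundle, label, and registry names are placeholders:

# The build context is assumed to contain corp-ca.crt, the organization's CA bundle.
cat > Containerfile <<'EOF'
FROM registry.access.redhat.com/ubi9/python-311:1-34

# Apply the latest package updates on top of the vendor base image (assumes a dnf-based image).
USER 0
RUN dnf -y update && dnf clean all

# Add enterprise-specific components, e.g., the corporate CA certificate.
COPY corp-ca.crt /etc/pki/ca-trust/source/anchors/
RUN update-ca-trust

# Record an owner so the image lifecycle has a clear responsible party.
LABEL com.example.image-owner="platform-team@example.com"

# Drop back to the image's unprivileged user.
USER 1001
EOF

# Build the hardened image and publish it to the private enterprise registry.
docker build -f Containerfile -t registry.example.com/enterprise/python-311:1.0.0 .
docker push registry.example.com/enterprise/python-311:1.0.0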
If the base image changes in the external registry, the enterprise custom image should be updated to use the newer version of the base image, which creates a new version of the enterprise custom image from an updated Containerfile. The same workflow should then be followed to update any downstream images, such as the application and builder images built from the enterprise custom images. This way, the entire chain of images is patched. Throughout this process, patching is done by updating the Containerfile and publishing new images to the image registry. As for COTS images, the same process is followed by the vendor, and consumers of those images have to make sure the new versions are being used in the organization. Vulnerability Management for Containers Patch management is only half of the process of securing containers. Container images also have to be scanned regularly, or at a specific cadence, to identify newly discovered CVEs within them. Various scanning tools on the market scan container images as well as platforms to identify security gaps and provide visibility into these issues. These tools identify problems such as images running with root privileges, world-writable directories, exposed secrets, exposed ports, vulnerable libraries, and many more. The resulting vulnerability reports help organizations understand the security posture of the images being used as well as the containers running on the platform, and they provide enough information to address the issues. Some of these tools can also define policies and controls that block images from running if they violate the policies defined by the organization; they can even stop running containers if that is what the organization decides to implement. Mitigating such vulnerabilities involves the same steps described in the patch management section: updating the Containerfile to create a new image, rescanning the image to make sure the reported vulnerabilities no longer exist, testing the image, and publishing it to the image registry. Depending on where the vulnerability exists in the hierarchy, the respective image and all downstream images need to be updated. Let's look at an example. Below is the scan report for the python-3.11:1-34 image. It reports two important CVEs against three packages. These two CVEs will also be reported in all downstream images built from the python-3.11:1-34 image. Drilling into CVE-2023-38545 provides more information, including the action required to remediate it: based on the operating system within the corresponding image, the curl package should be upgraded to resolve the issue. From an organizational standpoint, addressing this vulnerability means creating a new Dockerfile or Containerfile containing instructions to upgrade the curl package and generating a new image with a unique tag. Once the new image is created, it can be used in place of the previously affected image. As per the hierarchy shown earlier, all downstream images should then be rebuilt on the new image in order to fix the reported CVE across all images. All images, including COTS images, should be scanned regularly. For COTS images, the organization should contact the vendor (the image owner) to fix critical vulnerabilities. Shift Left Container Security Image scanning should be part of every stage in the supply chain pipeline.
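For example, with an open-source scanner such as Trivy (one possible tool among many; the image names continue the placeholder examples above), a pipeline stage can fail the build when HIGH or CRITICAL findings are present and can rescan the rebuilt image before it is pushed:

# Scan the candidate image and fail this stage on HIGH/CRITICAL findings.
trivy image --severity HIGH,CRITICAL --exit-code 1 registry.example.com/enterprise/python-311:1.0.0

# After updating the Containerfile (e.g., upgrading the curl package) and
# rebuilding with a new tag, rescan to confirm the CVE is no longer reported,
# then publish the fixed image.
docker build -f Containerfile -t registry.example.com/enterprise/python-311:1.0.1 .
trivy image --severity HIGH,CRITICAL --exit-code 1 registry.example.com/enterprise/python-311:1.0.1
docker push registry.example.com/enterprise/python-311:1.0.1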
Detecting and addressing security issues early is crucial to avoid accumulating technical debt as we progress through the supply chain. The sooner we identify and rectify security vulnerabilities, the less disruptive they are to operations and the less work is required to fix them later. Local Scanning To build container images locally, developers need tools such as Docker or Podman installed on their workstations. Alongside these tools, scanning tools should be made available so that developers can scan images pulled from external registries and determine whether those images are safe to use. Once they build application images, developers should also be able to scan those images locally before moving to the next stage in the pipeline. Analyzing and fixing vulnerabilities at the source is a great way to minimize security risks later in the lifecycle. Most scanning tools provide a command-line interface or IDE plugins to make local scanning easy. Some organizations create image governance teams that pull, scan, and approve images from external registries before allowing them to be used within the organization. These teams take ownership of base images and manage their lifecycle, communicate image updates to all stakeholders, and monitor new images being used by downstream consumers. This is a great way to maintain control over which images are used within an organization. Build Time Scanning Integrate image scanning tools into the CI/CD pipeline during the image build stage to make sure every image gets scanned. Scanning an image as soon as it is built, and using the result to decide whether it can be published to the image registry, is a good approach to allowing only safe images into the registry. Additional control gates can be introduced before an image is used in production by enforcing policies specific to production images. Image Registry Scanning Build-time scanning is essentially on-demand scanning of images. However, given that new vulnerabilities are constantly being reported and added to the Common Vulnerabilities and Exposures (CVE) database, images stored in the registry need to be scanned at regular intervals. Images with critical vulnerabilities have to be reported to the image owners so they can take action. Runtime Container Scanning This is real-time scanning of running containers on the platform to assess their security posture. Along with the analysis done for images, a runtime scan also detects additional issues, such as a container running with root privileges, which ports it is listening on, whether it is connected to the internet, and any runaway process being executed. Depending on its capabilities, the scanning tool can provide full visibility into the security of the entire container platform, including the hosts on which the platform runs. The tool may also enforce policies, such as blocking specific containers or images from running, identifying specific CVEs, and taking action. Note that this is the last stage in the container supply chain; fixing issues at this stage is costlier than at any other stage. Challenges With Container Security From a process standpoint, it looks straightforward to update base images with new versions and then update all downstream images. However, it comes with various challenges.
Below are some of the common challenges you would encounter as you start looking into the process of patching and vulnerability management for containers:
- Identifying updates to any of the parent/base images in the hierarchy
- Identifying the image hierarchy and the impacted images in the supply chain
- Making sure all downstream images are updated when a new parent image is made available
- Defining ownership of images and identifying image owners
- Communicating across various groups within the organization to ensure controls are being maintained
- Building a list of trusted images to be used within the organization and managing their lifecycle
- Managing vendor images, given the lack of control over them
- Managing release timelines while also securing the pipeline
- Defining controls across the enterprise with respect to audit, security, and governance
- Defining exception processes to meet business needs
- Selecting the right scanning tool for the organization and integrating it with the supply chain
- Providing visibility of vulnerabilities across the organization, including delivering scan results to the respective stakeholders after images are scanned

Patch Management and Containers Summarized

This article has discussed how important it is to keep container environments secure, especially through patch management and vulnerability management. Containers are self-contained software units that are useful but need dedicated security attention. Patch management means keeping everything up to date, from base images through enterprise, application, and builder images. At the same time, vulnerability management involves regularly checking for potential security issues and fixing them, for example by updating the Containerfile and creating new images. The idea of shifting left suggests including security checks at every step, from creating to running containers. Despite the benefits, there are challenges, such as communicating well across teams and handling images from external sources. This highlights the need for careful control and ongoing attention to keep organizations safe from cyber threats throughout the container lifecycle.
Data breaches, system failures, bugs, and website defacement can seriously harm a company's reputation and profits. Typically, companies realize the importance of auditing their infrastructure, evaluating established interaction patterns, and assessing the business logic of their services only after developing security processes or facing urgent challenges. This maturity often stems from the necessity to ensure product or service security and to meet regulatory requirements. One effective method for conducting an information security audit is through penetration testing (pen test). Companies can either develop this expertise internally or choose a skilled and trustworthy contractor to perform the tests. The contractor would conduct thorough testing and provide detailed penetration reports, complete with recommendations for safeguarding corporate data. The latter option, hiring a skilled contractor for penetration testing, is more frequently chosen, particularly by small and medium-sized businesses (SMBs), as it offers considerable savings in both time and money. The service provider outlines all stages of the process, develops a pen testing strategy, and suggests ways to eliminate threats. This approach ensures transparency, with a defined scope for testing, clear results, and compliance with both regulatory and business requirements. What Is Penetration Testing? Penetration testing broadly involves evaluating the security of information systems by mimicking the tactics of an actual attacker. However, it is not just about finding vulnerabilities and security gaps. It also includes a thorough examination of the business logic behind services. This means manually analyzing financial transactions and the flow of goods, scrutinizing mobile applications, web forms, etc. Sometimes, it is not the security perimeter that poses the risk but rather the business logic itself. This can inadvertently provide opportunities for an attacker or fraudster with legitimate access to the company's systems to siphon off funds or cause harm in various other ways. Penetration Testing Methodologies Let's now explore the diverse methodologies of penetration testing: Black Box Method In a black box testing method, the tester has little to no prior knowledge about the target system. They may only have basic information like URLs, IP addresses, or a list of systems and services. This method is primarily used for auditing perimeter security and externally accessible web services, where the tester simulates an external attack with limited initial data. Gray Box Method Here, the tester has some knowledge about the system they are testing but lacks admin rights or detailed operational patterns. This methodology is often applied to audit open banking services, mobile applications, and internal infrastructure. The penetration tester operates with a regular user's credentials, requiring them to independently analyze the business logic, conduct reverse engineering, attempt to escalate their privileges, and potentially breach more secure segments like processing centers, databases, or payment services. White Box Method In the white box approach, the tester has complete knowledge of the system, including source code, architecture diagrams, and administrative privileges. 
This method goes beyond just demonstrating hacking skills; it is more about identifying existing defects in software products or business services, understanding the implications of improper product use, exploring potential action vectors that could lead to malfunctions, and pinpointing process shortcomings, such as inadequate controls or regulatory non-compliance. A unique aspect of pen tests involves social engineering, where testers try to trick company employees into revealing critical data, assessing their awareness of information security. This may include crafting malicious QR codes as part of social engineering tactics to evaluate employee susceptibility to phishing. Alongside this, advanced AI language tools are often employed to create convincing phishing messages that are challenging for even security professionals to detect. Additionally, the contractor might provide services like controlled DDoS attacks (stress testing) or simulated spam attacks. How To Implement Penetration Tests Implementing penetration tests begins with defining specific objectives and the scope of the test, which includes determining the systems, networks, or apps to be examined. Depending on these objectives, a suitable testing methodology is chosen. The next step is selecting the testing team, which can either be an internal group or external experts. Once the testing starts, the team simulates various attacks to identify system vulnerabilities, covering potential weaknesses in software, hardware, and human factors. After the test, analyzing the results is critical to understanding the vulnerabilities and their potential impacts. A Non-Disclosure Agreement A non-disclosure agreement (NDA) is signed with the contractor during a penetration test to ensure confidentiality. In some cases, a contrasting agreement, known as a "disclosure agreement," is also executed. This agreement permits the legitimate disclosure of discovered bugs or zero-day vulnerabilities, allowing for transparent communication of critical security findings under specific conditions. Pen Test Frequency and Duration In terms of frequency, it is recommended to run penetration testing after every noticeable change in the infrastructure. How often these changes occur depends on your business processes. Usually, full-fledged pen tests are done every six months or once a year - but agile businesses should consider running continuous pen testing if they are deploying at a faster pace. The rest of the time, after each minor configuration change, you can use scanners. Scans are cheaper and reveal basic problems. On average, a pen test lasts a month, sometimes longer. If testing lasts for several months, it is effectively red teaming. Bug Bounty One of the methods for carrying out a penetration test is through a bug bounty program. This approach offers extensive coverage as numerous specialists attempt to uncover vulnerabilities in the company's services and products. A key benefit of this method is that it is cost-free until a threat is identified. However, there are drawbacks. A specialist might only report a vulnerability to the extent needed to claim a reward without delving deeper into the analysis. Additionally, there is a risk of vulnerabilities being disclosed before the company can address them, or even that specialists may sell the discovered vulnerabilities on the black market if the offered reward is deemed insufficient. Red Teaming For a large or rapidly expanding operation, you may wish to consider a Red Team Assessment.
This approach stands out for its complexity, thoroughness, and element of surprise. In such assessments, your information security specialists are kept in the dark about when, where, and on which systems the test attacks will occur. They will not know which logs to monitor or what precisely to look out for, as the testing team will endeavor to conceal their activities, just as an actual attacker would. Why a Pen Test May Fail Potential downsides of a pen test can include too much interference from the client, restrictions on specific testing actions (ostensibly to prevent damage), and limiting the scope to a very narrow range of systems for evaluation. It is crucial to understand that even the most diligent contractor might not uncover critical or high-level vulnerabilities. However, this does not necessarily mean they have underperformed. Often, it may be the customer who has set conditions for the pen test that make it extremely challenging, if not impossible, to identify any vulnerabilities. Penetration testing is, by nature, a creative process. When a customer restricts the scope of work or the tools available to the contractor, they may inadvertently hinder the effectiveness of the test. This can lead to receiving a report that does not accurately reflect the actual state of their security, wasting both time and money on the service. How Not To Run Pen Tests Breach and attack simulation (BAS) tools, which automate the testing and modeling of attacks, together with vulnerability scanners, are tools some might consider sufficient for pen testing. However, this is not entirely accurate. Not all business services can be translated into a machine-readable format, and the verification of business logic has its limitations. Artificial intelligence, while helpful, still falls short of the intelligence and creativity of a human specialist. Therefore, while BAS and scanners are valuable for automating routine checks, they should be integrated as part of a comprehensive penetration testing process rather than being relied upon exclusively. Pen Testing Stages From the perspective of the attacking team, penetration testing typically involves these stages:
- Planning and reconnaissance: Define the test scope and goals and gather intelligence on the target system or network to identify vulnerabilities.
- Scanning: Use static analysis (inspecting the code) and dynamic analysis (observing the running code) to understand how the target reacts to intrusion attempts.
- Gaining access: Exploit vulnerabilities using attacks like SQL injection or cross-site scripting (XSS) to understand the potential damage.
- Maintaining access: Test whether the vulnerability allows for prolonged system access, mimicking persistent threats that aim to steal sensitive data.
- Analysis: Compile findings into a report detailing exploited vulnerabilities, accessed data, time spent undetected in the system, and security recommendations.
How To Choose a Reliable Penetration Testing Provider When selecting a provider for penetration testing services, it is important to establish a level of trust with the contractor.
Key factors to consider include:
- The contractor's overall experience and history in providing these services
- Achievements and awards received by specific individuals, teams, or projects within the contractor's organization; recent involvement in CREST is also a notable indicator
- Certifications held by the contractor's team members, as well as licenses for conducting such activities
- Customer testimonials and recommendations, which may also include anonymous feedback
- The contractor's expertise in particular audit areas, with examples of involvement in complex projects, such as those with high-tech companies or process control systems
- The possibility of arranging small-scale test tasks, particularly if the contractor is relatively unknown in the market

The availability of qualified penetration testing specialists is limited, so it is crucial to prioritize companies for whom pen testing is a primary service. These companies should have a dedicated team of qualified specialists and a separate project manager to oversee pen tests. Opting for a non-specialized company often leads to outsourcing, resulting in unpredictable outcomes. If you consistently use the same pen test provider over the years, especially if your infrastructure remains static or undergoes minimal changes, there is a risk that the contractor's specialists might become complacent or overlook certain aspects. To maintain a high level of scrutiny and fresh perspectives, it is advisable to periodically rotate between different contractors.

Best Penetration Testing Services

1. BreachLock
BreachLock's pen testing service offers human-verified results, DevOps fix guidance, robust client support, and a secure portal for retests. It also provides third-party security certification and thorough, compliance-ready reports.
Benefits:
- Human-verified results with in-depth fix guidance
- Retest-capable client portal, adding service value
- Delivers third-party security certification and detailed reports for compliance
- Strong client support during and post-testing
Drawbacks:
- Somewhat unclear documentation that requires expertise in the field
Clients may prefer BreachLock for its blend of human and tech solutions and its focus on detailed, compliance-ready reports.

2. SecureWorks
SecureWorks' penetration testing service is recognized for its comprehensive offerings and high-quality services, which have earned it a strong reputation in the field. They offer personalized solutions and tailor their services to industry-specific standards. While the cost is on the higher side, it is justified by their in-depth expertise and the overall value provided.
Benefits:
- Comprehensive service offerings with strong expertise
- Services are well-tailored for large enterprises
- Focus on long-term regulatory compliance and personalized solutions
- Recognized for high-quality services and strong industry reputation
Drawbacks:
- More expensive compared to some lower-cost options
Clients seeking depth in security expertise and comprehensive, enterprise-level service might find SecureWorks a preferable option, especially for long-term, strategic IT security planning for evolving infrastructure.

3. CrowdStrike
CrowdStrike's penetration testing service offers testing of various IT environment components using real-world threat actor tools, derived from CrowdStrike Threat Intelligence. This approach aims to exploit vulnerabilities to assess the risk and impact on an organization.
Benefits:
- Utilizes real-world threat actor tools for effective vulnerability assessment
- Focuses on testing different IT environment components comprehensively
Drawbacks:
- Focus on larger enterprises
Clients might prefer CrowdStrike for its use of advanced threat intelligence tools and comprehensive testing of diverse IT components, suitable for organizations seeking detailed risk and impact analysis.

Conclusion

Security analysts predict a rise in the demand for penetration testing services, driven by the rapid digitalization of business operations and growth in telecommunications, online banking, and social and government services. As new information technologies are adopted, businesses and institutions increasingly focus on identifying security vulnerabilities to prevent hacks and comply with regulatory requirements.
In the ever-evolving landscape of digital innovation, the integrity of software supply chains has become a pivotal cornerstone for organizational security. As businesses increasingly rely on a complex web of developers, third-party vendors, and cloud-based services to build and maintain their software infrastructure, the risk of malicious intrusions and the potential for compromise multiply accordingly. Software supply chain security, therefore, is not just about protecting code — it's about safeguarding the lifeblood of a modern enterprise. This article seeks to unravel the complexities of supply chain security, presenting a clear and detailed exposition of its significance and vulnerabilities. It aims to arm readers with a robust checklist of security measures, ensuring that industry leaders can fortify their defenses against the insidious threats that lie in wait within the shadows of their software supply chain ecosystems. What Is Supply Chain Security? Supply chain security in the context of software refers to the efforts and measures taken to protect the integrity, reliability, and continuity of the software supply chain from design to delivery. It encompasses the strategies and controls implemented to safeguard every aspect of the software development and deployment process. This includes securing the code from unauthorized changes, protecting the development and operational environments from infiltration, ensuring the authenticity of third-party components, and maintaining the security of software during its transit through the supply chain. In today's digital landscape, the relevance of software supply chain security cannot be overstated. As organizations increasingly adopt cloud services, integrate open-source software, and engage with numerous vendors, the attack surface for potential cybersecurity threats widens considerably. Each entity or product in the supply chain potentially introduces risk, and a single vulnerability can be exploited to cause widespread damage. This is particularly crucial as the consequences of a breach can be catastrophic, not only in terms of financial loss but also in damage to customer trust and brand reputation. Moreover, regulatory compliance requires strict adherence to security protocols, making supply chain security an essential element of legal and ethical business operations in the digital age. Supply Chain Security Threats Software supply chains are fraught with various threats and risks that can arise at any point, from development to deployment. These threats can be broadly categorized into several types, including but not limited to: Compromised software components: This includes third-party libraries or open-source components that may contain vulnerabilities or malicious code. Code tampering: Unauthorized changes to the software code, which can occur during development or when code is in transit between supply chain entities Insider threats: Risks posed by individuals within the organization or supply chain who may have malicious intent or inadvertently compromise security through negligence Update mechanism compromise: Attackers may hijack update processes to distribute malware. Service providers and vendor risks: Security breaches at third-party vendors can cascade down the supply chain, affecting all who rely on them. There have been several notable supply chain attacks that illustrate the potential impact of these threats. 
For instance, the SolarWinds Orion attack, which came to light in late 2020, involved the compromise of the software update mechanism, leading to the distribution of malicious code to thousands of organizations. Another significant event was the Heartbleed bug, a serious vulnerability in the OpenSSL cryptography library, which is widely used to secure communication on the Internet. The impact of these threats on organizations can be profound and multifaceted. They can lead to the exposure of sensitive data, financial loss, operational disruption, and erosion of customer trust. An estimated 30,000 organizations were vulnerable to the SolarWinds Orion attack, and its cost has been estimated at $90 million. The reputational damage to an organization from a supply chain attack can be long-lasting and can lead to legal and regulatory consequences. Moreover, the interconnected nature of today's digital ecosystems means that a single breach can have a domino effect, impacting numerous entities connected to the compromised node in the supply chain. Thus, understanding and mitigating supply chain security threats is not just a technical necessity but a business imperative. Supply Chain Security Best Practices To fortify the software supply chain against the myriad of threats it faces, organizations must adhere to a set of essential best practices. These practices act as a framework for establishing a secure supply chain environment. Firstly, it is critical to conduct thorough due diligence on all third-party vendors and partners. This means evaluating their security policies, practices, and track records. It is not enough to assume security; it must be verified. Organizations should also employ the principle of least privilege, ensuring that access to systems and information is strictly controlled and limited to what is necessary for specific roles and tasks. Secondly, risk assessment and management must be continuous and evolving. This involves identifying, analyzing, and evaluating risks, followed by implementing strategies to mitigate them. Regularly updating risk management strategies is crucial as new threats emerge and the digital landscape changes. Companies should employ tools for scanning and monitoring their software for vulnerabilities and ensure that all components are up to date and patched against known vulnerabilities. Lastly, regular security audits are vital for ensuring compliance with industry standards and regulatory requirements. These audits help uncover hidden vulnerabilities, assess the effectiveness of current security measures, and ensure that all aspects of the supply chain conform to the highest security standards. Audits should be performed not just internally but also extended to third-party vendors, ensuring that they, too, maintain the required security posture. Compliance with standards like ISO 27001, NIST, and others specific to the industry or region can provide a structured approach to managing and securing information assets throughout the supply chain. By embedding these best practices into their operational ethos, organizations can significantly enhance the security of their software supply chains and protect themselves against the potentially devastating consequences of a breach. Supply Chain Security Checklist A comprehensive supply chain security checklist is an indispensable tool for organizations to protect their software supply chains from potential threats and breaches. Here is a detailed checklist focusing on the crucial areas:
1. Vendor Management
- Conduct thorough security assessments of all vendors.
- Ensure vendors comply with your organization's security requirements.
- Establish clear security expectations and responsibilities in vendor contracts.
- Regularly review and update vendor security measures and policies.

2. Secure Development
- Implement a Secure Software Development Lifecycle (SSDLC) with security checkpoints at each phase.
- Employ static and dynamic code analysis tools to detect vulnerabilities.
- Regularly update development tools and environments to address known security issues.
- Train developers in secure coding practices and keep them informed about the latest security threats.

3. Continuous Monitoring
- Deploy monitoring tools to track the integrity of software throughout the development and deployment phases.
- Utilize threat intelligence services to stay aware of emerging threats.
- Implement automated tools for vulnerability scanning and configuration management.
- Regularly review access logs and patterns to detect any unauthorized activity.

4. Incident Response
- Develop and regularly update an incident response plan tailored to supply chain specifics.
- Establish a dedicated incident response team with clear roles and responsibilities.
- Conduct regular incident response drills and simulations to ensure preparedness.
- Maintain clear communication channels with all supply chain stakeholders for prompt notification in case of a security incident.

5. Compliance and Reporting
- Identify and understand all applicable industry standards and regulations relevant to your supply chain.
- Ensure regular compliance audits are carried out and documented.
- Implement policies and procedures for reporting breaches and non-compliance issues.
- Keep all compliance documentation up to date and easily accessible for review and auditing purposes.

This checklist serves as a guideline for organizations to create a secure framework around their software supply chains. It is important to note that while this checklist provides a strong foundation, it should be adapted to fit the specific context and needs of each organization. Regular updates and improvements to the checklist should be made in response to evolving threats and changing industry practices to maintain a robust supply chain security posture.

Ensuring Software Supply Chain Security

As we navigate through the intricate networks of modern software development and distribution, the security of the supply chain emerges as a non-negotiable facet of organizational resilience. The threats are real and evolving, and the potential impacts are too severe to overlook — from financial repercussions to irreparable damage to trust and reputation. The safeguarding of the software supply chain is not merely a technical duty; it is a strategic imperative that underpins business continuity, innovation, and growth. The practices and checklists outlined in this article are more than just a defensive playbook; they are a proactive blueprint for building a robust and trustworthy software ecosystem. By prioritizing supply chain security, organizations can not only thwart potential attacks but also cultivate a culture of security mindfulness that permeates every level of the supply chain. The call to action is clear: implement these practices, adhere to the checklist, and continuously refine your security strategies. The path to a secure software supply chain requires vigilance, collaboration, and an unwavering commitment to excellence.
Let this be the cornerstone upon which secure digital futures are built.
In the ever-evolving landscape of cloud-native computing, containers have emerged as the linchpin, enabling organizations to build, deploy, and scale applications with unprecedented agility. However, as the adoption of containers accelerates, so does the imperative for robust container security strategies. The interconnected realms of containers and the cloud have given rise to innovative security patterns designed to address the unique challenges posed by dynamic, distributed environments. This article explores the latest patterns, anti-patterns, and practices steering the course in an era of cloud-native architecture, including the orchestration intricacies of Kubernetes across Amazon Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE), as well as the nuances of securing microservices. Related: Amazon ETL Tools Compared. What Is Container Security? Container security is the practice of ensuring that container environments are protected against any threats. As with any security implementation within the software development lifecycle (SDLC), the practice of securing containers is a crucial step, as it not only protects against malicious actors but also allows containers to run smoothly in production. Learn how to incorporate CI/CD pipelines into your SDLC. The process of securing containers is continuous and can be implemented at the infrastructure level, at runtime, and across the software supply chain, to name a few. As such, securing containers is not a one-size-fits-all approach. In the following sections, we will discuss different container management strategies and how security comes into play. Review additional CI/CD design patterns. How to Build a Container Strategy With Security Forensics Embedded A container management strategy involves a structured plan to oversee the creation, deployment, orchestration, maintenance, and discarding of containers and containerized applications. It encompasses key elements to ensure efficiency, security, and scalability throughout a software development lifecycle built around containerization. Let's first analyze the prevailing and emerging anti-patterns for container management and security. Then, we will correlate possible solutions or alternative recommendations for each anti-pattern, along with optimization practices for fortifying container security strategies against today's and tomorrow's threats. Review more DevOps anti-pattern examples. "Don't treat container security like a choose-your-own-adventure book; following every path might lead to a comedy of errors, not a happy ending!" Container Security Best Practices Weak Container Supply Chain Management This anti-pattern overlooks the container supply chain that is visible in an image's Docker history, risking compromised security. Hastily using unofficial Docker images without vetting their origin or build process poses a significant threat. Ensuring robust container supply chain management is vital for upholding integrity and security within the container environment. Learn how to perform a Docker container health check. Anti-Pattern: Potential Compromise Pushing malicious code into Docker images is straightforward, but detecting such code is challenging. Blindly using others' images, or building new ones from them, can put security at risk, even if they solve similar problems. Pattern: Secure Practices Instead of relying solely on others' images, inspect their Dockerfiles, emulate their approach, and customize them for your needs.
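For instance, a quick way to review how an unfamiliar image was assembled before trusting it (the image name below is a placeholder) is to walk its layer history and record its immutable digest for later pinning:

# Pull the image under evaluation and review how each layer was created.
docker pull somevendor/app:1.0
docker history --no-trunc somevendor/app:1.0

# Record the immutable digest so later FROM lines can pin exactly this build.
docker inspect --format '{{index .RepoDigests 0}}' somevendor/app:1.0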
Ensure FROM lines in the Dockerfile point to trusted images, preferably official ones or images you have crafted from scratch; despite the added effort, the security gained outweighs the aftermath of a potential breach. Installing Non-Essential Executables Into a Container Image Non-essential executables in a container image are anything unnecessary for the container's core function or the app's interpreter. For production, omit tools like text editors. Java or Python apps may need specific executables, while Go apps can run directly from a minimal "scratch" base image. Anti-Pattern: Excessive Size Adding non-essential executables to a container amplifies vulnerability risks and enlarges the image. This surplus bulk slows pull times and increases network data transmission. Pattern: Trim the Fat Start with a minimal official or self-generated base image to curb potential threats. Assess your app's true executable necessities and avoid unnecessary installations. Exercise caution while removing language-dependent executables to craft a lean, cost-effective container image. Cloning an Entire Git Repo Into a Container Image It could look something like this:

RUN git clone https://github.org/somerepo

Anti-Pattern: Unnecessary Complexity External dependency: Relying on non-local sources for Docker image files introduces risk, as these files may not be vetted beforehand. Git clutter: A git clone brings surplus files like the .git/ directory, increasing image size. The .git/ folder may contain sensitive information, and removing it is error-prone. Network dependency: Depending on container engine networking to fetch remote files adds complexity, especially with corporate proxies, potentially causing build errors. Executable overhead: Including the Git executable in the image is unnecessary unless you are directly manipulating Git repositories. Pattern: Streamlined Assembly Instead of a direct git clone in the Dockerfile, clone to a sub-directory of the build context via a shell script (see the sketch below). Then selectively add the needed files using the COPY directive, minimizing unnecessary components, and use a .dockerignore file to exclude undesired files from the Docker image. Exception: Multi-Stage Build For a multi-stage build, consider cloning the repository to a local folder and then copying it into the build-stage container. While git clone might be acceptable here, this approach offers a more controlled and error-resistant alternative. Building a Docker Container Image "On the Fly" Anti-Pattern: Skipping Registry Deployment Performing cloning, building, and running a Docker image without pushing it to an intermediary registry is an anti-pattern. It skips security screenings, lacks a backup, and introduces untested images into deployment. The main reason is that it creates security and testing gaps: Backup and rollback: Skipping the registry upload denies the benefits of having a backup, which is crucial for quick rollbacks in case of deployment failures. Vulnerability scanning: Neglecting registry uploads means missing out on vulnerability scanning, a key element in ensuring data and user safety. Untested images: Deploying unpushed images means deploying untested ones, a risky practice, particularly in a production environment. DZone previously covered how to use penetration tests within an organization. Pattern: Registry Best Practices Build and uniquely version images in a dedicated environment, pushing them to a container registry. Let the registry scan for vulnerabilities and ensure thorough testing before deployment.
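A minimal sketch of that flow, combining the clone-then-COPY approach with a build-tag-push step (the repository URL from the example above, registry, and tag are placeholders, and a Dockerfile that COPYs the src/ directory is assumed):

#!/usr/bin/env bash
set -euo pipefail

# Clone the sources into a sub-directory of the build context instead of
# running git clone inside the Dockerfile.
git clone --depth 1 https://github.org/somerepo src

# Keep the .git directory and local secret files out of the image.
printf 'src/.git\n.env\n' > .dockerignore

# Build with a unique version tag and push to the registry so it can be
# scanned and tested before deployment.
docker build -t registry.example.com/team/app:1.4.2 .
docker push registry.example.com/team/app:1.4.2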
Utilize deployment automation for seamless image retrieval and execution. Running as Root in the Container Anti-Pattern: Defaulting to Root User Many new container users inadvertently run containers with root as the default user, since container engines default to root during image creation. This can lead to the following security risks: Root user vulnerabilities: Running a Linux-based container as root exposes the system to potential takeovers and breaches, giving bad actors access inside the network and potentially to the container host system. Container breakout risk: A compromised container could lead to a breakout, granting unauthorized root access to the container host system. Pattern: User Privilege Management Instead of defaulting to root, use the USER directive in the Dockerfile to specify a non-root user. Before doing so, ensure the user is created in the image and has adequate permissions for the required commands, including running the application. This practice reduces the security vulnerabilities associated with root privileges. Running Multiple Services in One Container Anti-Pattern: Co-Locating Multiple Tiers This anti-pattern involves running multiple tiers of an application, such as APIs and databases, within the same container, contradicting the minimalist essence of container design. The complexity and deviation from the design cause the following challenges: Minimalism violation: Containers are meant to be minimalistic instances, focusing on the essentials for running a specific application tier. Co-locating services in a single container introduces unnecessary complexity. Exit code management: Containers are designed to exit when the primary executable ends, relaying the exit code to the launching shell. Running multiple services in one container requires manual management of unexpected exceptions and errors, deviating from what the container engine handles for you. Pattern: Service Isolation Adopt the principle of one container per task, ensuring each container hosts a single service. Establish a local virtualized container network (e.g., docker network create) for inter-container communication, enabling seamless interaction without compromising the minimalist design of individual containers. Embedding Secrets in an Image Anti-Pattern: Storing Secrets in Container Images This anti-pattern involves storing sensitive information, such as local development secrets, within container images, often in overlooked places like ENV directives in Dockerfiles. This causes the following security compromises: Easy to forget: Container images offer numerous hiding spots for information, like ENV directives, which makes inadvertent leaks easy to overlook. Accidental copy of secrets: Inadequate precautions might result in copying local files containing secrets, such as .env files, into the container image. Pattern: Secure Retrieval at Runtime Dockerignore best practices: Maintain a .dockerignore file covering local files that house development secrets to prevent their inadvertent inclusion in the container image; these files should also be listed in .gitignore. Dockerfile security practices: Avoid placing secrets in Dockerfiles. For secure handling during build or testing phases, explore secure alternatives to passing secrets via --build-arg, leveraging Docker's BuildKit for enhanced security.
Runtime secret retrieval: Retrieve secrets at runtime from secure stores such as HashiCorp Vault, cloud-based services (e.g., AWS KMS), or Docker's built-in secrets functionality, which requires a Docker Swarm setup. Failing to Update Packages When Building Images Anti-Pattern: Static Base Image Packages This anti-pattern stems from a former best practice in which container image providers discouraged updating packages within base images. The current best practice, however, is to update installed packages every time a new image is built. The main issue is that base-image packages lag behind: base images may not always contain the latest versions of installed packages due to periodic or scheduled image builds, leaving systems exposed to outdated packages and their security vulnerabilities. Pattern: Continuous Package Updates To address this, regularly update installed packages using the distribution's package manager within the Dockerfile. Incorporate this step early in the build, ideally within the initial RUN directive, so that each new image build includes updated packages for better security and stability. When striving to devise a foolproof solution, a frequent misstep is to undervalue the resourcefulness of total novices. Building Container Security Into Development Pipelines Creates a Dynamic Landscape In navigating the ever-evolving realm of containers, whose popularity is at an all-time high and whose security threats are growing in proportion, we have delved into a spectrum of crucial patterns and anti-patterns. From fortifying container images by mastering the intricacies of supply chain management to embracing the necessity of runtime secrets retrieval, each pattern serves as a cornerstone in the architecture of robust container security. In unraveling the complexities of co-locating services and avoiding the pitfalls of outdated packages, we have highlighted the significance of adaptability and continuous improvement. As we champion the ethos of one container per task and the secure retrieval of secrets, we acknowledge that container security is not a static destination but an ongoing journey. By comprehending and implementing these patterns, we fortify our containers against potential breaches, ensuring a resilient and proactive defense in an ever-shifting digital landscape.
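To tie several of the patterns above together, here is a minimal sketch of a build that starts from a pinned enterprise base image, updates packages early, drops root privileges, and supplies a secret only at build time through a BuildKit secret mount. The base image, user IDs, secret name, and registry are placeholders and assumptions rather than a prescribed implementation; the build context is assumed to contain src/, requirements.txt, and pip_token.txt:

#!/usr/bin/env bash
set -euo pipefail

# Write a small, hardened Dockerfile (adjust the base image and package
# manager for your own distribution).
cat > Dockerfile <<'EOF'
# syntax=docker/dockerfile:1
FROM registry.example.com/enterprise/python-311:1.0.1

# Update installed packages early so every rebuild picks up the latest fixes.
USER 0
RUN dnf -y update && dnf clean all

WORKDIR /opt/app
COPY requirements.txt .

# The token is mounted only for this RUN step and never stored in a layer.
RUN --mount=type=secret,id=pip_token \
    PIP_INDEX_URL="https://__token__:$(cat /run/secrets/pip_token)@pypi.example.com/simple" \
    pip install --no-cache-dir -r requirements.txt

# Copy the application and run it as a non-root user.
COPY --chown=1001:0 src/ .
USER 1001
CMD ["python", "main.py"]
EOF

# Build with BuildKit so the secret mount is honored, tag uniquely, and push.
DOCKER_BUILDKIT=1 docker build \
  --secret id=pip_token,src=./pip_token.txt \
  -t registry.example.com/team/app:1.4.3 .
docker push registry.example.com/team/app:1.4.3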
The use of Cloud Development Environments (CDEs) allows coding environments to be moved online. Solutions range from self-hosted platforms to hosted services. In particular, using CDEs with built-in data security, i.e., secure Cloud Development Environments, provides the dual benefits of productivity and security. The examples given in this article are based on the CDE platform proposed by Strong Network. The implementation of CDE platforms is still in its infancy, and there is not yet a clear consensus on standard functionality. The approach taken by Strong Network is to have a dual focus, i.e., to leverage CDEs from both a productivity and a security standpoint. This is in contrast to using CDEs primarily as a source of efficiency. Embedding security in CDEs allows them to be deployed in enterprise settings where data and infrastructure security is required. Furthermore, it is possible to deliver security mechanisms via CDEs in a way that improves productivity instead of setting additional hurdles for developers, because these mechanisms aim to automate many of the manual security processes that fall on developers in classic environments, such as knowing about and handling credentials. The review of benefits in this article spans three axes of interest for organizations with structured processes. They also align with the main reasons for enterprise adoption of CDEs, as suggested in Gartner's latest DevOps and Agile report: the benefits of centralized management, improved governance, and opportunities for data security. We revisit these themes in detail below. The positioning of Cloud Development Environments in Gartner's Hype Cycle, in comparison with generative AI, is noteworthy. The emergence of this technology provides significant opportunities for CDE platform vendors to deliver innovative functionality. Streamline the Management of Cloud Development Environments Let's first consider the classic situation where each developer is responsible for installing and managing their development environment on their own device. This is a manual, often time-consuming, and local operation. In addition, jumping from one project to another requires duplicating the effort, in addition to potentially having to deal with interference between the projects' specific resources. Centralized Provisioning and Configuration This chore can be streamlined with a CDE managed online. Using an online service, the developer can select a development stack from a catalog and ask for a new environment to be built on demand, in seconds. When accessing the platform, the developer can work with any number of such environments and immediately start developing in any of them. This functionality is possible thanks to the definition of infrastructure as code and lightweight virtualization, both implemented with container technology. The centralized management of Cloud Development Environments allows for remote accessibility and funnels all resource access through a single entry point. Development Resources and Collaboration The environment definition is only one of the needs when starting a new project. The CDE platform can also streamline access to resources, from code repositories to APIs, down to the secrets necessary to authenticate to cloud services. Because coding environments are managed online using a CDE platform, new collaboration paradigms between developers become possible.
For example, as opposed to occasional collaboration patterns, such as providing feedback on submitted code via a code repository application (i.e., via a pull request), more interactive patterns become available thanks to the immediacy of an online platform. With peer coding, two developers can type in the same environment, for example to improve code together during a video-conference discussion. Some of the popular interactive patterns explored by vendors are peer coding and the sharing of running applications for review. Peer coding is the ability for multiple developers to work on the same code at the same time. If you have used an online text editor such as Google Docs and shared it with another user for co-editing, peer coding is the same approach applied to code development: it allows a user to edit someone else's code in their environment. When an application runs inside a CDE-based coding environment, it can be shared with any user immediately. In a classic setting, this would require pre-emptively deploying the application to another server or sharing the local device's IP address, when that is possible at all. With CDEs, this process can be automated.

Cloud-Delivered Enterprise Security Using Secure CDEs
CDEs are delivered through a platform that is typically either self-hosted by the organization in a private cloud or hosted by an online provider. In both cases, the functionality these environments deliver is available to the local devices used to access the service without any installation. This delivery method is sometimes referred to as cloud delivery. So far, we have mostly mentioned functionality tied to productivity, such as the management of environments, access to resources, and collaborative features. In the same manner, security features can also be cloud-delivered, yielding the additional benefit of realizing secure development practices with CDEs. From an economic perspective, this becomes a key benefit at the enterprise level because many of the security features currently managed with locally installed endpoint security software can be reimagined. In our opinion, a great deal of innovation can flourish by rethinking security with CDEs, which is why the Strong Network platform delivers data security as a core part of its functionality. Using secure Cloud Development Environments, the data accessed by developers can be protected with different mechanisms enabled based on context, for example the developer's status in the organization.

Why Development Data Requires Security
Most, if not all, companies today deliver part of their shareholder value through the development of code, the generation and processing of data, and the creation of intellectual property, likely by leveraging both of these. Hence, protecting the data that feeds the development workforce is paramount to running operations aligned with the shareholders' strategy. Unfortunately, the diversity and infrastructure complexity of development processes often make data protection an afterthought. Even when it is anticipated, it is often a partial initiative based on opportunity-cost considerations. In industries such as banking and insurance, where regulations forbid any shortcuts, the fallback is remote desktops and other heavy, productivity-impacting technologies, applied as sparingly as possible.
When the specter of regulation is not a primary concern, companies that take these shortcuts may end up paying the price of a bad headline that collides with stakeholder interests. In 2023, the security-minded company Okta leaked source code, as did many others such as CircleCI and Slack.

The Types of Security Mechanisms
Using CDEs to deliver security via the cloud is efficient because, as mentioned previously, no installation is required, but also because:
Mechanisms are independent of the device's operating system;
They can be updated and monitored remotely;
They are independent of the user's location;
They can be applied adaptively, for example based on the specific role and context of the user.
As for the types of security mechanisms that can be delivered, these are the typical ones:
Provide centralized access to all the organization's resources so that access can be monitored continuously. Centralized access enables the organization to take control of all the credentials for these resources, so that users never have direct access to them.
Implement data loss prevention measures in the applications used by developers, such as the IDE (i.e., code editor) and code repository applications.
Enable real-time observability of the entire workforce through log inspection in a SIEM application.

Realize Secure Software Development Best Practices With Secure CDEs
We explained that using secure cloud development environments benefits both the productivity and the security of the development process. From a productivity standpoint, there is a lot to gain from the centralized management that a secure CDE platform provides. From a security perspective, delivering security mechanisms via the cloud brings benefits that transcend the hardware developers use to participate in the development process. In other words, virtualizing the delivery of development environments makes a series of maintenance and security operations that would otherwise be performed locally far more efficient. It brings security to software development and allows organizations to implement secure software development best practices. It also provides an opportunity to template process workflows, making both productivity and security more systematic while reducing the cost of managing a development workforce.
In the dynamic landscape of cloud computing, ensuring the security of your applications is paramount. This is particularly true when dealing with a Red Hat OpenShift Kubernetes Service (ROKS) cluster, where applications may be exposed to the public internet. In this article, we will explore how to enhance the security of your applications by routing traffic through edge nodes in the ROKS cluster. Additionally, we will integrate the Istio Egress Gateway to manage outbound traffic for even greater control and security. For detailed Istio Egress Gateway configurations and use cases, you can refer to the following Git repository.

Understanding the Challenge
By default, applications deployed in a ROKS cluster may be accessible from the public internet. To strengthen security, we aim to direct all external traffic through edge nodes, effectively bypassing the default worker nodes. This ensures that the applications are shielded from direct exposure to potential threats on the public internet.

Istio Egress Gateway Integration

Step 1: Identifying Edge Nodes
Accessing the ROKS cluster dashboard: Start by accessing the ROKS cluster dashboard through the Cloud console.
Identifying edge nodes: Navigate to the cluster details and identify the edge nodes. Edge nodes typically act as the entry point for external traffic, providing an additional layer of security.

Step 2: Removing the Public Gateway for Default Nodes
Accessing node configuration: In the ROKS cluster dashboard, locate the configuration settings for the default worker nodes.
Removing the public gateway: Modify the network configuration of the default nodes to remove the public gateway. This step ensures that external traffic will no longer be directed to the default nodes.

Step 3: Configuring Routing via Edge Nodes and the Istio Egress Gateway
In an OpenShift cluster, the Istio Egress Gateway plays a crucial role in managing outbound traffic from services within the cluster to external services. It focuses on controlling and routing traffic leaving the cluster. In this example, we configure the Egress Gateway to allow outbound connections to “ibm.com” from our application pods. If you want to allow a different set of domains for egress connections, replace “ibm.com” with the required domain names in all the resources below. Copy each resource below into its own file and apply it using “oc apply -f <file-name>.”

Gateway: Configured to capture outgoing traffic and direct it to the Egress Gateway for processing.

YAML
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: istio-egressgateway
spec:
  selector:
    istio: egressgateway
  servers:
  - hosts:
    - ibm.com
    port:
      name: tls
      number: 443
      protocol: TLS
    tls:
      mode: PASSTHROUGH

ServiceEntry: Used to define external services that microservices within the cluster need to communicate with.

YAML
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: serviceentry
spec:
  hosts:
  - ibm.com
  location: MESH_EXTERNAL
  ports:
  - name: tls
    number: 443
    protocol: TLS
  resolution: DNS

VirtualService: Configured to define routing rules for outgoing traffic, specifying how requests to external services should be processed.

YAML
# The hostname "istio-egressgateway.istio-system.svc.cluster.local" below assumes
# your gateway pods are running in the istio-system namespace.
# Otherwise, replace "istio-system" based on your setup.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: ibm-vs
spec:
  exportTo:
  - .
  - istio-system
  gateways:
  - mesh
  - istio-egressgateway
  hosts:
  - ibm.com
  tls:
  - match:
    - gateways:
      - mesh
      port: 443
      sniHosts:
      - ibm.com
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        port:
          number: 443
  - match:
    - gateways:
      - istio-egressgateway
      port: 443
      sniHosts:
      - ibm.com
    route:
    - destination:
        host: ibm.com
        port:
          number: 443

DestinationRule: Defines policies for outgoing traffic to external services, influencing how the Egress Gateway communicates with those services.

YAML
# The hostname "istio-egressgateway.istio-system.svc.cluster.local" below assumes
# your gateway pods are running in the istio-system namespace.
# Otherwise, replace "istio-system" based on your setup.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: dr-egressgateway
spec:
  host: istio-egressgateway.istio-system.svc.cluster.local

Step 4: Implementing Network Policies
Define network policies that enforce the desired routing rules. This can include specifying that traffic must pass through the edge nodes before reaching the application pods (a minimal NetworkPolicy sketch is provided after the conclusion).

Conclusion
Securing applications in a ROKS cluster from the public internet requires a thoughtful approach to routing traffic. By removing the public gateway for the default nodes, configuring routing through edge nodes, and integrating the Istio Egress Gateway, you can significantly enhance the security posture of your applications. Always remember to test these changes in a controlled environment to ensure the uninterrupted functionality of your applications while safeguarding them against potential security threats.
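As a complement to the network policy step above, here is a minimal sketch of a Kubernetes NetworkPolicy that restricts ingress to the application pods so that only the Istio ingress gateway can reach them. The namespace, pod labels, port, and policy name are illustrative assumptions and should be adapted to your cluster.

YAML
# Minimal sketch (assumptions: application pods labeled app=my-app in namespace "my-apps",
# Istio ingress gateway pods labeled istio=ingressgateway in namespace "istio-system",
# and an application port of 8080).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-gateway-only
  namespace: my-apps
spec:
  podSelector:
    matchLabels:
      app: my-app
  policyTypes:
  - Ingress
  ingress:
  - from:
    # Both selectors in the same "from" entry: traffic must come from
    # ingress gateway pods in the istio-system namespace.
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: istio-system
      podSelector:
        matchLabels:
          istio: ingressgateway
    ports:
    - protocol: TCP
      port: 8080

With such a policy in place, and assuming the ingress gateway pods are scheduled on the edge nodes and the cluster's CNI plugin enforces NetworkPolicy, the application pods only accept traffic that has passed through the gateway; all other pod-to-pod ingress is denied.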