Christian Posta

VP, Global Field CTO at Solo.io

Phoenix, US

Joined Jul 2009

http://blog.christianposta.com

About

Christian Posta (@christianposta) is Field CTO at solo.io and is well known in the community as an author (Istio in Action, Manning; Microservices for Java Developers, O'Reilly, 2016), frequent blogger, speaker, open-source enthusiast, and committer on various open-source projects, including Istio and Kubernetes. Christian has spent time at web-scale companies and now helps companies create and deploy large-scale, resilient, distributed architectures, much of what we now call serverless and microservices. He enjoys mentoring, training, and leading teams to be successful with distributed systems concepts, microservices, DevOps, and cloud-native application design.

Stats

Reputation: 1994
Pageviews: 1.2M
Articles: 13
Comments: 6

Articles

Full Lifecycle API Management Is Dead: Build APIs Following Your Software Development Lifecycle With an Internal Developer Platform
Choose best-of-breed tools for API development and policy management to build a powerful software development platform that improves developer productivity.
March 24, 2023
· 9,068 Views · 3 Likes
Microservices Orchestration
Learn how to get the most out of your services in this article as you take a look at patterns and strategies for microservices communication.
August 24, 2022
· 12,918 Views · 11 Likes
Guidance for Building a Control Plane for Envoy, Part 4: Build for Extensibility
This is part 4 of a series on building a control plane for Envoy Proxy. In this post, we explore building the control plane for extensibility.
June 6, 2019
· 5,791 Views · 2 Likes
Guidance for Building a Control Plane for Envoy, Part 1
To kick off the series, let's look at using Envoy's dynamic configuration APIs to update Envoy at runtime to deal with changes in topology and deployments.
February 20, 2019
· 8,936 Views · 4 Likes
FaaS vs. Microservices
There's no one-size-fits-all strategy for adopting microservices. You need a pragmatic lens through which to judge and apply microservices + FaaS to your technology stack.
December 17, 2018
· 23,966 Views · 13 Likes
Traffic Shadowing With Istio: Reducing the Risk of Code Release
Istio can control the routing of traffic between services, making it valuable for reducing the risk of code releases in microservices applications.
March 9, 2018
· 14,054 Views · 1 Like
A Quick Guide to Golang for Java Developers
Go is awesome and you should learn it. Here's how.
November 16, 2015
· 29,349 Views · 23 Likes
Blue-green Deployments, A/B Testing, and Canary Releases
Methods like blue-green deployments and canary releases, along with A/B testing, have been staples of DevOps. This article clarifies the differences among them.
August 5, 2015
· 20,414 Views · 5 Likes
Lessons Learned: ActiveMQ, Apache Camel and Connection Pooling
Every once in a while, I run into an interesting problem related to connections and pooling with ActiveMQ, and today I'd like to discuss something that is not always very clear and could potentially cause you to drink heavily when using ActiveMQ and Camel JMS. Not to say that you won't want to drink heavily when using ActiveMQ and Camel anyway... in celebration of how delightful integration and messaging become when using them, of course.

So, first up: connection pooling. Sure, you've always heard to pool your connections. What does that really mean, and why do you want to do it? Opening a connection to an ActiveMQ broker is a relatively expensive operation when compared to other actions, like creating a session or a consumer. So when sending or receiving messages and generally interacting with the broker, you'd like to reuse existing connections if possible. What you don't want to do is rely on a JMS library (like Spring's JmsTemplate, for example) that opens and closes connections for each send or receive of a message... unless you can pool/cache your connections.

So, if we can agree that pooling connections is a good idea, the first step is to configure a pooled connection factory (a sketch of one appears at the end of this post). You may even want to use Apache Camel and its wonderful camel-jms component, because doing otherwise would just be silly. So maybe you set up a JMS config like the one also sketched at the end of this post. That config basically means: for consumers, set up 15 concurrent consumers; use (local) transactions; use PERSISTENT messages for producers; set a timeout of 10000 ms for request-reply; etc.

Huge note: if you want a more thorough taste of the configs for the jms component, especially around caching consumers, transactions, and more, please take a look at Torsten's excellent blog on Camel JMS with transactions - lesson learned. Maybe you should also spend some time poking around his blog, as he's got lots of good Camel/ActiveMQ stuff too.

Awesome so far. We have a connection pool of 10 connections, we expect 10 sessions per connection (for a total of 100 sessions, if we need that many...), and 15 concurrent consumers. We should be able to deal with some serious load, right? Take a look at this route. It's simple enough: it consumes with the activemq component (which uses the jmsConfig from above, so 15 concurrent consumers) and just does some logging:

    from("activemq:test.queue")
        .routeId("test.queue.routeId")
        .to("log:org.apache.camel.blog?groupSize=100");

Try to run this. You will find your consumers blocked up right away, and stack traces will show this beauty:

    "Camel (camel-1) thread #1 - JmsConsumer[test.queue]" daemon prio=5 tid=7f81eb4bc000 nid=0x10abbb000 in Object.wait() [10abba000]
       java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        - waiting on <7f40e9070> (a org.apache.commons.pool.impl.GenericKeyedObjectPool$Latch)
        at java.lang.Object.wait(Object.java:485)
        at org.apache.commons.pool.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:1151)
        - locked <7f40e9070> (a org.apache.commons.pool.impl.GenericKeyedObjectPool$Latch)
        at org.apache.activemq.pool.ConnectionPool.createSession(ConnectionPool.java:146)
        at org.apache.activemq.pool.PooledConnection.createSession(PooledConnection.java:173)
        at org.springframework.jms.support.JmsAccessor.createSession(JmsAccessor.java:196)
        ....

How can that possibly be? We have connection pooling... we have sessions per connection set to 10, so how are we all blocked up on creating new sessions?
The answer is that you're exhausting the number of sessions, as the stack trace suggests. But how? And how much do I need to drink to resolve this? Well, hold on now. Grab a beer and hear me out.

First, understand this: ActiveMQ's pooling implementation uses commons-pool, and the maxActiveSessionsPerConnection attribute is actually mapped to the maxActive property of the underlying pool. From the commons-pool docs:

    maxActive controls the maximum number of objects (per key) that can be allocated by the pool (checked out to client threads, or idle in the pool) at one time.

The key here is "key" (literally... the "per key" clause of the documentation). In the ActiveMQ implementation, the key is an object that represents 1) whether the session mode is transacted and 2) what the acknowledgement mode is. So, in plain terms, you'll end up with maxActive sessions for each key that's used on that connection. If you have clients that use transactions, no transactions, client-ack, auto-ack, transacted-session, dups-okay, etc., you can start to see that you'd end up with maxActive sessions for each permutation. So if you have maxActiveSessionsPerConnection set to 10, you could really end up with 10 x 2 x 4 == 80 sessions. This is something to tuck away in the back of your mind.

The second key here is that when the camel-jms component sets up consumers, it ends up sharing a single connection among all the consumers specified by the concurrentConsumers setting. This is an interesting point, because camel-jms uses the underlying Spring framework's DefaultMessageListenerContainer, and unfortunately this restriction comes from that library. So if you have 15 concurrent consumers, they will all share a single connection (even when pooling... the container grabs one connection from the pool and holds it). If those 15 consumers each share a connection, a transacted mode, and an ack mode, then you end up trying to create 15 sessions on that one connection. And you end up with the stack trace above.

So, my rules of thumb for avoiding these scenarios:

  • Understand exactly what each of your producers and consumers is doing, and what their TX and ACK modes are.
  • Tune the max-sessions param when you NEED to (too many session threads? I dunno...), but always use at least concurrentConsumers + 1 as the value.
  • If producers and consumers are producing/consuming on the same destination, SPLIT UP THE CONNECTION POOL: one pool for consumers, one pool for producers.

I don't know how valuable this info will be, but I wanted to jot it down for myself. If someone else finds it valuable, or has questions, let me know in the comments.
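For reference, here is a minimal Java sketch of the pooled connection factory and Camel JMS configuration this post describes. The class names are real ActiveMQ/Camel APIs, but the broker URL and the exact values are assumptions taken from the prose, not the original snippets:

    import org.apache.activemq.ActiveMQConnectionFactory;
    import org.apache.activemq.camel.component.ActiveMQComponent;
    import org.apache.activemq.pool.PooledConnectionFactory;
    import org.apache.camel.component.jms.JmsConfiguration;

    public class PooledJmsConfig {

        public static ActiveMQComponent activeMqComponent() {
            // Pool of 10 connections, each allowing up to 10 sessions
            // (per transacted/ack-mode key, as discussed above)
            ActiveMQConnectionFactory cf =
                    new ActiveMQConnectionFactory("tcp://localhost:61616"); // assumed URL
            PooledConnectionFactory pooled = new PooledConnectionFactory();
            pooled.setConnectionFactory(cf);
            pooled.setMaxConnections(10);
            pooled.setMaximumActiveSessionPerConnection(10);

            // Camel JMS config: 15 concurrent consumers, local transactions,
            // persistent messages, 10000 ms request-reply timeout
            JmsConfiguration jmsConfig = new JmsConfiguration(pooled);
            jmsConfig.setConcurrentConsumers(15);
            jmsConfig.setTransacted(true);
            jmsConfig.setDeliveryPersistent(true);
            jmsConfig.setRequestTimeout(10000);

            ActiveMQComponent activemq = new ActiveMQComponent();
            activemq.setConfiguration(jmsConfig);
            return activemq;
        }
    }

Following the rules of thumb above, if your producers and consumers share a destination, you would build two such pooled factories (one for the consumer side, one for the producer side) rather than sharing one.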
March 4, 2014
· 24,456 Views · 1 Like
JMS-style selectors on Amazon SQS with Apache Camel
This blog post demonstrates how easy it is to use Apache Camel and its new json-path component, along with the camel-sqs component, to produce and consume messages on Amazon SQS.

Amazon Web Services SQS is a message queuing "software as a service" (SaaS) in the cloud. To be able to use it, you need to sign up for AWS. Its primary access mechanism is XML over HTTP through the various AWS SDK clients provided by Amazon. Please check out the SQS documentation for more. And as "luck" would have it, one of the users in the Apache Camel community created a component to integrate with SQS. This makes it trivial to add a producer or consumer to an SQS queue, and it plugs in nicely with the Camel DSL.

SQS, however, is not a "one-size-fits-all" queueing service; you must be aware of your use case and make sure it fits (current requirements, as well as somewhat into the future...). There are limitations that, if not studied and accounted for ahead of time, could come back to sink your project. A viable alternative, and one that more closely fits the profile of a high-performance, full-featured message queue, is Apache ActiveMQ. For example, one limitation to keep in mind is that, unlike traditional JMS consumers, you cannot create a subscription to a queue that filters messages based on some predicate (at least not using the AWS SQS API; you'd have to build that into your solution).

Some other things to keep in mind when using SQS:

  • The queue does not preserve FIFO messaging. Message order is not preserved; messages can arrive out of order from when they were sent. Apache Camel can help with its resequencer pattern. Bilgin Ibryam, now a colleague of mine at Red Hat, has written a great blog post about how to restore message order using the resequencer pattern.
  • Message size is limited to 256K. This is probably sufficient, but if your message sizes are variable, or contain more data than 256K, you will have to chunk them and send the smaller chunks.
  • No selector or selective consumption. If you're familiar with JMS, you know that a consumer can specify a "selector," a predicate expression (e.g., type = 'login') that is evaluated on the broker side to determine whether a specific message should be dispatched to a specific consumer.
  • Durability constraints. Some use cases call for the message broker to store messages until consumers return. SQS allows a limit of up to 14 days. This is most likely sufficient, but something to keep in mind.
  • Binary payloads are not allowed. SQS only allows text-based messages, e.g., XML, JSON, or fixed-format text. Binary formats such as Avro, Protocol Buffers, or Thrift are not allowed.

For some of these limitations, you can work around them by building out the functionality yourself. I would always recommend taking a look at how an integration library like Apache Camel can help; it has out-of-the-box support for doing some of these things.

Doing JMS-style selectors

So the basic problem is that we want to subscribe to an SQS queue, but we want to filter which messages we process. The messages that we do not process should be left in the queue. To do this, we will make use of Apache Camel's Filter EIP as well as the visibility timeouts available on the SQS queue. By default, SQS will dispatch all messages in its queue when it's queried.
We cannot change this, and thus cannot avoid the message being dispatched to us; we'll have to do the filtering on our side. (This is different from how a full-featured broker like ActiveMQ does it: there, filtering is done on the broker side, so the consumer doesn't even see the messages it does not want to see.) Once SQS dispatches a message, it does not remove it from the queue unless the consumer has acknowledged that it has it and is finished with it. The consumer does this by sending a DeleteMessage command. Until the DeleteMessage command is sent, the message remains in the queue; however, visibility comes into play here. When a message is dispatched to a consumer, there is a period of time during which it will not be visible to other consumers. So if you browsed the queue, you would not see it (it should appear in the stats as "in-flight"). However, there is a configurable period of time you can specify for how long this "visibility timeout" should be active. So if you set the visibility to a lower time period (the default is 30 seconds), you can more quickly get messages re-dispatched to consumers that would be able to handle them.

Take a look at the following Camel route, which does just that:

    @Override
    public void configure() throws Exception {
        // every five seconds, send a message to the "demo" queue in SQS
        from("timer:kickoff?period=5000")
            .setBody().method(this, "generateJsonString")
            .to("aws-sqs://demo?amazonSQSClient=#sqsClient&defaultVisibilityTimeout=2");
    }

In the above Camel route, we create a new message every 5 seconds and send it to an SQS queue named demo. Note that we set the defaultVisibilityTimeout to 2 seconds. This means that after a message gets dispatched to a consumer, SQS will wait about 2 seconds before considering it eligible to be dispatched to another consumer if it has not been deleted.

On the consumer side, we take advantage of a couple of Apache Camel conveniences.

Using JSON Path + Filter EIP

Camel has an excellent new component named JSON-Path. Claus Ibsen tweeted about it when he hacked it up. It allows you to do Content-Based Routing on a JSON payload very easily by using XPath-style expressions to pick out and evaluate attributes in a JSON-encoded object. So in the following example, we test an attribute named 'type' for equality with 'login' and use Camel's Filter EIP to allow only those messages that match to continue through the route:

    public class ConsumerRouteBuilder extends RouteBuilder {
        @Override
        public void configure() throws Exception {
            from("aws-sqs://demo?amazonSQSClient=#sqsClient&deleteIfFiltered=false")
                .setHeader("identity").jsonpath("$['type']")
                .filter(simple("${header.identity} == 'login'"))
                .log("We have a message! ${body}")
                .to("file:target/output?fileName=login-message-${date:now:MMddyy-HHmmss}.json");
        }
    }

To complete the functionality, we have to pay attention to a new configuration option added for the camel-sqs component: deleteIfFiltered, which controls whether to send the DeleteMessage command to the SQS queue if an exchange fails to get through a filter. If 'false', and the exchange does not make it through a Camel filter upstream in the route, don't send the DeleteMessage. By default, Camel sends the DeleteMessage command to SQS after a route has completed successfully (without an exception). Here, however, we are specifying not to send the DeleteMessage command if the message has been filtered out by Camel.
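The routes above look up the SQS client in the registry under the name sqsClient (referenced as amazonSQSClient=#sqsClient). A minimal sketch of wiring that up, assuming the AWS SDK v1 client and placeholder credentials (none of this wiring is from the original post):

    import com.amazonaws.auth.BasicAWSCredentials;
    import com.amazonaws.services.sqs.AmazonSQS;
    import com.amazonaws.services.sqs.AmazonSQSClient;
    import org.apache.camel.CamelContext;
    import org.apache.camel.impl.DefaultCamelContext;
    import org.apache.camel.impl.SimpleRegistry;

    public class SqsApp {
        public static void main(String[] args) throws Exception {
            // Bind the SQS client under the name "sqsClient" so routes can
            // reference it with amazonSQSClient=#sqsClient
            AmazonSQS sqs = new AmazonSQSClient(
                    new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY")); // placeholders

            SimpleRegistry registry = new SimpleRegistry();
            registry.put("sqsClient", sqs);

            CamelContext context = new DefaultCamelContext(registry);
            context.addRoutes(new ConsumerRouteBuilder()); // add the producer RouteBuilder similarly
            context.start();
            Thread.sleep(60_000);
            context.stop();
        }
    }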
This example demonstrates how easy it is to use Apache Camel and its new json-path component along with the camel-sqs component to produce and consume messages on Amazon SQS. Please take a look at the source code on my github repo to play with the live code and try it out yourself.
October 28, 2013
· 11,217 Views · 0 Likes
ActiveMQ Message Priorities: How it Works
There's usually a steady drip of questions on the mailing list about ActiveMQ's message-priority support, as well as good questions about observed behaviors and "what's really supported?" I hope to help you understand what happens under the covers and what levels of priority can be supported. The details could get gory for some. If you're not interested in the details, take a look at the ActiveMQ wiki for the high-level overview.

First, since ActiveMQ supports JMS 1.1, let's take a look at what the JMS spec says about support for JMSPriority:

    JMS defines a ten-level priority value, with 0 as the lowest priority and 9 as the highest. In addition, clients should consider priorities 0-4 as gradations of normal priority and priorities 5-9 as gradations of expedited priority. JMS does not require that a provider strictly implement priority ordering of messages; however, it should do its best to deliver expedited messages ahead of normal messages.

ActiveMQ observes three distinct levels of priority:

  • Default (JMSPriority == 4)
  • High (JMSPriority > 4 && <= 9)
  • Low (JMSPriority >= 0 && < 4)

If you don't specify a priority for your MessageProducer or for individual messages (see MessageProducer#send(message, deliveryMode, priority, timeToLive)), ActiveMQ's client will default to JMSPriority == 4. As a JMS consumer, you can expect FIFO ordering if the producers aren't using priority and you're not using some other form of selection criteria on the destination.

ActiveMQ also "does its best" to deliver expedited messages ahead of "normal" messages, as the spec states. The message store that your broker uses greatly contributes to how exactly that's done, but in general you can expect the broker to honor strict (0-9) priority support only for JDBC-backed message stores. For KahaDB-backed message stores, only "category priority" is supported (Low, Default, High, where priorities in each category are not always differentiated; that is, 5 and 9 are both considered "High"). However, with the right settings and messaging profile, you can affect how [strict] prioritization happens even with KahaDB, so let's take a quick look.

Enabling Message Priority

You can enable message priority on your queues with a per-destination policy entry in your activemq.xml configuration file (a Java sketch of these settings appears at the end of this post). For queueName there is wildcard support, so you can enable priority support on a hierarchy of destinations. When you enable priority support, the broker will use prioritized linked-list structures in its message cursors as well as give KahaDB a hint to use priority categories when storing messages on disk.

There are varying levels of how strict the priority ordering can get, but at worst, you can assume priorities will be upheld by category. The following factors come into play in controlling how strict the priority ordering can get when using the KahaDB store:

  • Caching enabled/disabled in the queue cursor
  • maxPageInSize, which controls how many messages are paged from the store in a batch
  • Consumer prefetch
  • Expired-message checking
  • Broker memory settings
  • Persistent/non-persistent messages

The next section presents a little detail about what happens in KahaDB to support priority, while the following sections go into how things happen in broker memory and how messages are finally dispatched to a consumer, pointing out along the way how the different factors above come into play.

KahaDB Prioritization Categories

First, we'll start with how messages are stored on disk and loaded into a destination.
KahaDB (the default message store) is a file-based message database that the broker uses to persist messages in a "log" or "journal." The broker also keeps track of which messages are in the log by keeping a separate "index" that holds information about each message (like its location in the log, which destination it's associated with, ordering, etc.). The index also has a notion of message "priority," which is implemented with three B+Tree structures, one for each priority level (see MessageOrderIndex in org.apache.activemq.store.kahadb.MessageDatabase). This implementation detail is the root of message prioritization and has implications for the rest of the broker as messages are removed from the store.

When messages are retrieved from the store, they are retrieved in batches (maxPageInSize), and messages that are in the "highPriority" BTree are retrieved first. When the high-priority messages are exhausted, the store will then offer up the default-priority and subsequently the low-priority messages. You can set maxPageInSize with a destination policy (see the sketch at the end of this post). The larger the page size, the larger the number of messages in a batch and the more messages you can see at a time per "snapshot." For each batch that's brought into memory, its messages will be strictly prioritized by the store cursor, as described below. The downside is that if your messages are large, bringing in 500 at a time could exhaust your broker memory. The default setting is 200.

Message Cursor Priority Lists

When persistent messages come into the broker from a producer, they will be stored on disk, but they will also be cached in memory, waiting to be dispatched to a consumer. This is a default setting, so there is no need to set it explicitly. The idea behind this is to be able to dispatch to fast consumers without having to retrieve messages directly from disk (if consumers become slow, the broker will auto-tune itself to stop using the cache once it's filled, so as not to OOM). The good thing is that when prioritization support is enabled for a queue, the internal lists used by the cursors support strict priority (0-9), so all of the messages currently in memory (in the cache) will be sorted properly from highest to lowest. The trick is what happens when all of the messages in the cache are lower-priority messages, and then a high-priority message comes into the broker but won't fit in the cache because it's full... in that case, the message will go directly to the store and be indexed in the "high-priority" index, but it won't be available for dispatch ahead of the lower-priority messages until it's paged into memory in the next batch.

When non-persistent messages come into the broker, they do not go to the message store. They are kept in memory as long as possible and only pushed to disk (in a temporary store) when memory has passed a defined threshold (> 70% by default). So the same behaviors described above for cached messages apply to non-persistent messages: those that are in memory will be ordered strictly (0-9), but once they get pushed to disk, only categories are observed.

If you disable the cursor's cache (see the sketch at the end of this post), you can help eliminate the above scenario, where the cache becomes full of lower-priority messages right when a high-priority message comes in (and gets stuck on disk because it cannot be paged into memory). However, doing this will slow down your throughput, because messages must be paged in from disk before being sent to consumers, which slows down dispatch.
But note: when doing this, you are more likely to see messages not following strict priority, even with the priority lists in the cursor. They will, however, follow the priority categories (High, Default, Low) properly. So, to recap: if you disable the cache, you can get higher-priority messages delivered more promptly than you can when the cache is enabled and filled with lower-priority messages. But disabling the cache, by itself, won't get you strict priority.

Disabling the cache helps get high-priority messages to consumers ahead of lower-priority messages; however, for this to work as intended (and this has bitten me), you'll also want to disable the asynchronous message-expiry check. This expiry check pages messages into memory every 30 seconds (by default), regardless of whether they're ready to be dispatched, performs a TTL (time-to-live) check on them, and discards those messages that should be expired. This sort of checking effectively brings messages into memory and stalls the normal "page in for dispatch" just enough to miss higher-priority messages. Turning off expiry checking, however, will keep expired messages in the store longer, because the only expiry check would then be done right before dispatch, so make an educated decision on this, as on all ActiveMQ settings you tinker with. But to move in the direction of strict(er) priorities, you'll want to disable this.

Lastly, consumer prefetch plays a role in achieving strict ordering. By default, prefetch is set to 1000 for queue consumers, which means they will be sent up to 1000 messages in a batch. This helps speed up the consumer, but in terms of priority handling it in essence also acts like a cache of messages (as discussed above) and can contribute to not seeing strict ordering. Category priority can also be violated: if your prefetch is filled with lower-priority messages and a new high-priority message comes into the broker, you won't see it until the next message dispatch to the consumer. So the lower the prefetch, the better the chance of seeing higher-priority messages ahead of lower ones. With a prefetch of 1, you'll always get the highest-priority message that the store cursor knows about.

Client-Side Message Priority

ActiveMQ also has priority support built right into the message client, and it's enabled by default. This means that as messages are being sent to your consumer (even before your consumer receives them, via prefetch), they will be cached on the consumer side and prioritized by default. This happens regardless of whether you're using priority support on the broker side, and it can affect the ordering you see on the consumer, so keep it in mind. To disable it, set the following configuration option on your broker URL, e.g., tcp://0.0.0.0:61616?jms.messagePrioritySupported=false

But as mentioned above, you'll want to lower the prefetch to 1 for the best chance of achieving strict ordering.

Tradeoffs

Ultimately, getting strictly ordered messages with KahaDB is possible, but there are significant tradeoffs to consider, and it won't apply to every messaging situation. Do you want optimized, fast messaging, or do you want to slow the messaging down to achieve strict(er) priority ordering? Each situation is different and should be evaluated on a case-by-case basis. In general, however, you can rely on category-level priorities.
Reordering messages across large queues while keeping high performance is problematic, and most message-queue vendors do not do it very well. ActiveMQ's priority support is strong, but another good alternative exists, as discussed on the ActiveMQ wiki page describing message priority: use message selectors and balance the consumers in such a way that high-priority messages end up getting consumed first. This approach tends to give more flexibility and control, but that's for another post.

Leave me some comments if something wasn't clear, or drop an email on the mailing list!
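Pulling the broker-side knobs discussed above together, here is a hedged Java sketch of the equivalent configuration. The setters are real PolicyEntry/BrokerService APIs corresponding to the activemq.xml attributes mentioned in this post, but the values themselves are illustrative assumptions:

    import org.apache.activemq.broker.BrokerService;
    import org.apache.activemq.broker.region.policy.PolicyEntry;
    import org.apache.activemq.broker.region.policy.PolicyMap;

    public class PriorityBrokerSketch {

        public static BrokerService createBroker() throws Exception {
            // Policy for all queues (">" is the wildcard for the whole hierarchy)
            PolicyEntry policy = new PolicyEntry();
            policy.setQueue(">");
            policy.setPrioritizedMessages(true); // enable priority support
            policy.setMaxPageSize(500);          // messages paged from the store per batch (default 200)
            policy.setUseCache(false);           // disable the cursor cache
            policy.setExpireMessagesPeriod(0);   // disable the async expiry check

            PolicyMap policyMap = new PolicyMap();
            policyMap.setDefaultEntry(policy);

            BrokerService broker = new BrokerService();
            broker.setDestinationPolicy(policyMap);
            return broker;
        }
    }

On the client side, the matching broker URL for strict(er) ordering would lower the prefetch and (optionally) disable client-side prioritization, e.g., tcp://localhost:61616?jms.prefetchPolicy.queuePrefetch=1&jms.messagePrioritySupported=false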
April 2, 2013
· 17,699 Views · 0 Likes
ActiveMQ: Understanding Memory Usage
As indicated by some recent mailing-list emails and a lot of info returned from Google, ActiveMQ's SystemUsage, and particularly the MemoryUsage functionality, has left some people confused. I'll try to explain some details around MemoryUsage that might be helpful in understanding how it works. I won't cover StoreUsage and TempUsage, as my colleagues have covered those in some depth.

There is a section of the activemq.xml configuration you can use to specify SystemUsage limits, specifically around the memory, persistent store, and temporary store that a broker can use (a Java sketch of these settings appears at the end of this post).

MemoryUsage

MemoryUsage seems to cause the most confusion, so here goes my attempt to clarify its inner workings. When a message comes into the broker, it has to go somewhere. It first gets unmarshalled off the wire into an ActiveMQ command object of type ActiveMQMessage. At this moment the object is obviously in memory, but the broker isn't keeping track of it. Which brings us to our first point: MemoryUsage is really just a counter of bytes that the broker uses to keep track of how much of the JVM's memory is being used by messages. This gives the broker a way of monitoring and ensuring we don't hit our limits (more on that in a bit); otherwise, we could take on messages without knowing where our limits are until the JVM runs out of heap space.

So, we left off with the message coming in off the wire. Once we have it, the broker will take a look at which destination (or multiple destinations) the message needs to be routed to. Once it finds the destination, it will "send" it there. The destination will increment a reference count on the message (to later know whether the message is considered "alive") and proceed to do something with it. On the first reference count, the memory usage is incremented; on the last reference count, the memory usage is decremented. If the destination is a queue, it will store the message in a persistent location and try to dispatch it to a consumer subscription. If it's a topic, it will try to dispatch it to all subscriptions. Along the way (from the initial entry into the destination to the subscription that will send the message to the consumer), the message reference count may be incremented or decremented. As long as it has a reference count greater than or equal to 1, it will be accounted for in memory. Again, MemoryUsage is just an object that counts bytes of messages, to know how much JVM memory is being used to hold messages.

So now that we have a basic understanding of what MemoryUsage is, let's take a closer look at a couple of things:

  • MemoryUsage hierarchies (what's this destination memory limit that I can configure on policy entries?)
  • Producer flow control
  • Splitting memory usage between destinations and subscriptions (producers and consumers)

Main Broker Memory, Destination Memory, Subscription Memory

When the broker loads up, it will create its own SystemUsage object (or use the one specified in the configuration). As we know, the SystemUsage object has a MemoryUsage, StoreUsage, and TempUsage associated with it. The memory component is known as the broker's main memory: a usage object that keeps track of overall (destination, subscription, etc.) memory. A destination, when it's created, will create its own SystemUsage object (which creates its own separate memory, store, and temp usage objects), but it will set its parent to be the broker's main SystemUsage object.
A destination can have its memory limits tuned individually (but not store and temp; those still delegate to the parent). A destination's memory limit is set with a destination policy entry (again, see the sketch at the end of this post). The destination usage objects can be used to control MemoryUsage more finely, but they always coordinate with the main memory for all usage counts. This functionality can be used to limit the number of messages that a destination keeps around, so that a single destination cannot starve other destinations.

For queues, the memory limit also affects the store cursor's high-water mark. A queue has different cursors for persistent and non-persistent messages. If we hit the high-water mark (a threshold of the destination's memory limit), no more messages will be cached ready for dispatch, and non-persistent messages can be purged to temp disk as necessary (if the store cursor uses a FilePendingMessageCursor; otherwise it will just use a VMPendingMessageCursor and won't purge to the temporary store).

If you don't specify a memory limit for individual destinations, the destination's SystemUsage will delegate to the parent (main SystemUsage) for all usage counts. This means it will effectively use the broker's main SystemUsage for all memory-related counts.

Consumer subscriptions, on the other hand, don't have any notion of their own SystemUsage or MemoryUsage counters. They always use the broker's main SystemUsage objects. The main thing to note here is that when a FilePendingMessageCursor is used for subscriptions (for example, for a topic subscription), messages are not swapped to disk until the cursor high-water mark (70% by default) is reached... but that means 70% of main memory will need to be reached. That could take a while, and a lot of messages could be kept in memory. And if your subscription is the one holding most of those messages, swapping to disk could take a while. As topics dispatch messages to one subscription at a time, if one subscription grinds to a halt because it's swapping its messages to disk, the rest of the subscriptions ready to receive the message will also feel the slowdown. You can set the cursor high-water mark for subscriptions of a topic lower than the default (see the sketch at the end of this post).

For those interested: when a message comes into the destination, a MemoryUsage object is set on the message so that Message.incrementReferenceCount() can increment the memory usage (on first reference). That means it's accounted for by the destination's memory usage (and also the main memory, since the destination's memory informs its parent when its usage changes), and it continues to be. The only time this changes is if the message gets swapped to disk. When it gets swapped, its reference counts are decremented, its memory usage is decremented, and it loses its MemoryUsage object once it gets to disk. So when it comes back to life, which MemoryUsage object gets associated with it, and where is it counted? If it was swapped to a queue's store, then when it is reconstituted, it will again be associated with the destination's memory usage. If it was swapped to a temp store in a subscription (as with a FilePendingMessageCursor), then when it is reconstituted, it will NOT be associated with the destination's memory usage anymore; it will be associated with the subscription's memory usage (which is main memory).

Producer Flow Control

The big win in keeping track of memory used by messages is producer flow control (PFC). PFC is enabled by default and basically slows down the producers when usage limits are reached.
This keeps the broker from exceeding its limits and running out of resources. For producers sending synchronously, or for async sends with a producer window specified, if system usage limits are reached, the broker will block that individual producer, but it will not block the connection. It will instead put the message away temporarily to wait for space to become available, and it will only send back a ProducerAck once the message has been stored. Until then, the client is expected to block its send operation (which won't block the connection itself); the ActiveMQ 5.x client libraries handle this for you. However, if an async send is performed without a producer window, or if a producer misbehaves and ignores ProducerAcks, PFC will block the entire connection when the memory limit is reached. This can result in deadlock if you have consumers sharing the same connection.

If producer flow control is turned off, then you have to be a little more careful about how you set up your system usages. When producer flow control is off, it basically means: "broker, you have to accept every message that comes in, no matter whether the consumers can keep up." This can be used to handle spikes of incoming messages to a destination. If you've ever seen memory usage in your logs severely exceed the limits you've set, you probably had PFC turned off, and that is expected behavior.

Splitting the Broker's Main Memory

I said earlier that a destination's memory uses the broker's main memory as a parent, and that subscriptions don't have their own memory counters; they just use the broker's main memory. This is true in the default case, but if you find a reason, you can further tune how memory is divided and limited. The idea is that you can partition the broker's main memory into "producer" and "consumer" parts.

The producer part is used for everything related to messages coming into the broker, so it is used by destinations. This means that when a destination creates its own MemoryUsage, it will use the producer memory as its parent, and the producer memory will use a portion of the broker's main memory. The consumer part, on the other hand, is used for everything related to dispatching messages to consumers, which means subscriptions. Instead of a subscription using the broker's main memory directly, it uses the consumer memory, which is a portion of the main memory. Ideally, the consumer portion and the producer portion together equal the broker's entire main memory.

To split the memory between producers and consumers, set the splitSystemUsageForProducersConsumers property on the main broker element. By default, this splits the broker's main memory usage into 60% for producers and 40% for consumers. To tune this further, set the producerSystemUsagePortion and consumerSystemUsagePortion properties on the main broker element (both shown in the sketch at the end of this post).

There you have it. Hopefully this sheds some light on the broker's MemoryUsage. The topic is huge, and the tuning options are plenty, so if you have specific questions, please ask on the ActiveMQ mailing list or leave a comment below.
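To make the settings discussed above concrete, here is a hedged Java sketch of the equivalent BrokerService configuration. The setters are real BrokerService/PolicyEntry APIs; the limit values are illustrative assumptions, not quoted from the 5.7 defaults:

    import org.apache.activemq.broker.BrokerService;
    import org.apache.activemq.broker.region.policy.PolicyEntry;
    import org.apache.activemq.broker.region.policy.PolicyMap;

    public class MemoryUsageSketch {

        public static BrokerService createBroker() throws Exception {
            BrokerService broker = new BrokerService();

            // SystemUsage limits (analogous to the systemUsage block in
            // activemq.xml; values assumed for illustration)
            broker.getSystemUsage().getMemoryUsage().setLimit(64L * 1024 * 1024);        // main memory: 64 MB
            broker.getSystemUsage().getStoreUsage().setLimit(100L * 1024 * 1024 * 1024); // persistent store: 100 GB
            broker.getSystemUsage().getTempUsage().setLimit(50L * 1024 * 1024 * 1024);   // temp store: 50 GB

            // Split main memory between producers (destinations) and
            // consumers (subscriptions); 60/40 is the default split
            broker.setSplitSystemUsageForProducersConsumers(true);
            broker.setProducerSystemUsagePortion(60);
            broker.setConsumerSystemUsagePortion(40);

            // Per-destination memory limit, plus a lower cursor high-water
            // mark for topic subscriptions (70% is the default)
            PolicyEntry policy = new PolicyEntry();
            policy.setTopic(">");
            policy.setMemoryLimit(5L * 1024 * 1024); // 5 MB per destination (assumed)
            policy.setCursorMemoryHighWaterMark(30);

            PolicyMap policyMap = new PolicyMap();
            policyMap.setDefaultEntry(policy);
            broker.setDestinationPolicy(policyMap);
            return broker;
        }
    }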
December 10, 2012
· 24,607 Views · 0 Likes
What is ActiveMQ?
Although the ActiveMQ website already gives a pithy, to-the-point explanation of ActiveMQ, I would like to add some more context to its definition. From the ActiveMQ project's website: "ActiveMQ is an open sourced implementation of JMS 1.1 as part of the J2EE 1.4 specification."

Here's my take: ActiveMQ is open-source messaging software that can serve as the backbone for an architecture of distributed applications built upon messaging. The creators of ActiveMQ were driven to create this open-source project for two main reasons:

  • The existing solutions available at the time were proprietary and very expensive.
  • Developers at the Apache Software Foundation were working on a fully J2EE-compliant application server (Geronimo), and they needed a JMS solution with a license compatible with Apache's licensing.

Since its inception, ActiveMQ has turned into a strong competitor to the commercial alternatives, such as WebSphereMQ, TIBCO EMS, and SonicMQ, and it is deployed in production at some of the top companies in industries ranging from financial services to retail.

Using messaging as an integration or communication style leads to many benefits, such as:

  • Allowing applications built with different languages and on different operating systems to integrate with each other
  • Location transparency: client applications don't need to know where the service applications are located
  • Reliable communication: the producers/consumers of messages don't have to be available at the same time, and certain segments along the route of the message can go down and come back up without impacting the message getting to the service/consumer
  • Scaling: you can scale horizontally by adding more services to handle the messages if too many messages are arriving
  • Asynchronous communication: a client can fire a message and continue with other processing instead of blocking until the service has sent a response; it can handle the response message when it's ready
  • Reduced coupling: the assumptions made by the clients and services are greatly reduced as a result of the previous five benefits. A service can change details about itself, including its location, protocol, and availability, without affecting or disrupting the client.

Please see Gregor Hohpe's description of messaging, or the book he and Bobby Woolf wrote about messaging-based enterprise application integration. There are other advantages as well (hopefully someone can add other benefits or drawbacks in the comments), and ActiveMQ is free, open-source software that can facilitate delivering those advantages; it has proven to be highly reliable and scalable in production environments.
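To make the asynchronous, fire-and-forget style concrete, here is a minimal, hedged sketch of sending and receiving a message with ActiveMQ's JMS 1.1 client (the broker URL and queue name are assumptions for illustration):

    import javax.jms.*;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class HelloActiveMQ {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Destination queue = session.createQueue("demo.queue");

            // Fire-and-forget send: the producer does not wait for a consumer
            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage("hello, messaging"));

            // A consumer (possibly in another process, at another time) picks it up
            MessageConsumer consumer = session.createConsumer(queue);
            TextMessage received = (TextMessage) consumer.receive(5000);
            System.out.println("received: " + received.getText());

            connection.close();
        }
    }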
July 21, 2012
· 27,547 Views · 8 Likes

Refcards

Refcard #306

Getting Started With Istio


Refcard #170

Camel Essential Components


Trend Reports

Trend Report

Software Integration

Seamless communication: that, among other consequential advantages, is the ultimate goal when integrating your software. And today, integrating modern software means fusing various applications and/or systems, many times across distributed environments, with the common goal of unifying isolated data. This effort often signifies the transition of legacy applications to cloud-based systems and messaging infrastructure via microservices and REST APIs. So what's next? Where is the path to seamless communication and nuanced architecture taking us? Dive into our 2023 Software Integration Trend Report and fill the gaps among modern integration practices by exploring trends in APIs, microservices, and cloud-based systems and migrations. You have to integrate to innovate!


Trend Report

Microservices and Containerization

According to our 2022 Microservices survey, 93% of our developer respondents work for an organization that runs microservices. This number is up from 74% when we asked this question in our 2021 Containers survey. With most organizations running microservices and leveraging containers, we no longer have to discuss the need to adopt these practices, but rather how to scale them to benefit organizations and development teams. So where do adoption and scaling practices of microservices and containers go from here? In DZone's 2022 Trend Report, Microservices and Containerization, our research and expert contributors dive into various cloud architecture practices, microservices orchestration techniques, security, and advice on design principles. The goal of this Trend Report is to explore the current state of microservices and containerized environments to help developers face the challenges of complex architectural patterns.


Trend Report

Migrating to Microservices

DZone Trend Reports expand on the tech content that our readers say is most helpful, including thought leadership and in-depth, original DZone research. The Migrating to Microservices Trend Report features expert predictions on the next phase of microservices adoption in the enterprise, as well as insights into some challenges and opportunities presented by current usage patterns.


Comments

Guidance for Building a Control Plane for Envoy, Part 1

Feb 28, 2019 · Jordan Baker

BTW, parts 2 and 3 of this series will be coming out at the same time, hopefully early next week. Stay tuned!

Guidance for Building a Control Plane for Envoy, Part 1

Feb 28, 2019 · Jordan Baker

Thanks for the comment. Yah, the Gloo control plane and data plane can be deployed outside of Kubernetes. We are currently finishing the documentation needed to walk folks through setting up Gloo against Consul and other backend storage. Here's a quick preview for doing so with docker-compose. Fargate would be supported in a similar fashion: https://gloo.solo.io/installation/docker-compose/

Gmock: Mocking Framework for Groovy

Aug 27, 2013 · Mr B Loid

Thanks Martin for this post.

For those interested, I've also converted the test code to maven:

https://github.com/christian-posta/rw-concurrency

Definitely useful for evaluating different locking strategies!

What is an Architect?

Dec 05, 2011 · Alvin Ashcraft

Steve, your principle that "the architect is the conduit between the business problem and the technical solution" is still kind of vague to me. You could apply the same principle to a software developer whose job it is to understand the business domain and the problem being solved, and to develop a technical solution.
Useful model for an object-oriented design

Feb 01, 2010 · Christian Posta

Should be fixed now.
PHP: Calculate Pi

Jul 06, 2009 · Chao Xu

"I honestly believe that the biggest problem that most developers have is that they take action without considering the reasons or implications."

I agree, although I would take it a step further. Most developers take action without considering the reasons or implications because 1) they don't want to invest the time to learn, and 2) it's easier not to. A lot of developers show up to work for 8 hours and collect a paycheck. Pushing themselves to better their skills and improve their understanding of software construction seems not to be on their to-do list.

