It's been more than 20 years since Spring Framework appeared in the software development landscape and 10 since Spring Boot version 1.0 was released. By now, nobody should have any doubt that Spring has created a unique style through which developers are freed from repetitive tasks and left to focus on business value delivery. As years passed, Spring's technical depth has continually increased, covering a wide variety of development areas and technologies. On the other hand, its technical breadth has been continually expanded as more focused solutions have been experimented with, proofs of concept created, and ultimately promoted under the projects' umbrella (towards the technical depth). One such example is the new Spring AI project which, according to its reference documentation, aims to ease development when a generative artificial intelligence layer is to be incorporated into applications. Once again, developers are freed from repetitive tasks and offered simple interfaces for direct interaction with the pre-trained models that incorporate the actual processing algorithms. By interacting with generative pre-trained transformers (GPTs) directly or via Spring AI programmatically, users (developers) do not need to possess extensive machine learning knowledge (although it would be useful). As an engineer, I strongly believe that even if such developer tools can be used rather easily and rapidly to produce results, it is advisable to temper ourselves, switch to a watchful mode, and try to gain a decent understanding of the base concepts first. Moreover, by following this path, the outcome might be even more useful.

Purpose

This article shows how Spring AI can be integrated into a Spring Boot application to carry out a programmatic interaction with Open AI. Prompt design in general (prompt engineering) is a discipline in its own right and is not the focus here; consequently, the prompts used during experimentation are quite didactic, without much real-world applicability. The focus is on the communication interface, that is, the Spring AI API.

Before the Implementation

First and foremost, one shall clarify the rationale for incorporating and utilizing a GPT solution, beyond the general desire to deliver with greater quality, in less time, and at lower cost. Generative AI is said to be good at handling a great deal of time-consuming tasks more quickly and efficiently, and at outputting the results. Moreover, if these results are further validated by experienced and wise humans, the chances of obtaining something useful increase. Fortunately, people are still part of the scenery. Next, one shall resist the temptation to jump straight into the implementation and at least dedicate some time to become familiar with the general concepts. An in-depth exploration of generative AI concepts is way beyond the scope of this article. Nevertheless, the "main actors" that appear in the interaction are briefly outlined below. 
The Stage – Generative AI is part of machine learning, which is part of artificial intelligence
Input – The provided data (incoming)
Output – The computed results (outgoing)
Large Language Model (LLM) – The fine-tuned algorithm that, based on the interpreted input, produces the output
Prompt – The interface through which the input is passed to the model
Prompt Template – A component that allows constructing structured, parameterized prompts
Tokens – The components the algorithm internally translates the input into, then uses to compile the results and ultimately construct the output from
Model's context window – The threshold by which the model limits the number of tokens per call (usually, the more tokens are used, the more expensive the operation is)

Finally, an implementation may be started, but as it progresses, it is advisable to revisit and refine the first two steps.

Prompts

In this exercise, we ask for the following:

Plain Text
Write {count = three} reasons why people in {location = Romania} should consider a {job = software architect} job. These reasons need to be short, so they fit on a poster. For instance, "{job} jobs are rewarding."

This basically represents the prompt. As advised, a clear topic, a clear meaning of the task, and additional helpful pieces of information should be provided as part of the prompts in order to increase the accuracy of the results. The prompt contains three parameters, which allow coverage for a wide range of jobs in various locations:

count – The number of reasons aimed for as part of the output
job – The domain, the job of interest
location – The country, town, region, etc. where the job applicants reside

Proof of Concept

In this post, the simple proof of concept aims at the following:

Integrate Spring AI in a Spring Boot application and use it.
Allow a client to communicate with Open AI via the application.
The client issues a parametrized HTTP request to the application.
The application uses a prompt to create the input, sends it to Open AI, and retrieves the output.
The application sends the response to the client.

Setup

Java 21
Maven 3.9.2
Spring Boot – v. 3.2.2
Spring AI – v. 0.8.0-SNAPSHOT (still in development, experimental)

Implementation

Spring AI Integration

Normally, this is a basic step not necessarily worth mentioning. Nevertheless, since Spring AI is currently released as a snapshot, in order to be able to integrate the Open AI auto-configuration dependency, one shall add a reference to the Spring Milestone/Snapshot repositories.

XML
<repositories>
    <repository>
        <id>spring-milestones</id>
        <name>Spring Milestones</name>
        <url>https://repo.spring.io/milestone</url>
        <snapshots>
            <enabled>false</enabled>
        </snapshots>
    </repository>
    <repository>
        <id>spring-snapshots</id>
        <name>Spring Snapshots</name>
        <url>https://repo.spring.io/snapshot</url>
        <releases>
            <enabled>false</enabled>
        </releases>
    </repository>
</repositories>

The next step is to add the spring-ai-openai-spring-boot-starter Maven dependency.

XML
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-openai-spring-boot-starter</artifactId>
    <version>0.8.0-SNAPSHOT</version>
</dependency>

Open AI ChatClient is now part of the application classpath. It is the component used to send the input to Open AI and retrieve the output. In order to be able to connect to the AI Model, the spring.ai.openai.api-key property needs to be set up in the application.properties file. 
Properties files spring.ai.openai.api-key = api-key-value Its value represents a valid API Key of the user on behalf of which the communication is made. By accessing the Open AI Platform, one can either sign up or sign in and generate one. Client: Spring Boot Application Communication The first part of the proof of concept is the communication between a client application (e.g., browser, cURL, etc.) and the application developed. This is done via a REST controller, accessible via an HTTP GET request. The URL is /job-reasons together with the three parameters previously outlined when the prompt was defined, which conducts to the following form: Plain Text /job-reasons?count={count}&job={job}&location={location} And the corresponding controller: Java @RestController public class OpenAiController { @GetMapping("/job-reasons") public ResponseEntity<String> jobReasons(@RequestParam(value = "count", required = false, defaultValue = "3") int count, @RequestParam("job") String job, @RequestParam("location") String location) { return ResponseEntity.ok().build(); } } Since the response from Open AI is going to be a String, the controller returns a ResponseEntity that encapsulates a String. If we run the application and issue a request, currently nothing is returned as part of the response body. Client: Open AI Communication Spring AI currently focuses on AI Models that process language and produce language or numbers. Examples of Open AI models in the former category are GPT4-openai or GPT3.5-openai. For fulfilling an interaction with these AI Models, which actually designate Open AI algorithms, Spring AI provides a uniform interface. ChatClient interface currently supports text input and output and has a simple contract. Java @FunctionalInterface public interface ChatClient extends ModelClient<Prompt, ChatResponse> { default String call(String message) { Prompt prompt = new Prompt(new UserMessage(message)); return call(prompt).getResult().getOutput().getContent(); } ChatResponse call(Prompt prompt); } The actual method of the functional interface is the one usually used. In the case of our proof of concept, this is exactly what is needed: a way of calling Open AI and sending the aimed parametrized Prompt as a parameter. The following OpenAiService is defined where an instance of ChatClient is injected. Java @Service public class OpenAiService { private final ChatClient client; public OpenAiService(OpenAiChatClient aiClient) { this.client = aiClient; } public String jobReasons(int count, String domain, String location) { final String promptText = """ Write {count} reasons why people in {location} should consider a {job} job. These reasons need to be short, so they fit on a poster. For instance, "{job} jobs are rewarding." """; final PromptTemplate promptTemplate = new PromptTemplate(promptText); promptTemplate.add("count", count); promptTemplate.add("job", domain); promptTemplate.add("location", location); ChatResponse response = client.call(promptTemplate.create()); return response.getResult().getOutput().getContent(); } } With the application running, if the following request is performed, from the browser: Plain Text http://localhost:8080/gen-ai/job-reasons?count=3&job=software%20architect&location=Romania Then the below result is retrieved: Lucrative career: Software architect jobs offer competitive salaries and excellent growth opportunities, ensuring financial stability and success in Romania. 
In-demand profession: As the demand for technology continues to grow, software architects are highly sought after in Romania and worldwide, providing abundant job prospects and job security. Creative problem-solving: Software architects play a crucial role in designing and developing innovative software solutions, allowing them to unleash their creativity and make a significant impact on various industries. This is exactly what it was intended – an easy interface through which the Open AI GPT model can be asked to write a couple of reasons why a certain job in a certain location is appealing. Adjustments and Observations The simple proof of concept developed so far mainly uses the default configurations available. The ChatClient instance may be configured according to the desired needs via various properties. As this is beyond the scope of this writing, only two are exemplified here. spring.ai.openai.chat.options.model designates the AI Model to use. By default, it is "gpt-35-turbo," but "gpt-4" and "gpt-4-32k" designate the latest versions. Although available, one may not be able to access these using a pay-as-you-go plan, but there are additional pieces of information available on the Open AI website to accommodate it. Another property worth mentioning is spring.ai.openai.chat.options.temperature. According to the reference documentation, the sampling temperature controls the “creativity of the responses." It is said that higher values make the output “more random," while lower ones are “more focused and deterministic." The default value is 0.8, if we decrease it to 0.3, restart the application, and ask again with the same request parameters, the below result is retrieved. Lucrative career opportunities: Software architect jobs in Romania offer competitive salaries and excellent growth prospects, making it an attractive career choice for individuals seeking financial stability and professional advancement. Challenging and intellectually stimulating work: As a software architect, you will be responsible for designing and implementing complex software systems, solving intricate technical problems, and collaborating with talented teams. This role offers continuous learning opportunities and the chance to work on cutting-edge technologies. High demand and job security: With the increasing reliance on technology and digital transformation across industries, the demand for skilled software architects is on the rise. Choosing a software architect job in Romania ensures job security and a wide range of employment options, both locally and internationally. It is visible that the output is way more descriptive in this case. One last consideration is related to the structure of the output obtained. It would be convenient to have the ability to map the actual payload received to a Java object (class or record, for instance). As of now, the representation is textual and so is the implementation. Output parsers may achieve this, similarly to Spring JDBC’s mapping structures. In this proof of concept, a BeanOutputParser is used, which allows deserializing the result directly in a Java record as below: Java public record JobReasons(String job, String location, List<String> reasons) { } This is done by taking the {format} as part of the prompt text and providing it as an instruction to the AI Model. 
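Before looking at the adjusted service method, it is worth noting that the options discussed above are plain configuration entries. A minimal application.properties sketch with illustrative values follows (the property names are the ones mentioned earlier; the key and the values are placeholders, not a recommendation):

Properties files
spring.ai.openai.api-key = api-key-value
spring.ai.openai.chat.options.model = gpt-4
spring.ai.openai.chat.options.temperature = 0.3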
The OpenAiService method becomes: Java public JobReasons formattedJobReasons(int count, String job, String location) { final String promptText = """ Write {count} reasons why people in {location} should consider a {job} job. These reasons need to be short, so they fit on a poster. For instance, "{job} jobs are rewarding." {format} """; BeanOutputParser<JobReasons> outputParser = new BeanOutputParser<>(JobReasons.class); final PromptTemplate promptTemplate = new PromptTemplate(promptText); promptTemplate.add("count", count); promptTemplate.add("job", job); promptTemplate.add("location", location); promptTemplate.add("format", outputParser.getFormat()); promptTemplate.setOutputParser(outputParser); final Prompt prompt = promptTemplate.create(); ChatResponse response = client.call(prompt); return outputParser.parse(response.getResult().getOutput().getContent()); } When invoking again, the output is as below: JSON { "job":"software architect", "location":"Romania", "reasons":[ "High demand", "Competitive salary", "Opportunities for growth" ] } The format is the expected one, but the reasons appear less explanatory, which means additional adjustments are required in order to achieve better usability. From a proof of concept point of view though, this is acceptable, as the focus was on the form. Conclusions Prompt design is an important part of the task – the better articulated prompts are, the better the input and the higher the output quality. Using Spring AI to integrate with various chat models is quite straightforward – this post showcased an Open AI integration. Nevertheless, in the case of Gen AI in general, just as in the case of almost any technology, it is very important to get familiar at least with the general concepts first. Then, to try to understand the magic behind the way the communication is carried out and only afterward, start writing “production” code. Last but not least, it is advisable to further explore the Spring AI API to understand the implementations and remain up-to-date as it evolves and improves. The code is available here. References Spring AI Reference
Over the past few years, AI has steadily worked its way into almost every part of the global economy. Email programs use it to correct grammar and spelling on the fly and suggest entire sentences to round out each message. Digital assistants use it to provide a human-like conversational interface for users. You encounter it when you reach out to any business's contact center. You can even have your phone use AI to wait on hold for you when you exhaust the automated support options and need a live agent instead. It's no wonder, then, that AI is also already present in the average software developer's toolkit. Today, there are countless AI coding assistants available that promise to lighten developers' loads. According to their creators, the tools should help software developers and teams work faster and produce more predictable product outcomes. However, they do something less desirable, too—introduce security flaws. It's an issue that software development firms and solo coders are only beginning to come to grips with. Right now, it seems there's a binary choice. Either use AI coding assistants and accept the consequences, or forego them and risk falling behind the developers that do use them. Right now, surveys indicate that about 96% of developers have already chosen the former. But what if there was another option? What if you could mitigate the risks of using AI coding assistants without harming your output? Here's a simple framework developers can use to pull that off. Evaluate Your AI Tools Carefully The first way to mitigate the risks that come with AI coding assistants is to thoroughly investigate any tool you're considering before you use it in production. The best way to do this is to use the tool in parallel with a few of your development projects to see how the results stack up to your human-created code. This will provide you an opportunity to assess the tool's strengths and weaknesses and to look for any persistent output problems that might make it a non-starter for your specific development needs. This simple vetting procedure should let you choose an AI coding assistant that's suited to the tasks you plan to give it. It should also alert you to any significant secure coding shortcomings associated with the tool before it can affect a live project. If those shortcomings are insignificant, you can use what you learn to clean up any code that comes from the tool. If they're significant, you can move on to evaluating another tool instead. Beef up Your Code Review and Validation Processes Next, it's essential to beef up your code review and validation processes before you begin using an AI coding assistant in production. This should include multiple static code analyses passed on all the code you generate, especially any that contain AI-generated code. This should help you catch the majority of inadvertently introduced security vulnerabilities. It should also give your human developers a chance to read the AI-generated code, understand it, and point out any obvious issues with it before moving forward. Your code review and validation processes should also include dynamic testing as soon as each project reaches the point that it's feasible. This will help you evaluate the security of your code as it exists in the real world, including any user interactions that could introduce additional vulnerabilities. Keep Your AI Tools Up to Date Finally, you should create a process that ensures you're always using the latest version of your chosen AI tools. 
The developers of AI coding assistants are always making changes aimed at increasing the reliability and security of the code their tools generate. It's in their best interest to do so, since any flawed code traced back to their tool could lead to developers dropping it in favor of a competitor. However, you shouldn't blindly update your toolset, either. It's important to keep track of what any updates to your AI coding assistant actually change. You should never assume that an updated version of the tool you're using will still be suited to your specific coding needs. So, if you spot any changes that might call for a reevaluation of the tool, that's exactly what you should do. If you can't afford to be without your chosen AI coding assistant for long enough to repeat the vetting process you started with, continue using the older version. However, you should have the new version perform the same coding tasks and compare the output. This should give you a decent idea of how an update's changes will affect your final software products.

The Bottom Line

Realistically, AI code generation isn't going away. Instead, it likely won't be long before it's an integral part of every development team's workflow. However, we've not yet reached the point where human coders should blindly trust the work product of their AI counterparts. By taking a cautious approach and integrating AI tools thoughtfully, developers should be able to reap the rewards of these early AI tools while insulating themselves from their very real shortcomings.
AngularAndSpringWithMaps is a Spring Boot project that shows company properties on a Bing map and can be run on the JDK or as a GraalVM native image. ReactAndGo is a Golang project that shows the cheapest gas stations in your post code area and is compiled into a binary. Both languages are garbage collected, and the AngularAndSpringWithMaps project uses the G1 collector. The complexity of both projects is comparable. Both serve a frontend, provide REST data endpoints for the frontend, and implement services for the logic with repositories for the database access. How to build the GraalVM native image for the AngularAndSpringWithMaps project is explained in this article.

What To Compare

On the performance side, Golang and Java on the JVM or as a native image are fast and efficient enough for the vast majority of use cases. Further performance fine-tuning needs good profiling and specific improvements, and often, the improvements are related to the database. The two interesting aspects are:

Memory requirements
Startup time (can include warmup)

The memory requirements are important because the available memory limit on the Kubernetes node or deployment server is usually reached earlier than the CPU limit. If you use less memory, you can deploy more Docker images or Spring Boot applications on the resource. The startup time is important if you have periods with little load and periods with high load for your application. The shorter the startup time, the more aggressively you can scale the number of deployed applications/images up or down.

Memory Requirements

420 MB – AngularAndSpringWithMaps, JVM 21
280 MB – AngularAndSpringWithMaps, GraalVM native image
128-260 MB – ReactAndGo binary

The GraalVM native image uses significantly less memory than the JVM jar. That makes the native image more resource-efficient. The native image binary is 240 MB in size, which leaves roughly 40 MB of working memory. The ReactAndGo binary is 29 MB in size and uses 128-260 MB of memory depending on the size of the updates it has to process. That means that if its use case needed only 40 MB of working memory, like the GraalVM native image, around 70 MB would be enough to run it. That makes the Go binary much more resource-efficient.

Startup Time

4300 ms – AngularAndSpringWithMaps, JVM 21
220 ms – AngularAndSpringWithMaps, GraalVM native image
100 ms – ReactAndGo binary

The GraalVM native image startup time is impressive and enables scale-to-zero configurations that start the application on demand and scale down to zero without load. The JVM start time requires one running instance as a minimum. The ReactAndGo binary startup time is the fastest and also enables scale to zero.

Conclusion

The GraalVM native image and the Go binary are the most efficient in this comparison. Due to their lower memory requirements, the CPU resources can be used more efficiently. The fast startup times enable scale-to-zero configurations that can save money in on-demand environments. The winner is the Go project. The result is that if efficient use of hardware resources is the most important factor for you, Go is the best choice. If your developers are most familiar with Java, then the use of GraalVM native images can improve the efficient use of hardware resources. Creating GraalVM native images needs more effort and developer time. Some of that effort can be automated, but some of it would be hard to automate. Then the question becomes: Is the extra developer time worth the saved hardware resources?
Starting a new project is always a mix of excitement and tough decisions, especially when you're stitching together familiar tools like Google Docs with powerhouses like GitHub Pages. This is the story of building gdocweb, a tool that I hoped would make life easier for many. I'll be diving into why I chose Java 21 and Spring Boot 3.x, ditched GraalVM after some trial and error, and why a simple VPS with Docker Compose won out over more complex options. I also went with Postgres and JPA, but steered clear of migration tools like Flyway. It's a no-frills, honest recount of the choices, changes, and the occasional "aha" moments of an engineer trying to make something useful and efficient. Introducing gdocweb Before we dive into the technical intricacies and the decision-making labyrinth of building gdocweb, let's set the stage by understanding what gdocweb is and the problem it solves. In simple terms, gdocweb connects Google Docs to GitHub Pages. It's a simple web builder that generates free sites with all the raw power of GitHub behind it, and all the usability of Google Docs. I decided to build gdocweb to eliminate the complexities typically associated with website building and documentation. It's for users who seek a hassle-free way to publish and maintain their content, but also for savvy users who enjoy the power of GitHub but don't want to deal with markdown nuances. Here's a short video explaining gdocweb for the general public: Java 21 and Spring Boot 3.x: Innovation and Maturity When you're spearheading a project on your own like I was with gdocweb, you have the liberty to make technology choices that might be more challenging in a team or corporate environment. This freedom led me to choose Java 21 and Spring Boot 3.x for this project. The decision to go with the current Long-Term Support (LTS) version of Java was a no-brainer. It's always tempting to use the latest and greatest, but with Java 21, it wasn't just about using something new; it was about leveraging a platform that has stood the test of time and has evolved to meet modern development needs. Virtual threads were a major part of the decision to go with Java 21. Cost is a huge factor in such projects, and squeezing the maximum throughput from a server is crucial in these situations. Java, being a mature technology, offered a sense of reliability even in its latest iteration. Similarly, Spring Boot 3.x, despite being a newer version, comes from a lineage of robust and well-tested frameworks. It's a conservative choice in the sense of its long-standing reputation, but innovative in its features and capabilities. However, this decision wasn't without its hiccups. During the process of integrating Google API access, I had to go through a security CASA tier 2 review. Here's where the choice of Java 21 threw a curveball. The review tool was tailored for JDK 11, and although it worked with JDK 21, it still added a bit of stress to the process. It was a reminder that when you're working with cutting-edge versions of technologies, there can be unexpected bumps along the road. Even if they are as mature as Java. The transition to Spring Boot 3.x had its own set of challenges, particularly with the changes in security configurations. These modifications rendered most online samples and guides obsolete, breaking a lot of what I had initially set up. It was a learning curve, adjusting to these changes and figuring out the new way of doing things. 
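For context, the "new way of doing things" in Spring Boot 3.x means, among other things, declaring a SecurityFilterChain bean with the Spring Security lambda DSL instead of extending the removed WebSecurityConfigurerAdapter. Below is a minimal sketch of that style; the routes and the use of OAuth2 login are illustrative assumptions, not gdocweb's actual configuration:

Java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
@EnableWebSecurity
public class SecurityConfig {

    @Bean
    SecurityFilterChain securityFilterChain(HttpSecurity http) throws Exception {
        http
            // Authorization rules are now expressed through the lambda DSL
            .authorizeHttpRequests(auth -> auth
                .requestMatchers("/", "/public/**").permitAll() // hypothetical public routes
                .anyRequest().authenticated())
            // Assumption: Google sign-in via OAuth2 login; adjust to the actual auth scheme
            .oauth2Login(Customizer.withDefaults());
        return http.build();
    }
}

Samples written for Spring Boot 2.x and the WebSecurityConfigurerAdapter style no longer compile against this model, which is what made so many older guides obsolete.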
However, most other aspects were relatively simple, and the best compliment I can give to Spring Boot 3.x is that it's very similar to Spring Boot 2.x.

GraalVM Native Image for Efficiency

My interest in GraalVM native image for gdocweb was primarily driven by its promise of reduced memory usage and faster startup times. The idea was that with lower memory requirements, I could run more server instances, leading to better scalability and resilience. Faster startup times also meant quicker recovery from failures, a crucial aspect of maintaining a reliable service.

Implementing GraalVM

Getting GraalVM to work was nontrivial but not too hard. After some trial and error, I managed to set up a Continuous Integration (CI) process that built the GraalVM project and uploaded it to Docker. This was particularly necessary because I'm using an M2 Mac, while my server runs on Intel architecture. This setup meant I had to deal with an 18-minute wait time for each update – a significant delay for any development cycle.

Facing the Production Challenges

Things started getting rocky when I began testing the project in the production and staging environments. It became a "whack-a-mole" scenario with missing library code from the native image. Each issue fixed seemed only to lead to another, and the 18-minute cycle for each update added to the frustration. The final straw was realizing the incompatibility issues with Google API libraries. Solving these issues would require extensive testing on a GraalVM build, which was already burdened by slow build times. For a small project like mine, this became a bottleneck too cumbersome to justify the benefits.

The Decision To Move On

While GraalVM seemed ideal on paper for saving resources, the reality was different. It consumed my limited GitHub Actions minutes and required extensive testing, which was impractical for a project of this scale. Ultimately, I decided to abandon the GraalVM route. If you do choose to use GraalVM, this is the GitHub Actions script I used; I hope it can help you on your journey:

YAML
name: Java CI with Maven
on:
  push:
    branches: [ "master" ]
  pull_request:
    branches: [ "master" ]
jobs:
  build:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:latest
        env:
          POSTGRES_PASSWORD: yourpassword
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v3
      - uses: graalvm/setup-graalvm@v1
        with:
          java-version: '21'
          version: '22.3.2'
          distribution: 'graalvm'
          cache: 'maven'
          components: 'native-image'
          native-image-job-reports: 'true'
          github-token: ${{ secrets.GITHUB_TOKEN }}
      - name: Wait for PostgreSQL
        run: sleep 10
      - name: Build with Maven
        run: mvn -Pnative native:compile
      - name: Build Docker Image
        run: docker build -t autosite:latest .
      - name: Log in to Docker Hub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Push Docker Image
        run: |
          docker tag autosite:latest mydockeruser/autosite:latest
          docker push mydockeruser/autosite:latest

This configuration was a crucial part of my attempt to leverage GraalVM's benefits, but as the project evolved, so did my understanding of the trade-offs between idealism in technology choice and practicality in deployment and maintenance.

Deployment: VPS and Docker Compose

When it came to deploying gdocweb, I had a few paths to consider. 
Each option came with its pros and cons, but after careful evaluation, I settled on using a Virtual Private Server (VPS) with Docker Compose. Here’s a breakdown of my thought process and why this choice made the most sense for my needs. Avoiding Raw VPS Deployment I immediately ruled out the straightforward approach of installing the application directly on a VPS. This method fell short in terms of migration ease, testing, and flexibility. Containers offer a more streamlined and efficient approach. They provide a level of abstraction and consistency across different environments, which is invaluable. Steering Clear of Managed Containers and Orchestration Managed containers and orchestration (e.g., k8s) were another option, and while they offer scalability and ease of management, they introduce complexity in other areas. For instance, when using a managed Kubernetes service it would often mean relying on cloud storage for databases, which can get expensive quickly. My philosophy was to focus on cost before horizontal scale, especially in the early stages of a project. If we don't optimize and stabilize when we're small, the problems will only get worse as we grow. Scaling should ideally start with vertical scaling before moving to horizontal. Vertical scaling means more CPU/RAM while horizontal adds additional machines. Vertical scaling is not only more cost-effective but also crucial from a technical standpoint. It makes it easier to identify performance bottlenecks using simple profiling tools. In contrast, horizontal scaling can often mask these issues by adding more instances, which could lead to higher costs and hidden performance problems. The Choice of Docker Compose Docker Compose emerged as the clear winner for several reasons. It allowed me to seamlessly integrate the database and the application container. Their communication is contained within a closed network, adding an extra layer of security with no externally open ports. Moreover, the cost is fixed and predictable, with no surprises based on usage. This setup offered me the flexibility and ease of containerization without the overhead and complexity of more extensive container orchestration systems. It was the perfect middle ground, providing the necessary features without overcomplicating the deployment process. By using Docker Compose, I maintained control over the environment and kept the deployment process straightforward and manageable. This decision aligned perfectly with the overall ethos of gdocweb – simplicity, efficiency, and practicality. Front-End: Thymeleaf Over Modern Alternatives The front-end development of gdocweb presented a bit of a challenge for me. In an era where React and similar frameworks are dominating the scene, opting for Thymeleaf might seem like a step back. However, this decision was based on practical considerations and a clear understanding of the project's requirements and my strengths as a developer. React: Modern but Not a One-Size-Fits-All Solution React is undeniably modern and powerful, but it comes with its own set of complexities. My experience with React is akin to many developers dabbling outside their comfort zone - functional but not exactly proficient. I've seen the kind of perplexed expressions from seasoned React developers when they look at my code, much like the ones I have when I'm reading complex Java code written by others. 
React’s learning curve, coupled with its slower performance in certain scenarios and the risk of not achieving an aesthetically pleasing result without deep expertise, made me reconsider its suitability for gdocweb. The Appeal of Thymeleaf Thymeleaf, on the other hand, offered a more straightforward approach, aligning well with the project's ethos of simplicity and efficiency. Its HTML-based interfaces, while perhaps seen as antiquated next to frameworks like React, come with substantial advantages: Simplicity in page flow: Thymeleaf provides an easy-to-understand and easy-to-debug flow, making it a practical choice for a project like this. Performance and speed: It’s known for its fast performance, which is a significant factor in providing a good user experience. No need for NPM: Thymeleaf eliminates the need for additional package management, reducing complexity and potential vulnerabilities. Lower risk of client-side vulnerabilities: The server-side nature of Thymeleaf inherently reduces the risk of client-side issues. Considering HTMX for Dynamic Functionality The idea of incorporating HTMX for some dynamic behavior in the front end did cross my mind. HTMX has been on my radar for a while, promising to add dynamic functionalities easily. However, I had to ask myself if it was truly necessary for a tool like gdocweb, which is essentially a straightforward wizard. My conclusion was that opting for HTMX might be more of Resume Driven Design (RDD) on my part, rather than a technical necessity. In summary, the choice of Thymeleaf was a blend of practicality, familiarity, and efficiency. It allowed me to build a fast, simple, and effective front-end without the overhead and complexity of more modern frameworks, which, while powerful, weren't necessary for the scope of this project. Final Word The key takeaway in this post is the importance of practicality in technology choices. When we're building our own projects it's much easier to experiment with newer technologies, but this is a slippery slope. We need to keep our feet grounded in familiar territories while experimenting. My experience with GraalVM highlights the importance of aligning technology choices with project needs and being flexible in adapting to challenges. It’s a reminder that in technology, sometimes the simpler, tried-and-tested paths can be the most effective.
Last year, I wrote the article, "Upgrade Guide To Spring Boot 3.0 for Spring Data JPA and Querydsl," for the Spring Boot 3.0.x upgrade. Now, we have Spring Boot 3.2. Let's see two issues you might deal with when upgrading to Spring Boot 3.2.2. The technologies used in the SAT project are: Spring Boot 3.2.2 and Spring Framework 6.1.3 Hibernate + JPA model generator 6.4.1. Final Spring Data JPA 3.2.2 Querydsl 5.0.0. Changes All the changes in Spring Boot 3.2 are described in Spring Boot 3.2 Release Notes and What's New in Version 6.1 for Spring Framework 6.1. The latest changes in Spring Boot 3.2.2 can be found on GitHub. Issues Found A different treatment of Hibernate dependencies due to the changed hibernate-jpamodelgen behavior for annotation processors Unpaged class was redesigned. Let's start with the Hibernate dependencies first. Integrating Static Metamodel Generation The biggest change comes from the hibernate-jpamodelgen dependency, which is generating a static metamodel. In Hibernate 6.3, the treatment of dependencies was changed in order to mitigate transitive dependencies. Spring Boot 3.2.0 bumped up the hibernate-jpamodelgen dependency to the 6.3 version (see Dependency Upgrades). Unfortunately, the new version causes compilation errors (see below). Note: Spring Boot 3.2.2 used here already uses Hibernate 6.4 with the same behavior. Compilation Error With this change, the compilation of our project (Maven build) with Spring Boot 3.2.2 fails on the error like this: Plain Text [INFO] ------------------------------------------------------------------------ [INFO] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] Total time: 3.049 s [INFO] Finished at: 2024-01-05T08:43:10+01:00 [INFO] ------------------------------------------------------------------------ [ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.11.0:compile (default-compile) on project sat-jpa: Compilation failure: Compilation failure: [ERROR] on the class path. A future release of javac may disable annotation processing [ERROR] unless at least one processor is specified by name (-processor), or a search [ERROR] path is specified (--processor-path, --processor-module-path), or annotation [ERROR] processing is enabled explicitly (-proc:only, -proc:full). [ERROR] Use -Xlint:-options to suppress this message. [ERROR] Use -proc:none to disable annotation processing. [ERROR] <SAT_PATH>\sat-jpa\src\main\java\com\github\aha\sat\jpa\city\CityRepository.java:[3,41] error: cannot find symbol [ERROR] symbol: class City_ [ERROR] location: package com.github.aha.sat.jpa.city [ERROR] <SAT_PATH>\sat-jpa\src\main\java\com\github\aha\sat\jpa\city\CityRepository.java:[3] error: static import only from classes and interfaces ... 
[ERROR] <SAT_PATH>\sat-jpa\src\main\java\com\github\aha\sat\jpa\country\CountryCustomRepositoryImpl.java:[4] error: static import only from classes and interfaces [ERROR] java.lang.NoClassDefFoundError: net/bytebuddy/matcher/ElementMatcher [ERROR] at org.hibernate.jpamodelgen.validation.ProcessorSessionFactory.<clinit>(ProcessorSessionFactory.java:69) [ERROR] at org.hibernate.jpamodelgen.annotation.AnnotationMeta.handleNamedQuery(AnnotationMeta.java:104) [ERROR] at org.hibernate.jpamodelgen.annotation.AnnotationMeta.handleNamedQueryRepeatableAnnotation(AnnotationMeta.java:78) [ERROR] at org.hibernate.jpamodelgen.annotation.AnnotationMeta.checkNamedQueries(AnnotationMeta.java:57) [ERROR] at org.hibernate.jpamodelgen.annotation.AnnotationMetaEntity.init(AnnotationMetaEntity.java:297) [ERROR] at org.hibernate.jpamodelgen.annotation.AnnotationMetaEntity.create(AnnotationMetaEntity.java:135) [ERROR] at org.hibernate.jpamodelgen.JPAMetaModelEntityProcessor.handleRootElementAnnotationMirrors(JPAMetaModelEntityProcessor.java:360) [ERROR] at org.hibernate.jpamodelgen.JPAMetaModelEntityProcessor.processClasses(JPAMetaModelEntityProcessor.java:203) [ERROR] at org.hibernate.jpamodelgen.JPAMetaModelEntityProcessor.process(JPAMetaModelEntityProcessor.java:174) [ERROR] at jdk.compiler/com.sun.tools.javac.processing.JavacProcessingEnvironment.callProcessor(JavacProcessingEnvironment.java:1021) [ER... [ERROR] at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:348) [ERROR] Caused by: java.lang.ClassNotFoundException: net.bytebuddy.matcher.ElementMatcher [ERROR] at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:445) [ERROR] at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:593) [ERROR] at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:526) [ERROR] ... 51 more [ERROR] -> [Help 1] [ERROR] [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch. [ERROR] Re-run Maven using the -X switch to enable full debug logging. [ERROR] [ERROR] For more information about the errors and possible solutions, please read the following articles: [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException This is caused by the changed approach in the static metamodel generation announced in the Hibernate migration guide (see Integrating Static Metamodel Generation and the original issue HHH-17362). Their explanation for such change is this: "... in previous versions of Hibernate ORM you were leaking dependencies of hibernate-jpamodelgen into your compile classpath unknowingly. With Hibernate ORM 6.3, you may now experience a compilation error during annotation processing related to missing Antlr classes." Dependency Changes As you can see below in the screenshots, Hibernate dependencies were really changed. Spring Boot 3.1.6: Spring Boot 3.2.2: Explanation As stated in the migration guide, we need to change our pom.xml from a simple Maven dependency to the annotation processor paths of the Maven compiler plugin (see documentation). Solution We can remove the Maven dependencies hibernate-jpamodelgen and querydsl-apt (in our case) as recommended in the last article. 
Instead, pom.xml has to define the static metamodel generators via maven-compiler-plugin like this: XML <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <configuration> <annotationProcessorPaths> <path> <groupId>org.hibernate.orm</groupId> <artifactId>hibernate-jpamodelgen</artifactId> <version>${hibernate.version}</version> </path> <path> <groupId>com.querydsl</groupId> <artifactId>querydsl-apt</artifactId> <version>${querydsl.version}</version> <classifier>jakarta</classifier> </path> <path> <groupId>org.projectlombok</groupId> <artifactId>lombok</artifactId> <version>${lombok.version}</version> </path> </annotationProcessorPaths> </configuration> </plugin> </plugins> See the related changes in SAT project on GitHub. As we are forced to use this approach due to hibernate-jpamodelgen, we need to apply it to all dependencies tight to annotation processing (querydsl-apt or lombok). For example, when lombok is not used this way, we get the compilation error like this: Plain Text [INFO] ------------------------------------------------------------- [ERROR] COMPILATION ERROR : [INFO] ------------------------------------------------------------- [ERROR] <SAT_PATH>\sat-jpa\src\main\java\com\github\aha\sat\jpa\city\CityService.java:[15,30] error: variable repository not initialized in the default constructor [INFO] 1 error [INFO] ------------------------------------------------------------- [INFO] ------------------------------------------------------------------------ [INFO] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] Total time: 4.535 s [INFO] Finished at: 2024-01-08T08:40:29+01:00 [INFO] ------------------------------------------------------------------------ [ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.11.0:compile (default-compile) on project sat-jpa: Compilation failure [ERROR] <SAT_PATH>\sat-jpa\src\main\java\com\github\aha\sat\jpa\city\CityService.java:[15,30] error: variable repository not initialized in the default constructor The same applies to querydsl-apt. 
In this case, we can see the compilation error like this: Plain Text [INFO] ------------------------------------------------------------------------ [INFO] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] Total time: 5.211 s [INFO] Finished at: 2024-01-11T08:39:18+01:00 [INFO] ------------------------------------------------------------------------ [ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.11.0:compile (default-compile) on project sat-jpa: Compilation failure: Compilation failure: [ERROR] <SAT_PATH>\sat-jpa\src\main\java\com\github\aha\sat\jpa\country\CountryRepository.java:[3,44] error: cannot find symbol [ERROR] symbol: class QCountry [ERROR] location: package com.github.aha.sat.jpa.country [ERROR] <SAT_PATH>\sat-jpa\src\main\java\com\github\aha\sat\jpa\country\CountryRepository.java:[3] error: static import only from classes and interfaces [ERROR] <SAT_PATH>\sat-jpa\src\main\java\com\github\aha\sat\jpa\country\CountryCustomRepositoryImpl.java:[3,41] error: cannot find symbol [ERROR] symbol: class QCity [ERROR] location: package com.github.aha.sat.jpa.city [ERROR] <SAT_PATH>\sat-jpa\src\main\java\com\github\aha\sat\jpa\country\CountryCustomRepositoryImpl.java:[3] error: static import only from classes and interfaces [ERROR] <SAT_PATH>\sat-jpa\src\main\java\com\github\aha\sat\jpa\country\CountryCustomRepositoryImpl.java:[4,44] error: cannot find symbol [ERROR] symbol: class QCountry [ERROR] location: package com.github.aha.sat.jpa.country [ERROR] <SAT_PATH>\sat-jpa\src\main\java\com\github\aha\sat\jpa\country\CountryCustomRepositoryImpl.java:[4] error: static import only from classes and interfaces [ERROR] -> [Help 1] The reason is obvious. We need to apply all the annotation processors at the same time. Otherwise, some pieces of code can be missing, and we get the compilation error. Unpaged Redesigned The second minor issue is related to a change in Unpaged class. A serialization of PageImpl by the Jackson library was impacted by changing Unpaged from enum to class (see spring-projects/spring-data-commons#2987). Spring Boot 3.1.6: Java public interface Pageable { static Pageable unpaged() { return Unpaged.INSTANCE; } ... } enum Unpaged implements Pageable { INSTANCE; ... } Spring Boot 3.2.2: Java public interface Pageable { static Pageable unpaged() { return unpaged(Sort.unsorted()); } static Pageable unpaged(Sort sort) { return Unpaged.sorted(sort); } ... } final class Unpaged implements Pageable { private static final Pageable UNSORTED = new Unpaged(Sort.unsorted()); ... 
} When new PageImpl<City>(cities) is used (as we were used to using it), then this error is thrown: Plain Text 2024-01-11T08:47:56.446+01:00 WARN 5168 --- [sat-elk] [ main] .w.s.m.s.DefaultHandlerExceptionResolver : Resolved [org.springframework.http.converter.HttpMessageNotWritableException: Could not write JSON: (was java.lang.UnsupportedOperationException)] MockHttpServletRequest: HTTP Method = GET Request URI = /api/cities/country/Spain Parameters = {} Headers = [] Body = null Session Attrs = {} Handler: Type = com.github.aha.sat.elk.city.CityController Method = com.github.aha.sat.elk.city.CityController#searchByCountry(String, Pageable) Async: Async started = false Async result = null Resolved Exception: Type = org.springframework.http.converter.HttpMessageNotWritableException The workaround is to use the constructor with all attributes as: Java new PageImpl<City>(cities, ofSize(PAGE_SIZE), cities.size()) Instead of: Java new PageImpl<City>(cities) Note: It should be fixed in Spring Boot 3.3 (see this issue comment). Conclusion This article has covered both found issues when upgrading to the latest version of Spring Boot 3.2.2 (at the time of writing this article). The article started with the handling of the annotation processors due to the changed Hibernate dependency management. Next, the change in Unpaged class and workaround for using PageImpl was explained. All of the changes (with some other changes) can be seen in PR #64. The complete source code demonstrated above is available in my GitHub repository.
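As a closing illustration of the Unpaged workaround above, here is a minimal, self-contained sketch. It assumes that ofSize is the statically imported PageRequest.ofSize, that PAGE_SIZE is an application-defined constant, and it generalizes the City list from the example to any element type:

Java
import static org.springframework.data.domain.PageRequest.ofSize;

import java.util.List;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageImpl;

final class PageFactory {

    private static final int PAGE_SIZE = 20; // hypothetical page size

    // Builds a Page with an explicit Pageable and total count instead of
    // new PageImpl<>(items), whose Unpaged pageable fails JSON serialization
    // in Spring Boot 3.2.x.
    static <T> Page<T> toPage(List<T> items) {
        return new PageImpl<>(items, ofSize(PAGE_SIZE), items.size());
    }
}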
There is an endless discussion about whether knowing algorithms and data structures is something a frontend (in my case, mobile) developer needs only to pass technical interviews at large tech companies, or whether there is some benefit to using them in daily work. I think the truth is somewhere in between, as always. Of course, you rarely find a case when you need to implement a min heap or use dynamic programming approaches while working on UI and business logic for a service with a REST API, but having a basic understanding of performance, time, and memory complexity can help make small, simple optimizations in the app that can pay off a lot in the long run. I want to give an example of such a small optimization and the decision-making process that can help us decide whether the extra effort is worth it or not.

Example

I'm working on a simple iOS application for my kids that should help them learn foreign languages. One of the basic functionalities is a vocabulary where you can add a word you want to learn. I wanted to add some images for each word to visually represent it. In 2024, the obvious way might be to use the API of an image generation model, but that's too much for the simple app I'm trying to make, so I decided to go with emojis. There are over 1,000 emojis available, and most simple words or phrases that kids can try to learn will have a visual representation there. Here is a code example to obtain most of the emoji symbols and filter out only those that can be properly rendered.

Swift
var emojis: [Character: String] = [:]
let ranges = [
    0x1F600...0x1F64F,
    9100...9300,
    0x2600...0x26FF,
    0x2700...0x27BF,
    0x1F300...0x1F5FF,
    0x1F680...0x1F6FF,
    0x1F900...0x1F9FF
]
for range in ranges {
    for item in range {
        guard let scalar = UnicodeScalar(item), scalar.properties.isEmojiPresentation else { continue }
        let value = Character(scalar)
        emojis[value] = description(emoji: value)
    }
}

With each emoji character, we also store a description that we will use to find the right one for our word or phrase. Here are a few examples:

Now let's consider how to find the right emoji for a given word or phrase in the most straightforward and simple way. Unfortunately, string comparison would not work here: first, not all emoji descriptions consist of a single word, and second, users can use different forms of the word or even a phrase. Fortunately, Apple provides us with the built-in NaturalLanguage framework that can help us. We will utilize its sentence embedding functionality to measure the distance between the given word/phrase from the user and the emoji descriptions we are storing. Here is a function for it:

Swift
func calculateDistance(text: String, items: [Character]) -> (String?, Double?) {
    guard let embedding = NLEmbedding.sentenceEmbedding(for: .english) else {
        return (nil, nil)
    }
    var minDistance: Double = Double.greatestFiniteMagnitude
    var emoji = ""
    for key in items {
        guard let value = emojis[key] else { continue }
        let distance = embedding.distance(
            between: text.lowercased(),
            and: value.lowercased()
        )
        if distance < minDistance {
            minDistance = distance
            emoji = String(key)
        }
    }
    return (emoji, minDistance)
}

The algorithm is straightforward: we run through all the emoji characters we have, take each description and compare it with the given text, save the minimum distance found, and in the end return the emoji with the minimum distance to our text, together with the distance itself for further filtering. This algorithm has a linear time complexity of O(n). 
Here are some examples of the results: The last one is not what I would expect to get as a smiling face, but it is smiling, so it works. We can also use the returned distance to filter stuff out. The value of distance is in the range between 0 and 2 (by default). By running some experiments, I found that 0.85 is a great filter point for everything that does not represent the meaning of the phrase in the given emoji. Everything less than 0.85 looks good, and everything greater than that, I'm filtering out and returning an empty string to not confuse users. We have a first version of our algorithm, and while it works, it's quite slow. To find a match for any request, it needs to go through every emoji and execute distance measurements for each description individually. This process takes around 3.8 seconds for every request from the user. Now we need to make an important decision: whether to invest time into optimization. To answer this question, let's think about what exactly we want to improve by optimizing this extra effort. Even though 3.8 seconds for emoji generation may seem unacceptable, I would still use it as an example and challenge the purpose of optimizing this time. My use case is the following: The user opens the vocabulary and wants to add a new word or phrase. The user types this word. When typing is finished, I make a network call to a translation API that gives me a translation of the word. Ideally, I want this emoji to appear at the same time the typing is finished, but I can survive with a delay that will not exceed the time it takes for the translation API call and show it at the same time I've got the translation. When I consider this behavior as a requirement, it's clear that 3.8 seconds is too long for a network call. I would say if it takes 0.3-0.5 seconds, I probably wouldn't optimize here because I wouldn't want to sacrifice the user experience. Later, I might need to revisit this topic and improve it, but for now, delivering a working product is better than never delivering perfect code. In my case, I have to optimize, so let's think about how to do it. We're already using a dictionary here, where emojis are keys and descriptions are values. We'll add an additional dictionary with swapped keys and values. Additionally, I'll split each description into separate words and use these words as keys. For the values, I'll use a list of emojis that correspond to each word in the description. To make this more efficient, I'll create an index for my emojis that can help me find the most relevant description for a given word in almost constant time. The main drawback of this approach is that it will only work with single words, not phrases. According to my target users, they will typically search for a single word. So, I'll use this index for single-word searches and keep the old approach for rare phrases that won't return an empty symbol in most cases by not finding an appropriate emoji explanation. Let's take a look at a few examples from the Index dictionary: And here's a function for such index creation: Swift var searchIndex: [String: [Character]] = [:] ... func prepareIndex() { for item in emojis { let words = item.value.components(separatedBy: " ") for word in words { var emojiItems: [Character] = [] let lowercasedWord = word.lowercased() if let items = searchIndex[lowercasedWord] { emojiItems = items } emojiItems.append(item.key) searchIndex[lowercasedWord] = emojiItems } } } Now, let's add two more functions for single words and phrases. 
Swift func emoji(word: String) -> String { guard let options = searchIndex[word.lowercased()] else { return emoji(text: word) } let result = calculateDistance(text: word, items: options) guard let value = result.0, let distance = result.1 else { return emoji(text: word) } return distance < singleWordAllowedDistance ? value : "" } func emoji(text: String) -> String { let result = calculateDistance(text: text, items: Array(emojis.keys)) guard let value = result.0, let distance = result.1 else { return "" } return distance < allowedDistance ? value : "" } allowedDistance and singleWordAllowedDistance are constants that help me to configure filtering. As you can see, we use the same distance calculation as before, but instead of all emojis, we're injecting a list of emojis that have the given word in their description. And for most cases, it will be just a few or even only one option. This makes the algorithm work in near constant time in most cases. Let's test it and measure the time. This updated algorithm gives a result within 0.04 - 0.08 seconds, which is around 50 times faster than before. However, there's a big issue: the words should be spelled exactly as they are presented in the description. We can fix this by using a Word Embedding with Neighbors function, which will give us a list of similar or close-in-meaning words to the given one. Here's an updated func emoji(word: String) -> String function. Swift func emoji(word: String) -> String { guard let wordEmbedding = NLEmbedding.wordEmbedding(for: .english) else { return "" } let neighbors = wordEmbedding.neighbors(for: word, maximumCount: 2).map({ $0.0 }) let words = [word] + neighbors for word in words { guard let options = searchIndex[word.lowercased()] else { continue } let result = calculateDistance(text: word, items: options) guard let value = result.0, let distance = result.1 else { return emoji(text: word) } return distance < singleWordAllowedDistance ? value : "" } return emoji(text: word) } Now it works very quickly and in most cases. Conclusion Knowing basic algorithms and data structures expands your toolset and helps you find areas in code that can be optimized. Especially when working on a large project with many developers and numerous modules in the application, having optimizations here and there will help the app run faster over time.
It wasn't long ago that I decided to ditch my Ubuntu-based distros for openSUSE, finding LEAP 15 to be a steadier, more rock-solid flavor of Linux for my daily driver. The trouble is, I hadn't yet been introduced to Linux Mint Debian Edition (LMDE), and that sound you hear is my heels clicking with joy. LMDE 6 with the Cinnamon desktop. Allow me to explain. While I've been a long-time fan of Ubuntu, in recent years it was the addition of snaps (rather than system packages) and other Ubuntu-only features that started to wear on me. I wanted straightforward networking, support for older hardware, and a desktop that didn't get in the way of my work. For years, Ubuntu provided that, and I installed it on everything from old netbooks and laptops to towers and IoT devices. More recently, though, I decided to move to Debian, the upstream Linux distro on which Ubuntu (and derivatives like Linux Mint and others) are built. Unlike Ubuntu, Debian holds fast to a truly solid, stable, non-proprietary mindset — and I can still use the apt package manager I've grown accustomed to. That is, every bit of automation I use (Chef and Ansible mostly) works the same on Debian and Ubuntu. I spent some years switching back and forth between the standard Ubuntu long-term releases and Linux Mint, a truly great Ubuntu-derived desktop Linux. Of course, there are many Debian-based distributions, but I stumbled across LMDE version 6, based on Debian GNU/Linux 12 "Bookworm" and known as Faye, and knew I was onto something truly special. As with the Ubuntu version, LMDE comes with different desktop environments, including the robust Cinnamon, which provides a familiar environment for any Linux, Windows, or macOS user. It's intuitive, chock full of great features (like a multi-function taskbar), and it supports a wide range of customizations. However, it includes no snaps or other Ubuntuisms, and it is amazingly stable. That is, I've not had a single freeze-up or odd glitch, even when pushing it hard with Kdenlive video editing, KVM virtual machines, and Docker containers. According to the folks at Linux Mint, "LMDE is also one of our development targets, as such it guarantees the software we develop is compatible outside of Ubuntu." That means if you're a traditional Linux Mint user, you'll find all the familiar capabilities and features in LMDE. After nearly six months of daily use, that's proven true. As someone who likes to hang on to old hardware, LMDE extended its value to me by supporting both 64- and 32-bit systems. I've since installed it on a 2008 MacBook (32-bit), old ThinkPads, old Dell netbooks, and even a Toshiba Chromebook. Though most of these boxes have less than 3 gigabytes of RAM, LMDE performs well. Cinnamon isn't the lightest desktop around, but it runs smoothly on everything I have. The running joke in the Linux world is that "next year" will be the year the Linux desktop becomes a true Windows and macOS replacement. With Debian Bookworm-powered LMDE, I humbly suggest next year is now. To be fair, on some of my oldest hardware, I've opted for Bunsen. It, too, is a Debian derivative with 64- and 32-bit versions, and I'm using the BunsenLabs Linux Boron version, which uses the Openbox window manager and sips resources: about 400 megabytes of RAM and low CPU usage. With Debian at its core, it's stable and glitch-free. Since deploying LMDE, I've also begun to migrate my virtual machines and containers to Debian 12. Bookworm is amazingly robust and works well on IoT devices, LXCs, and more.
Since it, too, has long-term support, I feel confident about its stability — and security — over time. If you're a fan of Ubuntu and Linux Mint, you owe it to yourself to give LMDE a try. As a daily driver, it's truly hard to beat.
This tutorial illustrates B2B push-style application integration with APIs and internal integration with messages. We have the following use cases: Ad Hoc Requests for information (Sales, Accounting) that cannot be anticipated in advance. Two Transaction Sources: A) internal Order Entry UI, and B) B2B partner OrderB2B API. The Northwind API Logic Server provides APIs and logic for both transaction sources: Self-Serve APIs to support ad hoc integration and UI dev, providing security (e.g., customers see only their accounts). Order Logic: enforcing database integrity and Application Integration (alert shipping). A custom API to match an agreed-upon format for B2B partners. The Shipping API Logic Server listens to Kafka and processes the message.

Key Architectural Requirements: Self-Serve APIs and Shared Logic
This sample illustrates some key architectural considerations:
Requirement | Poor Practice | Good Practice | Best Practice | Ideal
Ad Hoc Integration | ETL | APIs | Self-Serve APIs | Automated Self-Serve APIs
Logic | Logic in UI | Reusable Logic | Declarative Rules | Extensible with Python
Messages | | | Kafka | Kafka Logic Integration
We'll further expand on these topics as we build the system, but we note some best practices: APIs should be self-serve, not requiring continuing server development. APIs avoid the nightly Extract, Transform, and Load (ETL) overhead. Logic should be re-used over the UI and API transaction sources. Logic in UI controls is undesirable since it cannot be shared with APIs and messages.

Using This Guide
This guide was developed with API Logic Server, which is open-source and available here. The guide shows the highlights of creating the system. The complete Tutorial in the Appendix contains detailed instructions to create the entire running system. The information here is abbreviated for clarity.

Development Overview
This overview shows the key code and procedures to create the system above. We'll be using API Logic Server, which consists of a CLI plus a set of runtimes for automating APIs, logic, messaging, and an admin UI. It's an open-source Python project with a standard pip install.

1. ApiLogicServer Create: Instant Project
The CLI command below creates an ApiLogicProject by reading your schema. The database is Northwind (Customer, Orders, Items, and Product), as shown in the Appendix. Note: the db_url value is an abbreviation; you normally supply a SQLAlchemy URL. The sample NW SQLite database is included in ApiLogicServer for demonstration purposes.
$ ApiLogicServer create --project_name=ApiLogicProject --db_url=nw-
The created project is executable; it can be opened in an IDE and executed. One command has created meaningful elements of our system: an API for ad hoc integration and an Admin App. Let's examine these below.

API: Ad Hoc Integration
The system creates a JSON:API with endpoints for each table, providing filtering, sorting, pagination, optimistic locking, and related data access. JSON:APIs are self-serve: consumers can select their attributes and related data, eliminating reliance on custom API development. In this sample, our self-serve API meets our Ad Hoc Integration needs and unblocks Custom UI development.

Admin App: Order Entry UI
The create command also creates an Admin App: multi-page, multi-table with automatic joins, ready for business user agile collaboration and back office data maintenance. This complements custom UIs you can create with the API. Multi-page navigation controls enable users to explore data and relationships.
For example, they might click the first Customer and see their Orders and Items: We created an executable project with one command that completes our ad hoc integration with a self-serve API.

2. Customize: In Your IDE
While API/UI automation is a great start, we now require Custom APIs, Logic, and Security. Such customizations are added in your IDE, leveraging all its services for code completion, debugging, etc. Let's examine these.

Declare UI Customizations
The admin app is not built with complex HTML and JavaScript. Instead, it is configured with ui/admin/admin.yml, automatically created from your data model by the ApiLogicServer create command. You can customize this file in your IDE to control which fields are shown (including joins), hide/show conditions, help text, etc. This makes it convenient to use the Admin App to enter an Order and OrderDetails: Note the automation for automatic joins (Product Name, not ProductId) and lookups (select from a list of Products to obtain the foreign key). If we attempt to order too much Chai, the transaction properly fails due to the Check Credit logic described below.

Check Credit Logic: Multi-Table Derivation and Constraint Rules, 40X More Concise
Such logic (multi-table derivations and constraints) is a significant portion of a system, typically nearly half. API Logic Server provides spreadsheet-like rules that dramatically simplify and accelerate logic development. The five check credit rules below represent the same logic as 200 lines of traditional procedural code. Rules are 40X more concise than traditional code, as shown here. Rules are declared in Python and simplified with IDE code completion. Rules can be debugged using standard logging and the debugger: Rules operate by handling SQLAlchemy events, so they apply to all ORM access, whether by the API engine or your custom code. Once declared, you don't need to remember to call them, which promotes quality. The above rules prevented the too-big order with multi-table logic: copying the Product Price, computing the Amount, rolling it up to the AmountTotal and Balance, and checking the credit. These five rules also govern changing orders, deleting them, picking different parts, and about nine automated transactions. Implementing all this by hand would otherwise require about 200 lines of code. Rules are a unique and significant innovation, providing meaningful improvements over procedural logic:
Characteristic | Procedural | Declarative | Why It Matters
Reuse | Not Automatic | Automatic - all Use Cases | 40X Code Reduction
Invocation | Passive - only if called | Active - call not required | Quality
Ordering | Manual | Automatic | Agile Maintenance
Optimizations | Manual | Automatic | Agile Design
For more on the rules, click here.
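The rule listing itself appears here only as a screenshot. As a point of reference, below is a minimal, hedged sketch of how five such check-credit rules are commonly declared in an API Logic Server project (typically in logic/declare_logic.py). The entity and attribute names follow the Northwind sample; treat the exact signatures as illustrative rather than as this sample's verbatim code.

Python
from logic_bank.logic_bank import Rule
from database import models  # SQLAlchemy models created by ApiLogicServer create

def declare_logic():
    # Constraint: reject any transaction that would leave Balance > CreditLimit
    Rule.constraint(validate=models.Customer,
                    as_condition=lambda row: row.Balance <= row.CreditLimit,
                    error_msg="Balance ({row.Balance}) exceeds credit ({row.CreditLimit})")

    # Customer.Balance = sum of unshipped Order.AmountTotal
    Rule.sum(derive=models.Customer.Balance,
             as_sum_of=models.Order.AmountTotal,
             where=lambda row: row.ShippedDate is None)

    # Order.AmountTotal = sum of OrderDetail.Amount
    Rule.sum(derive=models.Order.AmountTotal,
             as_sum_of=models.OrderDetail.Amount)

    # OrderDetail.Amount = Quantity * UnitPrice
    Rule.formula(derive=models.OrderDetail.Amount,
                 as_expression=lambda row: row.Quantity * row.UnitPrice)

    # OrderDetail.UnitPrice is copied from Product.UnitPrice
    Rule.copy(derive=models.OrderDetail.UnitPrice,
              from_parent=models.Product.UnitPrice)

Read together, these form the chain described above: the copied price feeds the line-item Amount, which rolls up to AmountTotal and Balance, which the constraint then checks.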
Declare Security: Customers See Only Their Own Row
Declare row-level security using your IDE to edit logic/declare_security.sh (see screenshot below). An automatically created admin app enables you to configure roles, users, and user roles. If users now log in as ALFKI (configured with role customer), they see only their customer row. Observe that the console log at the bottom shows how the filter worked. Declarative row-level security ensures users see only the rows authorized for their roles.

3. Integrate: B2B and Shipping
We now have a running system, an API, logic, security, and a UI. Now, we must integrate with the following: B2B partners: We'll create a B2B Custom Resource. OrderShipping: We add logic to Send an OrderShipping Message.

B2B Custom Resource
The self-serve API does not conform to the format required for a B2B partnership. We need to create a custom resource. You can create custom resources by editing customize_api.py using standard Python, Flask, and SQLAlchemy. A custom OrderB2B endpoint is shown below. The main task here is to map a B2B payload onto our logic-enabled SQLAlchemy rows. API Logic Server provides a declarative RowDictMapper class you can use as follows: Declare the row/dict mapping; see the OrderB2B class in the lower pane: Note the support for lookup so that partners can send ProductNames, not ProductIds. Create the custom API endpoint; see the upper pane: Add def OrderB2B to customize_api.py to create a new endpoint. Use the OrderB2B class to transform API request data to SQLAlchemy rows (dict_to_row). The automatic commit initiates the shared logic described above to check credit and reorder products. Our custom endpoint required under ten lines of code plus the mapper configuration.

Produce OrderShipping Message
Successful orders must be sent to Shipping in a predesignated format. We could certainly POST an API, but messaging (here, Kafka) provides significant advantages: Async: Our system will not be impacted if the Shipping system is down. Kafka will save the message and deliver it when Shipping is back up. Multi-cast: We can send a message that multiple systems (e.g., Accounting) can consume. The content of the message is a JSON string, just like an API. Just as you can customize APIs, you can complement rule-based logic using Python events: Declare the mapping; see the OrderShipping class in the right pane. This formats our Kafka message content in the format agreed upon with Shipping. Define an after_flush event, which invokes send_order_to_shipping. This is called by the logic engine, which passes the SQLAlchemy models.Order row. send_order_to_shipping uses OrderShipping.row_to_dict to map our SQLAlchemy order row to a dict and uses the Kafka producer to publish the message. Rule-based logic is customizable with Python, producing a Kafka message with 20 lines of code here.
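The OrderShipping mapper and producer code are likewise shown here only as screenshots. The sketch below illustrates the general shape of such an after-flush event publishing the mapped row with the confluent-kafka producer. The names OrderShipping and send_order_to_shipping come from the text above; the topic name, broker address, key, and the exact event-registration call are assumptions for illustration, not the sample's exact code.

Python
import json
from confluent_kafka import Producer
from logic_bank.logic_bank import Rule
from database import models

# Assumed broker address; in the sample project this would come from configuration
producer = Producer({"bootstrap.servers": "localhost:9092"})

def send_order_to_shipping(row: models.Order, old_row, logic_row):
    # Map the logic-enabled SQLAlchemy Order row to the agreed-upon payload;
    # OrderShipping is the mapping class described above.
    payload = OrderShipping().row_to_dict(row)
    producer.produce(topic="order_shipping",   # assumed topic name
                     key=str(row.Id),
                     value=json.dumps(payload, default=str))
    producer.flush()

# Ask the rules engine to call the handler after each Order is flushed
# (registration shown in LogicBank's row-event style; treat as illustrative).
Rule.after_flush_row_event(on_class=models.Order, calling=send_order_to_shipping)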
4. Consume Messages
The Shipping system illustrates how to consume messages. The sections below show how to create/start the Shipping Server and use our IDE to add the consuming logic.

Create/Start the Shipping Server
This shipping database was created with AI. To simplify matters, API Logic Server has installed the shipping database automatically. We can, therefore, create the project from this database and start it: 1. Create the Shipping Project: ApiLogicServer create --project_name=shipping --db_url=shipping 2. Start your IDE (e.g., code shipping) and establish your venv. 3. Start the Shipping Server: F5 (configured to use a different port). The core Shipping system was automated by ChatGPT and ApiLogicServer create. We add 15 lines of code to consume Kafka messages, as shown below.

Consuming Logic
To consume messages, we enable message consumption, configure a mapping, and provide a message handler as follows. 1. Enable Consumption Shipping is pre-configured to enable message consumption with a setting in config.py: KAFKA_CONSUMER = '{"bootstrap.servers": "localhost:9092", "group.id": "als-default-group1", "auto.offset.reset":"smallest"}' When the server is started, it invokes flask_consumer() (shown below). This calls the pre-supplied FlaskKafka, which handles the Kafka consumption (listening), thread management, and the handle annotation used below. This housekeeping task is pre-created automatically. FlaskKafka was inspired by the work of Nimrod (Kevin) Maina in this project. Many thanks! 2. Configure a Mapping As we did for our OrderB2B Custom Resource, we configure an OrderToShip mapping class to map the message onto our SQLAlchemy Order object. 3. Provide a Consumer Message Handler We provide the order_shipping handler in kafka_consumer.py: Annotate the topic handler method, providing the topic name. This is used by FlaskKafka to establish a Kafka listener. Provide the topic handler code, leveraging the mapper noted above. It is called by FlaskKafka per the method annotation.

Test It
You can use your IDE terminal window to simulate a business partner posting a B2BOrder. You can set breakpoints in the code described above to explore system operation.
ApiLogicServer curl "'POST' 'http://localhost:5656/api/ServicesEndPoint/OrderB2B'" --data '{"meta": {"args": {"order": {"AccountId": "ALFKI", "Surname": "Buchanan", "Given": "Steven", "Items": [{"ProductName": "Chai", "QuantityOrdered": 1}, {"ProductName": "Chang", "QuantityOrdered": 2}]}}}}'
Use Shipping's Admin App to verify the Order was processed.

Summary
These applications have demonstrated several types of application integration: Ad hoc integration via self-serve APIs. Custom integration via custom APIs to support business agreements with B2B partners. Message-based integration to decouple internal systems by removing the requirement that all systems be running at the same time. We have also illustrated several technologies noted in the Ideal column:
Requirement | Poor Practice | Good Practice | Best Practice | Ideal
Ad Hoc Integration | ETL | APIs | Self-Serve APIs | Automated Creation of Self-Serve APIs
Logic | Logic in UI | Reusable Logic | Declarative Rules | Extensible with Python
Messages | | | Kafka | Kafka Logic Integration
API Logic Server provides automation for the ideal practices noted above: 1. Creation: instant ad hoc API (and Admin UI) with the ApiLogicServer create command. 2. Declarative Rules: Security and multi-table logic reduce the backend half of your system by 40X. 3. Kafka Logic Integration: Produce messages from logic events. Consume messages by extending kafka_consumer. Services, including RowDictMapper to transform rows and dicts, and FlaskKafka for Kafka consumption, threading, and annotation invocation. 4. Standards-Based Customization: Standard packages: Python, Flask, SQLAlchemy, Kafka... Using standard IDEs. Creation, logic, and integration automation have enabled us to build two non-trivial systems with a remarkably small amount of code:
Type | Code
Custom B2B API | 10 lines
Check Credit Logic | 5 rules
Row Level Security | 1 security declaration
Send Order to Shipping | 20 lines
Process Order in Shipping | 30 lines
Mapping configurations to transform rows and dicts | 45 lines
Automation dramatically shortens time to market, with standards-based customization using your IDE, Python, Flask, SQLAlchemy, and Kafka. For more information on API Logic Server, click here.

Appendix
Full Tutorial: To recreate this system and explore the running code, including Kafka, click here. It should take 30-60 minutes, depending on whether you already have Python and an IDE installed. Sample Database: The sample database is an SQLite version of Northwind (Customer, Order, OrderDetail, and Product). To see a database diagram, click here. This database is included when you pip install ApiLogicServer.
This article starts with an overview of what a typical computer vision application requires. Then, it introduces Pipeless, an open-source framework that offers a serverless development experience for embedded computer vision. Finally, you will find a detailed step-by-step guide on the creation and execution of a simple object detection app with just a couple of Python functions and a model. Inside a Computer Vision Application "The art of identifying visual events via a camera interface and reacting to them" That is what I would answer if someone asked me to describe what computer vision is in one sentence. But it is probably not what you want to hear. So let's dive into how computer vision applications are typically structured and what is required in each subsystem. Really fast frame processing: Note that to process a stream of 60 FPS in real-time, you only have 16 ms to process each frame. This is achieved, in part, via multi-threading and multi-processing. In many cases, you want to start processing a frame even before the previous one has finished. An AI model to run inference on each frame and perform object detection, segmentation, pose estimation, etc: Luckily, there are more and more open-source models that perform pretty well, so we don't have to create our own from scratch, you usually just fine-tune the parameters of a model to match your use case (we will not deep dive into this today). An inference runtime: The inference runtime takes care of loading the model and running it efficiently on the different available devices (GPUs or CPUs). A GPU: To run the inference using the model fast enough, we require a GPU. This happens because GPUs can handle orders of magnitude more parallel operations than a CPU, and a model at the lowest level is just a huge bunch of mathematical operations. You will need to deal with the memory where the frames are located. They can be at the GPU memory or at the CPU memory (RAM) and copying frames between those is a very heavy operation due to the frame sizes that will make your processing slow. Multimedia pipelines: These are the pieces that allow you to take streams from sources, split them into frames, provide them as input to the models, and, sometimes, make modifications and rebuild the stream to forward it. Stream management: You may want to make the application resistant to interruptions in the stream, re-connections, adding and removing streams dynamically, processing several of them at the same time, etc. All those systems need to be created or incorporated into your project and thus, it is code that you need to maintain. The problem is that you end up maintaining a huge amount of code that is not specific to your application, but subsystems around the actual case-specific code. The Pipeless Framework To avoid having to build all the above from scratch, you can use Pipeless. It is an open-source framework for computer vision that allows you to provide a few functions specific to your case and it takes care of everything else. Pipeless splits the application's logic into "stages," where a stage is like a micro app for a single model. A stage can include pre-processing, running inference with the pre-processed input, and post-processing the model output to take any action. Then, you can chain as many stages as you want to compose the full application even with several models. To provide the logic of each stage, you simply add a code function that is very specific to your application, and Pipeless takes care of calling it when required. 
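Before looking at a real example, it may help to see how small such a hook can be. The sketch below is a hypothetical pass-through stage: the hook signature and the frame_data keys match the onnx-yolo example discussed later in this article, while the stage itself is purely illustrative.

Python
# post-process.py of a hypothetical "passthrough" stage.
# Pipeless invokes hook() once per frame; frame_data carries the frame and any
# intermediate results between a stage's pre-process, process, and post-process hooks.
def hook(frame_data, context):
    frame = frame_data["original"]   # the decoded frame handed to us by Pipeless
    # ... case-specific logic would go here (draw overlays, filter, measure, etc.) ...
    frame_data["modified"] = frame   # the frame Pipeless forwards downstream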
This is why you can think about Pipeless as a framework that provides a serverless-like development experience for embedded computer vision. You provide a few functions and you don't have to worry about all the surrounding systems that are required. Another great feature of Pipeless is that you can add, remove, and update streams dynamically via a CLI or a REST API to fully automate your workflows. You can even specify restart policies that indicate when the processing of a stream should be restarted, whether it should be restarted after an error, etc. Finally, to deploy Pipeless you just need to install it and run it along with your code functions on any device, whether in a cloud VM or containerized mode, or directly within an edge device like an Nvidia Jetson, a Raspberry Pi, or others.

Creating an Object Detection Application
Let's dive deep into how to create a simple application for object detection using Pipeless. The first thing we have to do is to install it. Thanks to the installation script, it is very simple:
curl https://raw.githubusercontent.com/pipeless-ai/pipeless/main/install.sh | bash
Now, we have to create a project. A Pipeless project is a directory that contains stages. Every stage is under a sub-directory, and inside each sub-directory, we create the files containing hooks (our specific code functions). The name that we provide to each stage folder is the stage name that we have to indicate to Pipeless later when we want to run that stage for a stream.
pipeless init my-project --template empty
cd my-project
Here, the empty template tells the CLI to just create the directory; if you do not provide any template, the CLI will prompt you with several questions to create the stage interactively. As mentioned above, we now need to add a stage to our project. Let's download an example stage from GitHub with the following command:
wget -O - https://github.com/pipeless-ai/pipeless/archive/main.tar.gz | tar -xz --strip=2 "pipeless-main/examples/onnx-yolo"
That will create a stage directory, onnx-yolo, that contains our application functions. Let's check the content of each of the stage files; i.e., our application hooks. We have the pre-process.py file, which defines a function (hook) taking a frame and a context. The function performs some operations to prepare the input data from the received RGB frame in order to match the format that the model expects. That data is added to frame_data['inference_input'], which is what Pipeless will pass to the model.

import cv2
import numpy as np

def hook(frame_data, context):
    frame = frame_data["original"].view()
    yolo_input_shape = (640, 640, 3)  # h,w,c
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    frame = resize_rgb_frame(frame, yolo_input_shape)
    frame = cv2.normalize(frame, None, 0.0, 1.0, cv2.NORM_MINMAX)
    frame = np.transpose(frame, axes=(2, 0, 1))  # Convert to c,h,w
    inference_inputs = frame.astype("float32")
    frame_data['inference_input'] = inference_inputs

... (some other auxiliary functions that we call from the hook function)

We also have the process.json file, which tells Pipeless which inference runtime to use (in this case, the ONNX Runtime), where to find the model that it should load, and some optional parameters for it, such as the execution_provider to use, i.e., CPU, CUDA, TensorRT, etc.

{
  "runtime": "onnx",
  "model_uri": "https://pipeless-public.s3.eu-west-3.amazonaws.com/yolov8n.onnx",
  "inference_params": {
    "execution_provider": "tensorrt"
  }
}

Finally, the post-process.py file defines a function similar to the one at pre-process.py.
This time, it takes the inference output that Pipeless stored at frame_data["inference_output"] and performs the operations to parse that output into bounding boxes. It then draws the bounding boxes over the frame and finally assigns the modified frame to frame_data['modified']. With that, Pipeless will forward the stream that we provide but with the modified frames, including the bounding boxes.

def hook(frame_data, _):
    frame = frame_data['original']
    model_output = frame_data['inference_output']
    yolo_input_shape = (640, 640, 3)  # h,w,c
    boxes, scores, class_ids = parse_yolo_output(model_output, frame.shape, yolo_input_shape)
    class_labels = [yolo_classes[id] for id in class_ids]
    for i in range(len(boxes)):
        draw_bbox(frame, boxes[i], class_labels[i], scores[i])
    frame_data['modified'] = frame

... (some other auxiliary functions that we call from the hook function)

The final step is to start Pipeless and provide a stream. To start Pipeless, simply run the following command from the my-project directory:
pipeless start --stages-dir .
Once running, let's provide a stream from the webcam (v4l2) and show the output directly on the screen. Note we have to provide the list of stages that the stream should execute in order; in our case, it is just the onnx-yolo stage:
pipeless add stream --input-uri "v4l2" --output-uri "screen" --frame-path "onnx-yolo"
And that's all!

Conclusion
We have described how creating a computer vision application is a complex task due to many factors and the subsystems that we have to implement around it. With a framework like Pipeless, getting up and running takes just a few minutes, and you can focus just on writing the code for your specific use case. Furthermore, Pipeless stages are highly reusable and easy to maintain, so you will be able to iterate very fast. If you want to get involved with Pipeless and contribute to its development, you can do so through its GitHub repository.
The domain of Angular state management has received a huge boost with the introduction of Signal Store, a lightweight and versatile solution introduced in NgRx 17. Signal Store stands out for its simplicity, performance optimization, and extensibility, making it a compelling choice for modern Angular applications. In the next steps, we'll harness the power of Signal Store to build a sleek Task Manager app. Let's embark on this journey to elevate your Angular application development. Ready to start building? Let's go! A Glimpse Into Signal Store’s Core Structure Signal Store revolves around four fundamental components that form the backbone of its state management capabilities: 1. State At the heart of Signal Store lies the concept of signals, which represent the application's state in real-time. Signals are observable values that automatically update whenever the underlying state changes. 2. Methods Signal Store provides methods that act upon the state, enabling you to manipulate and update it directly. These methods offer a convenient way to interact with the state and perform actions without relying on observable streams or external state managers. 3. Selectors Selectors are functions that derive calculated values from the state. They provide a concise and maintainable approach to accessing specific parts of the state without directly exposing it to components. Selectors help encapsulate complex state logic and improve the maintainability of applications. 4. Hooks Hooks are functions that are triggered at critical lifecycle events, such as component initialization and destruction. They allow you to perform actions based on these events, enabling data loading, state updates, and other relevant tasks during component transitions. Creating a Signal Store and Defining Its State To embark on your Signal Store journey, you'll need to install the @ngrx/signals package using npm: But first, you have to install the Angular CLI and create an Angular base app with: JavaScript npm install -g @angular/cli@latest JavaScript ng new <name of your project> JavaScript npm install @ngrx/signals Creating a state (distinct from a store) is the subsequent step: TypeScript import { signalState } from '@ngrx/signals'; const state = signalState({ /* State goes here */ }); Manipulating the state becomes an elegant affair using the patchState method: TypeScript updateStateMethod() { patchState(this.state, (state) => ({ someProp: state.someProp + 1 })); } The patchState method is a fundamental tool for updating the state. It allows you to modify the state in a shallow manner, ensuring that only the specified properties are updated. This approach enhances performance by minimizing the number of state changes. 
First Steps for the Task Manager App First, create your interface for a Task and place it in a task.ts file: TypeScript export interface Task { id: string; value: string; completed: boolean; } The final structure of the app is: And our TaskService in taskService.ts looks like this: TypeScript @Injectable({ providedIn: 'root' }) export class TaskService { private taskList: Task[] = [ { id: '1', value: 'Complete task A', completed: false }, { id: '2', value: 'Read a book', completed: true }, { id: '3', value: 'Learn Angular', completed: false }, ]; constructor() { } getTasks() : Observable<Task[]> { return of(this.taskList); } getTasksAsPromise() { return lastValueFrom(this.getTasks()); } getTask(id: string): Observable<Task | undefined> { const task = this.taskList.find(t => t.id === id); return of(task); } addTask(value: string): Observable<Task> { const newTask: Task = { id: (this.taskList.length + 1).toString(), // Generating a simple incremental ID value, completed: false }; this.taskList = [...this.taskList, newTask]; return of(newTask); } updateTask(updatedTask: Task): Observable<Task> { const index = this.taskList.findIndex(task => task.id === updatedTask.id); if (index !== -1) { this.taskList[index] = updatedTask; } return of(updatedTask); } deleteTask(task: Task): Observable<Task> { this.taskList = this.taskList.filter(t => t.id !== task.id); return of(task); } } Crafting a Signal Store for the Task Manager App The creation of a store is a breeze with the signalStore method: Create the signalStore and place it in the taskstate.ts file: TypeScript import { signalStore, withHooks, withState } from '@ngrx/signals'; export const TaskStore = signalStore( { providedIn: 'root' }, withState({ /* state goes here */ }), ); Taking store extensibility to new heights, developers can add methods directly to the store. Methods act upon the state, enabling you to manipulate and update it directly. TypeScript export interface TaskState { tasks: Task[]; loading: boolean; } export const initialState: TaskState = { tasks: []; loading: false; } export const TaskStore = signalStore( { providedIn: 'root' }, withState(initialState), withMethods((store, taskService = inject(TaskService)) => ({ loadAllTasks() { // Use TaskService and then patchState(store, { tasks }); }, })) ); This method loadAllTasks is now available directly through the store itself. So in the component, we could do it in ngOnInit(): TypeScript @Component({ // ... providers: [TaskStore], }) export class AppComponent implements OnInit { readonly store = inject(TaskStore); ngOnInit() { this.store.loadAllTasks(); } } Harmony With Hooks The Signal Store introduces its own hooks, simplifying component code. By passing implemented methods into the hooks, developers can call them effortlessly: TypeScript export const TaskStore = signalStore( { providedIn: 'root' }, withState(initialState), withMethods(/* ... */), withHooks({ onInit({ loadAllTasks }) { loadAllTasks(); }, onDestroy() { console.log('on destroy'); }, }) ); This results in cleaner components, exemplified in the following snippet: TypeScript @Component({ providers: [TaskStore], }) export class AppComponent implements OnInit { readonly store = inject(TaskStore); // ngOnInit is NOT needed to load the Tasks !!!! 
} RxJS and Promises in Methods Flexibility takes center stage as @ngrx/signals seamlessly accommodates both RxJS and Promises: TypeScript import { rxMethod } from '@ngrx/signals/rxjs-interop'; export const TaskStore = signalStore( { providedIn: 'root' }, withState({ /* state goes here */ }), withMethods((store, taskService = inject(TaskService)) => ({ loadAllTasks: rxMethod<void>( pipe( switchMap(() => { patchState(store, { loading: true }); return taskService.getTasks().pipe( tapResponse({ next: (tasks) => patchState(store, { tasks }), error: console.error, finalize: () => patchState(store, { loading: false }), }) ); }) ) ), })) ); This snippet showcases the library's flexibility in handling asynchronous operations with RxJS. What I find incredibly flexible is that you can use RxJS or Promises to call your data. In the above example, you can see that we are using RxJS in our methods. The tapResponse method helps us use the response and manipulate the state with patchState again. But you can also use Promises. The caller of the method (the hooks in this case) does not care. TypeScript async loadAllTasksByPromise() { patchState(store, { loading: true }); const tasks = await taskService.getTasksAsPromise(); patchState(store, { tasks, loading: false }); }, Reading the Data With Finesse To read the data, the Signal Store introduces the withComputed() method. Similar to selectors, this method allows developers to compose and calculate values based on state properties: TypeScript export const TaskStore = signalStore( { providedIn: 'root' }, withState(initialState), withComputed(({ tasks }) => ({ completedCount: computed(() => tasks().filter((x) => x.completed).length), pendingCount: computed(() => tasks().filter((x) => !x.completed).length), percentageCompleted: computed(() => { const completed = tasks().filter((x) => x.completed).length; const total = tasks().length; if (total === 0) { return 0; } return (completed / total) * 100; }), })), withMethods(/* ... */), withHooks(/* ... */) ); In the component, these selectors can be effortlessly used: TypeScript @Component({ providers: [TaskStore], template: ` <div> {{ store.completedCount() }} / {{ store.pendingCount() }} {{ store.percentageCompleted() }} </div> ` }) export class AppComponent implements OnInit { readonly store = inject(TaskStore); } Modularizing for Elegance To elevate the elegance, selectors and methods can be neatly tucked into separate files. In these files, we use the signalStoreFeature method. With this, we can extract the methods and selectors to make the store even more beautiful. This method again has withComputed, withHooks, and withMethods for itself, so you can build your own features and hang them into the store.
// task.selectors.ts: TypeScript export function withTasksSelectors() { return signalStoreFeature( {state: type<TaskState>()}, withComputed(({tasks}) => ({ completedCount: computed(() => tasks().filter((x) => x.completed).length), pendingCount: computed(() => tasks().filter((x) => !x.completed).length), percentageCompleted: computed(() => { const completed = tasks().filter((x) => x.completed).length; const total = tasks().length; if (total === 0) { return 0; } return (completed / total) * 100; }), })), ); } // task.methods.ts: TypeScript export function withTasksMethods() { return signalStoreFeature( { state: type<TaskState>() }, withMethods((store, taskService = inject(TaskService)) => ({ loadAllTasks: rxMethod<void>( pipe( switchMap(() => { patchState(store, { loading: true }); return taskService.getTasks().pipe( tapResponse({ next: (tasks) => patchState(store, { tasks }), error: console.error, finalize: () => patchState(store, { loading: false }), }) ); }) ) ), async loadAllTasksByPromise() { patchState(store, { loading: true }); const tasks = await taskService.getTasksAsPromise(); patchState(store, { tasks, loading: false }); }, addTask: rxMethod<string>( pipe( switchMap((value) => { patchState(store, { loading: true }); return taskService.addTask(value).pipe( tapResponse({ next: (task) => patchState(store, { tasks: [...store.tasks(), task] }), error: console.error, finalize: () => patchState(store, { loading: false }), }) ); }) ) ), moveToCompleted: rxMethod<Task>( pipe( switchMap((task) => { patchState(store, { loading: true }); const toSend = { ...task, completed: !task.completed }; return taskService.updateTask(toSend).pipe( tapResponse({ next: (updatedTask) => { const allTasks = [...store.tasks()]; const index = allTasks.findIndex((x) => x.id === task.id); allTasks[index] = updatedTask; patchState(store, { tasks: allTasks, }); }, error: console.error, finalize: () => patchState(store, { loading: false }), }) ); }) ) ), deleteTask: rxMethod<Task>( pipe( switchMap((task) => { patchState(store, { loading: true }); return taskService.deleteTask(task).pipe( tapResponse({ next: () => { patchState(store, { tasks: [...store.tasks().filter((x) => x.id !== task.id)], }); }, error: console.error, finalize: () => patchState(store, { loading: false }), }) ); }) ) ), })) ); } This modular organization allows for a clean separation of concerns, making the store definition concise and easy to maintain. Streamlining the Store Definition With selectors and methods elegantly tucked away in their dedicated files, the store definition now takes on a streamlined form: // task.store.ts: TypeScript export const TaskStore = signalStore( { providedIn: 'root' }, withState(initialState), withTasksSelectors(), withTasksMethods(), withHooks({ onInit({ loadAllTasksByPromise: loadAllTasksByPromise }) { console.log('on init'); loadAllTasksByPromise(); }, onDestroy() { console.log('on destroy'); }, }) ); This modular approach not only enhances the readability of the store definition but also facilitates easy maintenance and future extensions. Our AppComponent then can get the Store injected and use the methods from the store, the selectors, and using the hooks indirectly. 
TypeScript @Component({ selector: 'app-root', standalone: true, imports: [CommonModule, RouterOutlet, ReactiveFormsModule], templateUrl: './app.component.html', styleUrl: './app.component.css', providers: [TaskStore], changeDetection: ChangeDetectionStrategy.OnPush, }) export class AppComponent { readonly store = inject(TaskStore); private readonly formbuilder = inject(FormBuilder); form = this.formbuilder.group({ taskValue: ['', Validators.required], completed: [false], }); addTask() { this.store.addTask(this.form.value.taskValue); this.form.reset(); } } The final app: In Closing In this deep dive into the @ngrx/signals library, we've unveiled a powerful tool for Angular state management. From its lightweight architecture to its seamless integration of RxJS and Promises, the library offers a delightful development experience. As you embark on your Angular projects, consider the elegance and simplicity that @ngrx/signals brings to the table. Whether you're starting a new endeavor or contemplating an upgrade, this library promises to be a valuable companion, offering a blend of simplicity, flexibility, and power in the dynamic world of Angular development. You can find the final code here. Happy coding!