Project Loom: Lightweight Java threads
When I ran this code and timed it, I got the numbers shown here; performance improves further when the work is handed to a thread pool created with Executors.newCachedThreadPool(). This kind of control is not difficult in a language like JavaScript, where functions are easily referenced and can be called at will to direct execution flow.
This is a sad case of a good and natural abstraction being abandoned in favor of a less natural one, which is worse in many respects, merely because of the abstraction's runtime performance characteristics. Working with virtual threads gives you the same stack traces and thread dumps, and they work with the same debuggers and profiling tools. In any case, don't keep blocking inside Reactive code in a project built on a Reactive framework just because you are using Loom's virtual threads; mixing the two models defeats the framework's assumptions.
Structured concurrency aims to simplify multi-threaded and parallel programming. It treats multiple tasks running in different threads as a single unit of work, streamlining error handling and cancellation while improving reliability and observability. This helps to avoid issues like thread leaks and cancellation delays.
Java Development Kit 1.1 had basic support for platform threads (that is, operating system threads), and JDK 1.5 added more utilities and updates to improve concurrency and multi-threading. JDK 8 brought asynchronous programming support and further concurrency improvements. While things have continued to improve over many versions, beyond this OS-thread-based concurrency model there has been nothing groundbreaking in Java for the last three decades. As mentioned, the new Fiber class represents a virtual thread. Why go to this trouble, instead of just adopting something like ReactiveX at the language level? The answer is both to make it easier for developers to understand, and to make it easier to move the universe of existing code.
If the thread executing handleOrder() is interrupted, the interruption is not propagated to the subtasks; in this case updateInventory() and updateOrder() will leak and continue to run in the background.

Already, Java and its primary server-side competitor, Node.js, are neck and neck in performance. An order-of-magnitude boost to Java performance in typical web-app use cases could alter the landscape for years to come.
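The leak described above can be reproduced with a plain ExecutorService. The following is a minimal sketch, not the article's original listing; updateInventory() here is a hypothetical stand-in that just sleeps to simulate slow I/O:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicBoolean;

public class LeakDemo {
    static final AtomicBoolean subtaskStillRunning = new AtomicBoolean(false);

    // Hypothetical subtask: once started, nothing cancels it automatically.
    static String updateInventory() throws InterruptedException {
        subtaskStillRunning.set(true);
        Thread.sleep(2_000);              // simulates slow I/O
        subtaskStillRunning.set(false);
        return "inventory updated";
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Future<String> inventory = pool.submit(LeakDemo::updateInventory);
        try {
            // The caller waits only briefly, then gives up...
            inventory.get(100, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            // ...but the subtask was never canceled: it leaks and keeps running.
            Thread.sleep(200);
            System.out.println("subtask still running: " + subtaskStillRunning.get());
        }
        pool.shutdownNow();               // explicit cleanup is the caller's burden
    }
}
```

Cancelling the Future or shutting down the pool must be done by hand; forgetting either is exactly the leak structured concurrency is designed to rule out.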
Learn about Project Loom and lightweight concurrency for Java and the JVM.
If the preview gets the expected feedback, the preview status of virtual threads is expected to be removed by the release of JDK 21. For instance, the Thread.ofVirtual() method returns a builder that can start a virtual thread or create a ThreadFactory. Similarly, the Executors.newVirtualThreadPerTaskExecutor() method has been added, which creates an ExecutorService that uses virtual threads. You can use these features by adding the --enable-preview JVM argument during compilation and execution, as with any other preview feature. In contrast to platform threads, virtual threads (also known as user threads or green threads) are scheduled by the application instead of the operating system.
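On a JDK where virtual threads are final (JDK 21 and later, no preview flag needed), the two APIs mentioned above can be exercised like this; the task bodies are illustrative only:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class VirtualThreadDemo {
    public static void main(String[] args) throws Exception {
        // Thread.ofVirtual() returns a builder for virtual threads.
        Thread vt = Thread.ofVirtual()
                .name("my-virtual-thread")
                .start(() -> System.out.println(
                        "isVirtual = " + Thread.currentThread().isVirtual()));
        vt.join();

        // Executors.newVirtualThreadPerTaskExecutor() creates an ExecutorService
        // that starts a fresh virtual thread for each submitted task.
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            String result = executor.submit(() -> "done").get();
            System.out.println(result);
        }
    }
}
```

Note that ExecutorService is AutoCloseable since JDK 19, so try-with-resources waits for submitted tasks and shuts the executor down.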
The JVM, being the application, gets full control over all the virtual threads and the whole scheduling process when working with Java. Virtual threads play an important role in serving concurrent requests from users and other applications. A thread supports the concurrent execution of instructions in modern high-level programming languages and operating systems.
Why are some Java calls blocking?
Because a language-runtime implementation of threads is not required to support arbitrary native code, we gain more flexibility over how to store continuations, which allows us to reduce footprint. The goal of this project is to add a lightweight thread construct, fibers, to the Java platform. What user-facing form this construct may take is discussed below.
This can streamline error handling and cancellation, improve reliability, and enhance observability. OS threads are at the core of Java's concurrency model and have a very mature ecosystem around them, but they also come with drawbacks and are computationally expensive. Let's look at the two most common use cases for concurrency and the drawbacks of the current Java concurrency model in each.
One downside of this solution is that these APIs are complex, and integrating them with legacy APIs is also a pretty involved process. Most concurrent applications developed in Java require some level of synchronization between threads for every request to work properly, owing to the high number of threads working concurrently. As a result, context switching takes place between the threads, which is an expensive operation that hurts the application's performance. In the literature, nested continuations that allow such behavior are sometimes called "delimited continuations with multiple named prompts," but we'll call them scoped continuations.
Cancellation propagation: if the thread running handleOrder() is interrupted before or during the call to join(), both forks are canceled automatically when the thread exits the scope. Without this, we would have to carefully write workarounds and failsafes for such situations, putting all the burden on the developer. Traditional Java concurrency is managed with the Thread and Runnable classes, as seen in Listing 1.
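Listing 1 itself is not reproduced here, but a minimal sketch of the traditional Thread-and-Runnable style it refers to might look like this (class and thread names are placeholders):

```java
public class ClassicThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        // A Runnable holds the task; a Thread is the OS-backed carrier that runs it.
        Runnable task = () ->
                System.out.println("running in " + Thread.currentThread().getName());

        Thread thread = new Thread(task, "worker-1");
        thread.start();   // hands the task to a (relatively expensive) platform thread
        thread.join();    // wait for completion
    }
}
```

Each such Thread maps one-to-one onto an OS thread, which is precisely the cost that virtual threads remove.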
- The Thread.ofVirtual() method returns a builder that can start a virtual thread or create a ThreadFactory.
- JEP 428, Structured Concurrency, proposes to simplify multithreaded programming by introducing a library to treat multiple tasks running in different threads as a single unit of work.
- This can streamline error handling and cancellation, improve reliability, and enhance observability.
- In particular, they refer only to the abstraction allowing programmers to write sequences of code that can run and pause, and not to any mechanism of sharing information among threads, such as shared memory or passing messages.
- Fibers will be mostly implemented in Java in the JDK libraries, but may require some support in the JVM.
- Comprehensibility — Makes the lifetime of shared data visible from the syntactic structure of code.
With the rise of web-scale applications, this threading model can become the major bottleneck for the application. The Fiber class would wrap each task in an internal user-mode continuation. This means the task can be suspended and resumed in the Java runtime instead of the operating system kernel. Whenever the caller resumes the continuation after it has been suspended, control returns to the exact point where it was suspended. Project Loom also allows the use of pluggable schedulers with the fiber class.
The Helidon team prototyped a replacement web server using Loom and called it Wrap. The operating system implements threads too heavily, which is the crux of the problem. The rest of the code is identical to the previous standard-thread example. Here are the timings for the three runs of the code above. In VisualVM, we can also confirm that the number of threads in this case is low. You can also block in Reactive code; it's just normally not a good idea.
Releases
JEP 425, Virtual Threads, introduces virtual threads to the Java platform: lightweight threads that dramatically reduce the effort of writing, maintaining, and observing high-throughput concurrent applications. If fibers are represented by the same Thread class, a fiber's underlying kernel thread would be inaccessible to user code, which seems reasonable but has a number of implications. For one, it would require more work in the JVM, which makes heavy use of the Thread class and would need to be aware of a possible fiber implementation. It also creates some circularity when writing schedulers, which need to implement fibers by assigning them to kernel threads. This means that we would need to expose the fiber's continuation for use by the scheduler. Recent years have seen the introduction of many asynchronous APIs to the Java ecosystem, from asynchronous NIO in the JDK to asynchronous servlets and many asynchronous third-party libraries.
Java Concurrency With Project Loom
It's much more complicated: harder to write, harder to read, and much harder to debug or profile, because the platform, in all of its layers and all of its tools, is built around threads. We can scale well beyond the limits posed by the thread-per-request model, but at a huge cost. One option to deal with this limit is a reactive framework, which avoids representing concurrent operations directly as threads. Let's rewrite this processing test, this time with a bunch of threads; in this first case, we're going all out, not limiting the number of threads we use for the processing. You need to manually enable preview features in the project's language level, as shown in the screenshot below. The current implementation of virtual threads available in the OpenJDK build of the JDK is not entirely complete yet, but you can already get a good taste of how things will shape up.
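The "all out, unlimited threads" test described above might look roughly like the following sketch. The task body and the task count are placeholders, not the author's original benchmark:

```java
import java.util.ArrayList;
import java.util.List;

public class AllOutThreads {
    public static void main(String[] args) throws InterruptedException {
        int tasks = 1_000;   // placeholder count; the original test's number is not shown
        List<Thread> threads = new ArrayList<>();
        long start = System.nanoTime();
        for (int i = 0; i < tasks; i++) {
            Thread t = new Thread(() -> {
                try {
                    Thread.sleep(10);   // simulate a blocking operation
                } catch (InterruptedException ignored) { }
            });
            threads.add(t);
            t.start();   // one platform thread per task: fine at 1,000, fatal at 1,000,000
        }
        for (Thread t : threads) t.join();
        System.out.printf("ran %d tasks in %d ms%n",
                tasks, (System.nanoTime() - start) / 1_000_000);
    }
}
```

At web scale the unbounded version exhausts memory and scheduler capacity, which is what motivates either a pool, a reactive framework, or virtual threads.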
Unlike the previous sample using ExecutorService, we can now use StructuredTaskScope to achieve the same result while confining the lifetimes of the subtasks to the lexical scope, in this case, the body of the try-with-resources statement. The code is much more readable, and the intent is also clear. StructuredTaskScope also ensures the following behavior automatically.
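Assuming a JDK where the structured concurrency API is available as a preview feature (JDK 21 with --enable-preview), the pattern described above looks roughly like the sketch below. handleOrder() and the subtask bodies are illustrative, not the article's original code:

```java
import java.util.concurrent.StructuredTaskScope;

public class OrderDemo {
    static String updateInventory() { return "inventory ok"; }
    static String updateOrder()     { return "order ok"; }

    static String handleOrder() throws Exception {
        // The try-with-resources block confines both subtasks' lifetimes:
        // neither fork can outlive the scope, and failure or interruption
        // of one shuts down the other.
        try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
            var inventory = scope.fork(OrderDemo::updateInventory);
            var order     = scope.fork(OrderDemo::updateOrder);
            scope.join();            // wait for both forks
            scope.throwIfFailed();   // propagate the first failure, if any
            return inventory.get() + ", " + order.get();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(handleOrder());
    }
}
```

ShutdownOnFailure is one of the built-in policies; ShutdownOnSuccess is the other, for "first result wins" scenarios.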
Project Loom
Examples range from hidden code, like loading classes from disk, to user-facing functionality, such as synchronized and Object.wait. As the fiber scheduler multiplexes many fibers onto a small set of worker kernel threads, blocking a kernel thread may take a significant portion of the scheduler's available resources out of commission, and should therefore be avoided. On one extreme, each of these cases could be made fiber-friendly, i.e., block only the fiber rather than the underlying kernel thread when triggered by a fiber; on the other extreme, all cases could continue to block the underlying kernel thread. In between, we may make some constructs fiber-blocking while leaving others kernel-thread-blocking.
This will increase performance and scalability in most cases, based on the benchmarks out there. Structured concurrency can help simplify multi-threading and parallel-processing use cases and make them less fragile and more maintainable. In a thread-per-request model, by contrast, throughput is limited by the number of OS threads available, which depends on the number of physical cores and hardware threads.