Java is a mainstream platform that handles concurrency with threads: java.lang.Thread, which maps one-to-one onto a native operating system thread.
This model works well for applications that only need a modest number of threads. But as soon as you apply it to a system operating at a larger scale, you run into problems. Here is why the native model does not scale:
At scale, the thread-per-task approach to concurrency causes trouble for Java applications. Consider a web server handling 700 concurrent requests with one thread per request: at roughly 1 MB of stack per OS thread, that is around 700 MB of memory for thread stacks alone.
And the server does not spend threads only on requests; the JVM and its libraries create threads of their own, so memory and scheduling overhead pile up until the system clogs and degrades. This problem needs a solution. Enter Project Loom.
Project Loom is still in development and will keep evolving, but it already offers Java applications a promising solution.
The idea of Project Loom is to explore and deliver Java Virtual Machine (JVM) features and built-in APIs that support lightweight concurrency. Its central new concept is the Virtual Thread.
A Virtual Thread is a thread, but far more lightweight than a traditional one. Virtual Threads are managed by the JVM rather than by the operating system, as traditional threads are. The new design fits the existing Java APIs, so synchronous (blocking) code keeps working.
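As a minimal sketch of what this looks like (assuming JDK 21 or later, where the virtual-thread API was finalized), starting a Virtual Thread is almost identical to starting a platform thread, and blocking calls such as join() work unchanged:

```java
public class VirtualThreadHello {
    public static void main(String[] args) throws InterruptedException {
        // Thread.ofVirtual() returns a builder for JVM-managed virtual threads
        Thread vt = Thread.ofVirtual()
                .name("hello-virtual")
                .start(() -> System.out.println(
                        "virtual = " + Thread.currentThread().isVirtual()));
        vt.join(); // blocking join works exactly as with platform threads
    }
}
```

The existing Thread API is reused on purpose: code that already blocks on a Thread keeps working when that thread happens to be virtual.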
Let’s dive into the world of Project Loom and find out more about the new project.
The idea behind the creation of Project Loom is to make the process of writing, debugging, and maintaining concurrent applications easier. At the same time, the project aims to make sure that the new approach meets all modern requirements.
The current threading approach native to Java is outdated given modern demands. Creating each thread spawns a corresponding operating system thread, which wastes computing resources. At cloud scale, that waste translates directly into cost: lighter threads mean smaller bills.
Loom introduces a new kind of thread, the Fiber: a much more lightweight thread that does not need a corresponding OS thread, because the Java Virtual Machine manages it. The way applications are written will not change much, but each application's resource footprint will shrink significantly, and it will no longer need to reserve maximum computing resources.
The new thread, the Fiber, consists of just two components: a continuation and a scheduler. Java already has an excellent scheduler, ForkJoinPool, so the project mainly needs to add continuations to the Java Virtual Machine.
When Java was first launched, writing and running concurrent applications was comparatively easy. The problem is that modern demands have grown: the software's unit of concurrency can no longer match the domain's unit of concurrency.
In principle, a server can hold millions of concurrent open sockets. In practice, the current Java runtime limits it to a few thousand, because the runtime must create a corresponding OS thread for each one, and the operating system cannot handle millions of threads.
The project's goal is to add this new type of thread, the Fiber. The developers also plan to experiment with new schedulers and continuations, but mainly with continuations, since an existing scheduler can already serve Fibers.
Developers launch new projects all the time. But the question is: why? If a system already works, why learn something new? In the case of Project Loom, the answer is simple: traditional Java threads do not scale.
Here is how the threading system within Java works:
But why do you even need threads, let alone so many of them? A CPU offers only a limited number of hardware threads; an Intel Core i9, for instance, exposes 16.
The reason is that an application does not use only the CPU; it also relies on the input/output (I/O) system.
The moment a thread blocks on I/O, the operating system parks it and schedules another thread onto the CPU. Running more threads lets the machine keep making progress on several tasks at once while some of them wait.
A standard example is a web server. It can handle thousands of requests simultaneously, but each one needs its own thread, and as mentioned, creating thousands of OS threads puts a heavy load on the machine's operating system.
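A simplified sketch of this classic thread-per-request pattern: each task occupies a full OS thread for the duration of its blocking work (the pool size and the sleep that stands in for I/O are made-up illustration values):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ThreadPerRequest {
    public static void main(String[] args) throws InterruptedException {
        // Each pooled platform thread is backed by an OS thread (about 1 MB of
        // stack), so the pool size caps how many blocking requests run at once.
        ExecutorService pool = Executors.newFixedThreadPool(200);
        for (int i = 0; i < 1_000; i++) {
            final int request = i;
            pool.submit(() -> handle(request));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println("done");
    }

    static void handle(int request) {
        try {
            Thread.sleep(10); // stand-in for a blocking I/O call
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

With 1,000 requests but only 200 threads, 800 requests are always waiting in the queue; raising the pool size raises memory use instead. That trade-off is exactly what Loom targets.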
That’s why the traditional threading system needs improvements. This is exactly what Java’s Project Loom is offering.
The main idea of Project Loom revolves around the Fiber. It is also called a Virtual, Green, or User thread, since the OS is not involved in creating it at all. The burden of creating Fibers rests on the virtual "shoulders" of the JVM.
This means that not every thread needs a counterpart in the operating system, as is the case with traditional threading.
A Fiber can still block on I/O or wait for another thread, but it does not block its underlying carrier thread; many virtual threads share the same small set of underlying OS threads.
In simple words, if one virtual thread is blocked, its underlying thread is handed to another virtual thread. That is how Project Loom proposes to fix system clogging and disruption.
But that is not the project's only advantage, if it works as advertised. A Fiber's memory footprint is measured not in megabytes but in kilobytes, and its stack can grow on demand, so the Java Virtual Machine does not have to reserve a large stack up front.
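The footprint difference is easy to demonstrate. The sketch below (assuming JDK 21 or later) starts 100,000 virtual threads, a count that would exhaust memory if each needed an OS thread with a megabyte-sized stack:

```java
import java.util.concurrent.CountDownLatch;

public class ManyVirtualThreads {
    public static void main(String[] args) throws InterruptedException {
        int count = 100_000; // far more than a machine could hold as OS threads
        CountDownLatch latch = new CountDownLatch(count);
        for (int i = 0; i < count; i++) {
            // Each virtual thread costs kilobytes, not megabytes
            Thread.ofVirtual().start(latch::countDown);
        }
        latch.await(); // wait until every virtual thread has run
        System.out.println("ran " + count + " virtual threads");
    }
}
```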
One solution leads to another, like a chain reaction of improvements. Now that threads are as lightweight as possible, new ways of using ExecutorService become possible.
ExecutorService is the traditional construct for implementing concurrency in the Java world, and its APIs are convenient and easy to use.
An ExecutorService keeps an internal pool that limits how many threads exist at a time, according to the developer's settings. That pool exists precisely to cap the number of corresponding OS threads; without it, threads would exhaust operating system resources.
But now that threads are lightweight, thanks to Loom, ExecutorService can be used differently as well. One improvement within the system leads to another.
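One concrete consequence: when threads are cheap, there is nothing to pool. The executor sketched below (assuming JDK 21 or later, where this factory method shipped) simply creates a fresh virtual thread per task:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class VirtualExecutorDemo {
    public static void main(String[] args) {
        AtomicInteger completed = new AtomicInteger();
        // One new virtual thread per task: no pool size to tune,
        // because virtual threads are cheap enough not to pool.
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                executor.submit(() -> { completed.incrementAndGet(); });
            }
        } // close() waits for all submitted tasks to finish
        System.out.println("completed = " + completed.get());
    }
}
```

Note the design shift: the fixed-size pool from the earlier example existed to protect the OS from too many threads; here that protection is no longer needed.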
Java's existing style of concurrent programming is largely unstructured, and that has many flaws. Project Loom addresses this by bringing structured concurrency to the platform.
With a conventional thread-based job runner such as ThreadBasedJobRunner, the code ends up longer, with more lines; the equivalent code written with Loom shrinks.
The new approach also offers a new way to create thread factories. The Thread class gains a builder, obtained through a new static method, and that builder can produce both Fibers and ThreadFactory instances.
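In the API as it eventually shipped in JDK 21, the builder is obtained via Thread.ofVirtual() (or Thread.ofPlatform() for classic threads), and the same builder can hand out a ThreadFactory, a rough sketch:

```java
import java.util.concurrent.ThreadFactory;

public class VirtualThreadFactoryDemo {
    public static void main(String[] args) throws InterruptedException {
        // The builder that starts virtual threads can also act as a ThreadFactory;
        // name("worker-", 0) numbers the threads worker-0, worker-1, ...
        ThreadFactory factory = Thread.ofVirtual().name("worker-", 0).factory();
        Thread t = factory.newThread(() ->
                System.out.println(Thread.currentThread().getName()
                        + " is virtual: " + Thread.currentThread().isVirtual()));
        t.start();
        t.join();
    }
}
```

This matters for existing code: any library that accepts a ThreadFactory can be pointed at virtual threads without other changes.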
The project is still in development, but it is already showing great results; one area the Loom developers can still improve is speed.
Many applications can adopt Loom partially or fully. The lightness of its threads, and the more economical use of system resources it allows, make it a great option for application developers.
Project Loom offers a better way to use a machine's computing resources. Its main contribution is the new thread, the Fiber, or Green Thread. Here are some of its main benefits:
The last point, eliminating the need for a corresponding OS thread per Fiber, is one of Loom's best advantages. Huge-scale JVM applications could run millions of concurrent operations simultaneously, but the poor scalability of traditional threading holds them back.
These huge-scale apps simply cannot reach their full potential because of limits imposed by the operating system. That is exactly the issue Loom aims to solve by introducing the Fiber. As mentioned, the use of ExecutorService can be improved as well.
The project is still in the stage of development, but it already offers great solutions to current problems. Thanks to the introduction of Fiber, it is possible that servers will be able to run millions of concurrent operations without overusing computing resources.
The traditional threading approach is outdated and slows down many applications. The new Fiber-based approach unlocks the full potential of a Java server, letting it run millions of operations, because each new Fiber is created without a corresponding operating system thread.
Moreover, the Fiber also solves system clogging: whenever I/O blocks a Fiber, another one takes over the underlying thread, and the system keeps running without disruption. The release may surface some bugs, but current information about the project looks promising.