Concurrency and parallelism have long been buzzwords in the IT industry, and I believe most programmers are familiar with them. In this book on computer systems, the explanation is that concurrency refers to the general concept of a system with multiple simultaneous activities, while parallelism refers to using concurrency to make a system run faster.
This does not conflict with LZ's earlier understanding: concurrency is a model, and parallelism is one of the means of realizing that model.
Threads are introduced under the abstraction of the process, and thread-level concurrency means that multiple threads are active at the same time (though not necessarily executing at the exact same instant).
Computer systems have changed enormously, from single processors to today's multi-core, multi-processor systems, and even hyper-threading technology. This makes multithreaded programming all the more important; otherwise you cannot take advantage of multiple processors.
A multi-core processor is easy to understand: it physically places multiple CPU cores on one integrated circuit chip. Hyper-threading, meanwhile, is a technique that makes N physical cores appear as 2N logical cores. In hardware terms, hyper-threading duplicates certain parts of the CPU, such as the registers and program counter, while keeping only one copy of other parts, such as the ALU.
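The idea of thread-level concurrency on logical cores can be made concrete with a small sketch. This is LZ-style illustration code, not from the book: it asks the JVM how many logical processors it sees (with hyper-threading this is typically 2N for N physical cores) and runs one worker thread per logical core, so several threads are active at once.

```java
// Minimal sketch (all names here are illustrative): spawn one worker
// thread per logical processor reported by the JVM.
public class CoreDemo {
    // Work done by each thread: the sum 0 + 1 + ... + n.
    static long sumTo(int n) {
        long sum = 0;
        for (int k = 0; k <= n; k++) sum += k;
        return sum;
    }

    public static void main(String[] args) throws InterruptedException {
        // Logical cores, i.e. what hyper-threading exposes to software.
        int logical = Runtime.getRuntime().availableProcessors();
        System.out.println("logical processors: " + logical);

        Thread[] workers = new Thread[logical];
        long[] results = new long[logical];
        for (int i = 0; i < logical; i++) {
            final int id = i;
            workers[i] = new Thread(() -> results[id] = sumTo(1_000_000));
            workers[i].start();   // all workers are now active concurrently
        }
        for (Thread t : workers) t.join();   // wait for every worker
        System.out.println(results[0]);      // 500000500000
    }
}
```

Each thread writes to its own slot of `results`, so no synchronization beyond `join()` is needed.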
Instruction-level parallelism
The book explains instruction-level parallelism as the property of a processor that can execute multiple instructions at the same time. In practice, instruction-level parallelism exploits the fact that executing an instruction involves several distinct stages, each of which uses only part of the CPU's hardware at any given moment, so the stages of different instructions can be overlapped and multiple instructions executed in parallel.
Better yet, many modern processors can sustain a rate of more than one instruction per clock cycle, i.e., less than one cycle per instruction on average. Such processors are called superscalar processors.
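One way software can cooperate with a superscalar processor is to break long dependency chains. The sketch below is LZ's own illustration (not from the book): the two-accumulator version gives the hardware two independent chains of additions, which a superscalar core can issue in the same cycle.

```java
// Illustration of why independent operations help instruction-level
// parallelism. Both methods compute the same sum.
public class IlpDemo {
    // One long dependency chain: every add waits on the previous add.
    static long sumSingle(long[] a) {
        long acc = 0;
        for (long x : a) acc += x;
        return acc;
    }

    // Two independent chains: the adds into acc0 and acc1 do not depend
    // on each other, so a superscalar core can overlap them.
    static long sumDual(long[] a) {
        long acc0 = 0, acc1 = 0;
        int i = 0;
        for (; i + 1 < a.length; i += 2) {
            acc0 += a[i];
            acc1 += a[i + 1];
        }
        if (i < a.length) acc0 += a[i];   // leftover element, if any
        return acc0 + acc1;
    }

    public static void main(String[] args) {
        long[] data = new long[1_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i;
        System.out.println(sumSingle(data) == sumDual(data)); // true
    }
}
```

Whether the dual version is actually faster depends on the JIT compiler and the CPU; the point is only that independent operations are what instruction-level parallelism feeds on.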
Single instruction, multiple data parallelism
Single-instruction, multiple-data (SIMD) parallelism refers to one instruction producing multiple operations that are performed in parallel. Many of today's processors are equipped with special hardware to achieve this. Since multiple operations are performed in parallel, multiple data items are involved; in layman's terms, one instruction operates on multiple data. As in the book's example, some processors have instructions that add 4 pairs of single-precision floating-point numbers in parallel.
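To make the "4 pairs at once" idea concrete, here is a plain-Java sketch of LZ's own making. Real SIMD is done by special CPU instructions (in Java, the incubating Vector API can expose them); this version only mimics the data layout, treating the whole method as if it were a single hardware instruction.

```java
// Conceptual SIMD sketch: one "operation" yields four results at once.
public class SimdSketch {
    // Pretend this method is a single instruction that adds four pairs
    // of single-precision floats in parallel.
    static float[] add4(float[] a, float[] b) {
        return new float[] { a[0] + b[0], a[1] + b[1],
                             a[2] + b[2], a[3] + b[3] };
    }

    public static void main(String[] args) {
        float[] x = {1f, 2f, 3f, 4f};
        float[] y = {10f, 20f, 30f, 40f};
        // One conceptual instruction, four additions.
        System.out.println(java.util.Arrays.toString(add4(x, y)));
        // [11.0, 22.0, 33.0, 44.0]
    }
}
```

On real hardware the four lanes are added simultaneously by one instruction, which is exactly where the speedup comes from.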
Brief talk about abstraction
The importance of abstraction in computer science cannot be overstated. Abstraction makes concrete implementations easier to describe, and it can also serve as a specification that tells implementers what they must provide.
To give a simple example, classes in Java are actually implemented by the compiler and the JVM, and the JVM is itself an abstract concept that in turn has concrete implementations. In the HotSpot virtual machine we usually use, a class is implemented by storing the class metadata in the permanent generation (the metaspace since JDK 8), while instances are stored in the heap, and each instance holds a reference to its class metadata. So when we operate on an instance, the JVM uses the class metadata to determine what operation we requested and execute it.
The above is LZ's personal understanding of how classes are implemented. Readers unfamiliar with the JVM may be confused by this description, but it doesn't matter: you only need to know that `class` declares a class, and that after you create an instance, `instanceName.methodName()` calls its methods and `instanceName.fieldName` reads its fields (for simplicity, access restrictions are ignored here). This makes classes easier to work with, which is one of the meanings of abstraction, and it explains the first sentence of the description of abstraction above.
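The "each instance holds a reference to its class metadata" point can be observed directly in Java. This tiny demo (LZ's own illustration; the `Point` class is made up for the example) shows that two instances share one and the same `Class` object, which is the language-level view of that shared metadata.

```java
// Two instances, one shared piece of class metadata.
public class ClassMetaDemo {
    static class Point { int x, y; }   // hypothetical example class

    public static void main(String[] args) {
        Point p1 = new Point();
        Point p2 = new Point();
        // Both instances reference the same class metadata, exposed in
        // Java as a single shared java.lang.Class object.
        System.out.println(p1.getClass() == p2.getClass()); // true
        System.out.println(p1.getClass().getSimpleName());  // Point
    }
}
```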
The second sentence is easier to understand. The Java virtual machine is an abstraction, and with this abstraction we can formulate a specification for it, namely the Java Virtual Machine Specification.
This time we briefly covered the concepts of concurrency and parallelism, and the importance of abstraction to computer science.
In the next chapter, LZ will enter a brand-new world with you, a world full of 1s and 0s and of theorems and proofs, so that part may be a bit dry. If LZ's explanations do not help you understand the book better, you can read the original text instead, or compare the book with LZ's articles side by side. Either way, LZ hopes you will not give up halfway; after all, although practice is important, it still needs theoretical support.