Data structures
A well-designed data structure allows a variety of critical operations to be
performed using as few resources, in both execution time and memory space, as
possible. Moreover, a carefully chosen data structure allows a more efficient
algorithm to be used; consequently, this choice is of primary importance in
this project.
The two fundamental data structures on which the Phase 1 layer is built are lists
and trees, implemented in different ways depending on each individual case.
Below we describe their most significant variants.
1. Linear singly linked list
This simple data structure was chosen for the most trivial structures: the lists of unallocated PCBs and SDs, namely pcbFree and semdFree.
Since element insertion and deletion only ever involve the first element, what we were looking for was a stack with O(1) cost per operation, and that is exactly what a linear singly linked list gives us, with the added benefit of simplicity.
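As an illustration only, here is a minimal C sketch of such a free-list stack; the field name p_next and the function names allocPcb/freePcb follow common Kaya conventions but are assumptions here, not quoted from this page.
{{{
/* Minimal sketch of an O(1) free-list stack of unallocated PCBs.
 * The field p_next and the function names are illustrative assumptions. */
#include <stddef.h>

typedef struct pcb_t {
    struct pcb_t *p_next;      /* forward link to the next free PCB */
    /* ... other PCB fields ... */
} pcb_t;

static pcb_t *pcbFree = NULL;  /* head of the free list (stack top) */

/* Push: return a PCB to the free list in O(1). */
void freePcb(pcb_t *p)
{
    p->p_next = pcbFree;
    pcbFree = p;
}

/* Pop: take a PCB from the free list in O(1), or NULL if empty. */
pcb_t *allocPcb(void)
{
    pcb_t *p = pcbFree;
    if (p != NULL)
        pcbFree = p->p_next;
    return p;
}
}}}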
2. Circular doubly linked list
List management operations, such as element insertion and deletion, require
updating both the predecessor and the successor. For this reason a
previous-element pointer (p_prev) is added to the standard PCB structure,
so that these operations can be performed by scanning the list at most once.
Kaya Phase 1 takes advantage of circular doubly linked lists in every process queue, which the operating system uses extensively.
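To make the benefit concrete, the following is a rough sketch of tail insertion into a circular doubly linked process queue; the field names p_next/p_prev and the function name insertProcQ are assumed here for illustration, not quoted from this page.
{{{
/* Sketch: insert p at the tail of a circular doubly linked process queue.
 * tp points to the queue's tail pointer (NULL when the queue is empty). */
#include <stddef.h>

typedef struct pcb_t {
    struct pcb_t *p_next;   /* successor in the queue */
    struct pcb_t *p_prev;   /* predecessor in the queue */
    /* ... other PCB fields ... */
} pcb_t;

void insertProcQ(pcb_t **tp, pcb_t *p)
{
    if (*tp == NULL) {
        /* empty queue: p is linked to itself in both directions */
        p->p_next = p;
        p->p_prev = p;
    } else {
        pcb_t *head = (*tp)->p_next;   /* the head follows the tail */
        p->p_next = head;
        p->p_prev = *tp;
        (*tp)->p_next = p;
        head->p_prev = p;
    }
    *tp = p;                            /* p becomes the new tail */
}
}}}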
3. Semaphore binary tree
The importance of choosing the right data structure grows with the relevance of performance, and for semaphore descriptor management it becomes a primary goal. The physical address of a semaphore's queue is hidden from the higher-level view, so the Phase 1 layer must encapsulate the association between a semaphore address (semAdd) and its queue inside the semaphore descriptors (SDs).
This association must be searched for among the descriptors, and if they are kept in a sorted structure the computational cost decreases, since it is no longer necessary to scan every one of them. For this reason we opted for a data structure that is more complex and harder to maintain but, on the other hand, more efficient and scalable: a binary search tree.
Let us compare BSTs with a sorted list in more detail.
Most of the maintenance cost comes from keeping the structure sorted and,
consequently, from the cost of searching. While a search in a list costs O(n), a
binary search tree takes only O(log(n)), because the number of elements examined
is at most the height of the tree. However, the input distribution strongly
influences the height of a BST, and thus its balance, which makes O(log(n))
an estimate of the average-case cost. In particularly adverse situations a BST
degenerates into a linear list.
Summing up, in order to benefit from the performance and scalability of BSTs,
the input must be sufficiently large and well distributed.
As often happens, performance and memory usage risk colliding, and a good
compromise is needed.
We therefore tried to limit memory waste by single-linking the tree, using only
the s_leftC and s_rightC child pointers, at the cost of increased module
complexity and, consequently, harder implementation and maintenance.
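As a closing sketch, and under the assumption of a descriptor layout consistent with the description above, the SD node and the search by semAdd might look as follows; only s_leftC, s_rightC and semAdd appear on this page, while the remaining field names and findSemd are hypothetical.
{{{
/* Sketch of a semaphore descriptor (SD) organised as a BST keyed on the
 * semaphore address. Only s_leftC/s_rightC come from the text above; the
 * other names are illustrative assumptions. */
#include <stddef.h>
#include <stdint.h>

typedef struct semd_t {
    int           *s_semAdd;   /* semaphore physical address (search key) */
    struct pcb_t  *s_procQ;    /* queue of PCBs blocked on this semaphore */
    struct semd_t *s_leftC;    /* left child: smaller keys */
    struct semd_t *s_rightC;   /* right child: larger keys */
} semd_t;

/* Search the BST for the descriptor of semAdd: O(height), i.e. O(log(n))
 * on average and O(n) when the tree degenerates into a linear list. */
semd_t *findSemd(semd_t *root, int *semAdd)
{
    while (root != NULL && root->s_semAdd != semAdd) {
        if ((uintptr_t)semAdd < (uintptr_t)root->s_semAdd)
            root = root->s_leftC;
        else
            root = root->s_rightC;
    }
    return root;
}
}}}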