
Integer sorting


In computer science, integer sorting is the algorithmic problem of sorting a collection of data values by integer keys. Algorithms designed for integer sorting may also often be applied to sorting problems in which the keys are floating point numbers, rational numbers, or text strings.[1] The ability to perform integer arithmetic on the keys allows integer sorting algorithms to be faster than comparison sorting algorithms in many cases, depending on the details of which operations are allowed in the model of computing and how large the integers to be sorted are.

Integer sorting algorithms including pigeonhole sort, counting sort, and radix sort are widely used and practical. Other integer sorting algorithms with smaller worst-case time bounds are not believed to be practical for computer architectures with 64 or fewer bits per word. Many such algorithms are known, with performance depending on a combination of the number of items to be sorted, the number of bits per key, and the number of bits per word of the computer performing the sorting algorithm.

General considerations

Models of computation

Time bounds for integer sorting algorithms typically depend on three parameters: the number n of data values to be sorted, the magnitude K of the largest possible key to be sorted, and the number w of bits that can be represented in a single machine word of the computer on which the algorithm is to be performed. Typically, it is assumed that w ≥ log2(max(n, K)); that is, that machine words are large enough to represent an index into the sequence of input data, and also large enough to represent a single key.[2]

Integer sorting algorithms are usually designed to work in either the pointer machine or random access machine models of computing. The main difference between these two models is in how memory may be addressed. The random access machine allows any value that is stored in a register to be used as the address of memory read and write operations, with unit cost per operation. This ability allows certain complex operations on data to be implemented quickly using table lookups. In contrast, in the pointer machine model, read and write operations use addresses stored in pointers, and it is not allowed to perform arithmetic operations on these pointers. In both models, data values may be added, and bitwise Boolean operations and binary shift operations may typically also be performed on them, in unit time per operation. Different integer sorting algorithms make different assumptions, however, about whether integer multiplication is also allowed as a unit-time operation.[3] Other more specialized models of computation such as the parallel random access machine have also been considered.[4]

Andersson, Miltersen & Thorup (1999) showed that in some cases the multiplications or table lookups required by some integer sorting algorithms could be replaced by customized operations that would be more easily implemented in hardware but that are not typically available on general-purpose computers. Thorup (2003) improved on this by showing how to replace these special operations by the bit field manipulation instructions already available on Pentium processors.

In external memory models of computing, no known integer sorting algorithm is faster than comparison sorting. Researchers have shown that, in these models, restricted classes of algorithms that are limited in how they manipulate their keys cannot be faster than comparison sorting,[5] and that an integer sorting algorithm that is faster than comparison sorting would imply the falsity of a standard conjecture in network coding.[6]

Sorting versus integer priority queues

A priority queue is a data structure for maintaining a collection of items with numerical priorities, having operations for finding and removing the item with the minimum priority value. Comparison-based priority queues such as the binary heap take logarithmic time per update, but other structures such as the van Emde Boas tree or bucket queue may be faster for inputs whose priorities are small integers. These data structures can be used in the selection sort algorithm, which sorts a collection of elements by repeatedly finding and removing the smallest element from the collection, and returning the elements in the order they were found. A priority queue can be used to maintain the collection of elements in this algorithm, and the time for this algorithm on a collection of n elements can be bounded by the time to initialize the priority queue and then to perform n find and remove operations. For instance, using a binary heap as a priority queue in selection sort leads to the heap sort algorithm, a comparison sorting algorithm that takes O(n log n) time. Instead, using selection sort with a bucket queue gives a form of pigeonhole sort, and using van Emde Boas trees or other integer priority queues leads to other fast integer sorting algorithms.[7]
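
As a concrete illustration of this reduction, here is a minimal Python sketch (the name pq_sort is ours) of selection sort driven by a priority queue, instantiated with the standard library's binary heap; substituting an integer priority queue such as a bucket queue into the same loop would yield pigeonhole sort instead of heapsort:

    import heapq

    def pq_sort(items):
        # Selection sort on top of a priority queue: repeatedly find and
        # remove the minimum element.  With a binary heap this is heapsort,
        # taking O(n log n) time overall.
        heap = list(items)
        heapq.heapify(heap)            # initialize the priority queue in O(n)
        return [heapq.heappop(heap) for _ in range(len(heap))]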

Instead of using an integer priority queue in a sorting algorithm, it is possible to go the other direction, and use integer sorting algorithms as subroutines within an integer priority queue data structure. Thorup (2007) used this idea to show that, if it is possible to perform integer sorting in time T(n) per key, then the same time bound applies to the time per insertion or deletion operation in a priority queue data structure. Thorup's reduction is complicated and assumes the availability of either fast multiplication operations or table lookups, but he also provides an alternative priority queue using only addition and Boolean operations with time T(n) + T(log n) + T(log log n) + ... per operation, at most multiplying the time by an iterated logarithm.[7]

Usability

The classical integer sorting algorithms of pigeonhole sort, counting sort, and radix sort are widely used and practical.[8] Much of the subsequent research on integer sorting algorithms has focused less on practicality and more on theoretical improvements in their worst case analysis, and the algorithms that come from this line of research are not believed to be practical for current 64-bit computer architectures, although experiments have shown that some of these methods may be an improvement on radix sorting for data with 128 or more bits per key.[9] Additionally, for large data sets, the near-random memory access patterns of many integer sorting algorithms can handicap them compared to comparison sorting algorithms that have been designed with the memory hierarchy in mind.[10]

Integer sorting provides one of the six benchmarks in the DARPA High Productivity Computing Systems Discrete Mathematics benchmark suite,[11] and one of eleven benchmarks in the NAS Parallel Benchmarks suite.

Practical algorithms

Pigeonhole sort or counting sort can both sort n data items having keys in the range from 0 to K − 1 in time O(n + K). In pigeonhole sort (often called bucket sort), pointers to the data items are distributed to a table of buckets, represented as collection data types such as linked lists, using the keys as indices into the table. Then, all of the buckets are concatenated together to form the output list.[12] Counting sort uses a table of counters in place of a table of buckets, to determine the number of items with each key. Then, a prefix sum computation is used to determine the range of positions in the sorted output at which the values with each key should be placed. Finally, in a second pass over the input, each item is moved to its key's position in the output array.[13] Both algorithms involve only simple loops over the input data (taking time O(n)) and over the set of possible keys (taking time O(K)), giving their O(n + K) overall time bound.
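
A minimal Python sketch of counting sort as just described, with the histogram, prefix sum, and stable placement passes marked (function and parameter names are ours):

    def counting_sort(data, K, key=lambda x: x):
        # Sort data items whose keys lie in range(K) in O(n + K) time.
        count = [0] * K
        for item in data:              # histogram: count items with each key
            count[key(item)] += 1
        total = 0
        for k in range(K):             # prefix sums: first output position per key
            count[k], total = total, total + count[k]
        out = [None] * len(data)
        for item in data:              # stable second pass over the input
            out[count[key(item)]] = item
            count[key(item)] += 1
        return out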

Radix sort is a sorting algorithm that works for larger keys than pigeonhole sort or counting sort by performing multiple passes over the data. Each pass sorts the input using only part of the keys, by using a different sorting algorithm (such as pigeonhole sort or counting sort) that is suited only for small keys. To break the keys into parts, the radix sort algorithm computes the positional notation for each key, according to some chosen radix; then, the part of the key used for the ith pass of the algorithm is the ith digit in the positional notation for the full key, starting from the least significant digit and progressing to the most significant. For this algorithm to work correctly, the sorting algorithm used in each pass over the data must be stable: items with equal digits should not change positions with each other. For greatest efficiency, the radix should be chosen to be near the number of data items, n. Additionally, using a power of two near n as the radix allows the keys for each pass to be computed quickly using only fast binary shift and mask operations. With these choices, and with pigeonhole sort or counting sort as the base algorithm, the radix sorting algorithm can sort n data items having keys in the range from 0 to K − 1 in time O(n logₙ K).[14]
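
Continuing the sketch above, the following least-significant-digit radix sort uses a power-of-two radix so that each digit can be extracted with a shift and a mask; it reuses counting_sort from the previous example, and the parameter names are ours:

    def radix_sort(data, key_bits, radix_bits):
        # Sort integers of key_bits bits, processing radix_bits bits per pass;
        # choosing radix_bits near log2(n) makes the radix close to n.
        mask = (1 << radix_bits) - 1
        for shift in range(0, key_bits, radix_bits):
            # Each pass must be stable; counting_sort (above) is.
            data = counting_sort(data, mask + 1,
                                 key=lambda x: (x >> shift) & mask)
        return data

For example, radix_sort(keys, key_bits=32, radix_bits=16) sorts 32-bit keys in two counting sort passes of 65536 buckets each.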

Theoretical algorithms

Many integer sorting algorithms have been developed whose theoretical analysis shows them to behave better than comparison sorting, pigeonhole sorting, or radix sorting for large enough combinations of the parameters defining the number of items to be sorted, range of keys, and machine word size. Which algorithm has the best performance depends on the values of these parameters. However, despite their theoretical advantages, these algorithms are not an improvement for the typical ranges of these parameters that arise in practical sorting problems.[9]

Algorithms for small keys

A Van Emde Boas tree may be used as a priority queue to sort a set of n keys, each in the range from 0 to K − 1, in time O(n log log K). This is a theoretical improvement over radix sorting when K is sufficiently large. However, in order to use a Van Emde Boas tree, one either needs a directly addressable memory of K words, or one needs to simulate it using a hash table, reducing the space to linear but making the algorithm randomized. Another priority queue with similar performance (including the need for randomization in the form of hash tables) is the Y-fast trie of Willard (1983).

A more sophisticated technique with a similar flavor and with better theoretical performance was developed by Kirkpatrick & Reisch (1984). They observed that each pass of radix sort can be interpreted as a range reduction technique that, in linear time, reduces the maximum key size by a factor of n; instead, their technique reduces the key size to the square root of its previous value (halving the number of bits needed to represent a key), again in linear time. As in radix sort, they interpret the keys as two-digit base-b numbers for a base b that is approximately √K. They then group the items to be sorted into buckets according to their high digits, in linear time, using either a large but uninitialized direct addressed memory or a hash table. Each bucket has a representative, the item in the bucket with the largest key; they then sort the list of items using as keys the high digits for the representatives and the low digits for the non-representatives. By grouping the items from this list into buckets again, each bucket may be placed into sorted order, and by extracting the representatives from the sorted list the buckets may be concatenated together into sorted order. Thus, in linear time, the sorting problem is reduced to another recursive sorting problem in which the keys are much smaller, the square root of their previous magnitude. Repeating this range reduction until the keys are small enough to bucket sort leads to an algorithm with running time O(n log logₙ K).
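
The following Python sketch illustrates one level of this range reduction. It is didactic rather than faithful to the original machine model: it uses Python dictionaries for the hash-table buckets, carries payloads and representative tags through the single combined recursive call, and falls back to counting sort once keys fit in 8 bits; all names are ours.

    from collections import defaultdict

    def kr_sort(pairs, bits):
        # Sort (key, payload) pairs whose keys lie in [0, 2**bits),
        # halving the number of key bits at each level of recursion.
        if bits <= 8:                              # small keys: counting sort
            buckets = [[] for _ in range(1 << bits)]
            for k, v in pairs:
                buckets[k].append((k, v))
            return [p for b in buckets for p in b]
        half = (bits + 1) // 2                     # new key width
        mask = (1 << half) - 1
        groups = defaultdict(list)                 # hash table: high digit -> items
        for k, v in pairs:
            groups[k >> half].append((k & mask, v))
        reps, small = {}, []                       # one combined recursive problem
        for hi, items in groups.items():
            rep = max(items, key=lambda t: t[0])   # representative: max low digit
            items.remove(rep)
            reps[hi] = rep
            small.append((hi, ('bucket', hi)))     # high digits for representatives
            small += [(lo, ('item', hi, lo, v)) for lo, v in items]
        order, lows = [], defaultdict(list)
        for _, tag in kr_sort(small, half):        # recurse on half-width keys
            if tag[0] == 'bucket':
                order.append(tag[1])               # bucket order by high digit
            else:
                _, hi, lo, v = tag
                lows[hi].append(((hi << half) | lo, v))
        out = []
        for hi in order:                           # concatenate sorted buckets;
            lo, v = reps[hi]                       # the representative is its
            out += lows[hi] + [((hi << half) | lo, v)]  # bucket's maximum
        return out

Calling kr_sort([(k, None) for k in keys], w) returns the pairs in nondecreasing key order.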

A complicated randomized algorithm of Han & Thorup (2002) in the word RAM model of computation allows these time bounds to be reduced even further, to O(n √(log log K)).

Algorithms for large words

An integer sorting algorithm is said to be non-conservative if it requires a word size w that is significantly larger than log max(n, K).[15] As an extreme instance, if w ≥ K, and all keys are distinct, then the set of keys may be sorted in linear time by representing it as a bitvector, with a 1 bit in position i when i is one of the input keys, and then repeatedly removing the least significant bit.[16]
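
A short Python sketch of this bitvector method, with an arbitrary-precision integer standing in for the single wide machine word (Python's big integers are not unit-cost, so the linear time bound of the word RAM model does not literally carry over):

    def bitvector_sort(keys):
        # Assumes all keys are distinct and fit within the "word" width.
        word = 0
        for k in keys:
            word |= 1 << k                 # set bit i for each input key i
        out = []
        while word:
            low = word & -word             # isolate the least significant 1 bit
            out.append(low.bit_length() - 1)
            word ^= low                    # clear that bit and continue
        return out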

The non-conservative packed sorting algorithm of Albers & Hagerup (1997) uses a subroutine, based on Ken Batcher's bitonic sorting network, for merging two sorted sequences of keys that are each short enough to be packed into a single machine word. The input to the packed sorting algorithm, a sequence of items stored one per word, is transformed into a packed form, a sequence of words each holding multiple items in sorted order, by using this subroutine repeatedly to double the number of items packed into each word. Once the sequence is in packed form, Albers and Hagerup use a form of merge sort to sort it; when two sequences are being merged to form a single longer sequence, the same bitonic sorting subroutine can be used to repeatedly extract packed words consisting of the smallest remaining elements of the two sequences. This algorithm gains enough of a speedup from its packed representation to sort its input in linear time whenever it is possible for a single word to contain Ω(log n log log n) keys; that is, when log K log n log log n ≤ cw for some constant c > 0.
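
Reproducing the full packed bitonic merge is beyond a short example, but the following sketch shows the flavor of the word-level parallelism such algorithms exploit: with one spare test bit per packed field, a single subtraction compares all of the keys in one word against all of the keys in another. The field layout and names here are our own illustrative choices, not Albers and Hagerup's:

    B, FIELDS = 7, 8                       # 7-bit keys, 8 packed per word
    SLOT = B + 1                           # each key sits in an 8-bit slot
    TEST = sum(1 << (SLOT * i + B) for i in range(FIELDS))  # test-bit mask

    def parallel_ge(x, y):
        # One subtraction compares all FIELDS fields of x and y at once:
        # since each slot holds 2**B + x_i - y_i, which stays in (0, 2**(B+1)),
        # no borrow crosses slot boundaries, and the test bit of slot i
        # survives exactly when field i of x is >= field i of y.
        return ((x | TEST) - y) & TEST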

Algorithms for few items

Pigeonhole sort, counting sort, radix sort, and Van Emde Boas tree sorting all work best when the key size is small; for large enough keys, they become slower than comparison sorting algorithms. However, when the key size or the word size is very large relative to the number of items (or equivalently when the number of items is small), it may again become possible to sort quickly, using different algorithms that take advantage of the parallelism inherent in the ability to perform arithmetic operations on large words.

An early result in this direction was provided by Ajtai, Fredman & Komlós (1984) using the cell-probe model of computation (an artificial model in which the complexity of an algorithm is measured only by the number of memory accesses it performs). Building on their work, Fredman & Willard (1994) described two data structures, the Q-heap and the atomic heap, that are implementable on a random access machine. The Q-heap is a bit-parallel version of a binary trie, and allows both priority queue operations and successor and predecessor queries to be performed in constant time for sets of O((log N)^(1/4)) items, where N ≤ 2^w is the size of the precomputed tables needed to implement the data structure. The atomic heap is a B-tree in which each tree node is represented as a Q-heap; it allows constant time priority queue operations (and therefore sorting) for sets of (log N)^O(1) items.

Andersson et al. (1998) provide a randomized algorithm called signature sort that allows for linear time sorting of sets of up to 2^O((log w)^(1/2 − ε)) items at a time, for any constant ε > 0. As in the algorithm of Kirkpatrick and Reisch, they perform range reduction using a representation of the keys as numbers in base b for a careful choice of b. Their range reduction algorithm replaces each digit by a signature, which is a hashed value with O(log n) bits such that different digit values have different signatures. If n is sufficiently small, the numbers formed by this replacement process will be significantly smaller than the original keys, allowing the non-conservative packed sorting algorithm of Albers & Hagerup (1997) to sort the replaced numbers in linear time. From the sorted list of replaced numbers, it is possible to form a compressed trie of the keys in linear time, and the children of each node in the trie may be sorted recursively using only keys of size b, after which a tree traversal produces the sorted order of the items.

Trans-dichotomous algorithms

Fredman & Willard (1993) introduced the transdichotomous model of analysis for integer sorting algorithms, in which nothing is assumed about the range of the integer keys and one must bound the algorithm's performance by a function of the number of data values alone. Alternatively, in this model, the running time for an algorithm on a set of n items is assumed to be the worst case running time for any possible combination of values of K and w. The first algorithm of this type was Fredman and Willard's fusion tree sorting algorithm, which runs in time O(n log n / log log n); this is an improvement over comparison sorting for any choice of K and w. An alternative version of their algorithm that includes the use of random numbers and integer division operations improves this to O(n √(log n)).

Since their work, even better algorithms have been developed. For instance, by repeatedly applying the Kirkpatrick–Reisch range reduction technique until the keys are small enough to apply the Albers–Hagerup packed sorting algorithm, it is possible to sort in time O(n log log n); however, the range reduction part of this algorithm requires either a large memory (proportional to K) or randomization in the form of hash tables.[17]

Han & Thorup (2002) showed how to sort in randomized time O(n √(log log n)). Their technique involves using ideas related to signature sorting to partition the data into many small sublists, of a size small enough that signature sorting can sort each of them efficiently. It is also possible to use similar ideas to sort integers deterministically in time O(n log log n) and linear space.[18] Using only simple arithmetic operations (no multiplications or table lookups) it is possible to sort in randomized expected time O(n log log n)[19] or deterministically in time O(n (log log n)^(1 + ε)) for any constant ε > 0.[1]

References

Footnotes
  1. ^ a b Han & Thorup (2002).
  2. ^ Fredman & Willard (1993).
  3. ^ The question of whether integer multiplication or table lookup operations should be permitted goes back to Fredman & Willard (1993); see also Andersson, Miltersen & Thorup (1999).
  4. ^ Reif (1985); comment in Cole & Vishkin (1986); Hagerup (1987); Bhatt et al. (1991); Albers & Hagerup (1997).
  5. ^ Aggarwal & Vitter (1988).
  6. ^ Farhadi et al. (2020).
  7. ^ a b Chowdhury (2008).
  8. ^ McIlroy, Bostic & McIlroy (1993); Andersson & Nilsson (1998).
  9. ^ a b Rahman & Raman (1998).
  10. ^ Pedersen (1999).
  11. ^ DARPA HPCS Discrete Mathematics Benchmarks, archived 2016-03-10 at the Wayback Machine; Duncan A. Buell, University of South Carolina, retrieved 2011-04-20.
  12. ^ Goodrich & Tamassia (2002). Although Cormen et al. (2001) also describe a version of this sorting algorithm, the version they describe is adapted to inputs where the keys are real numbers with a known distribution, rather than to integer sorting.
  13. ^ Cormen et al. (2001), 8.2 Counting Sort, pp. 168–169.
  14. ^ Comrie (1929–1930); Cormen et al. (2001), 8.3 Radix Sort, pp. 170–173.
  15. ^ Kirkpatrick & Reisch (1984); Albers & Hagerup (1997).
  16. ^ Kirkpatrick & Reisch (1984).
  17. ^ Andersson et al. (1998).
  18. ^ Han (2004).
  19. ^ Thorup (2002).
Secondary sources
  • Chowdhury, Rezaul A. (2008), "Equivalence between priority queues and sorting", in Kao, Ming-Yang (ed.), Encyclopedia of Algorithms, Springer, pp. 278–281, ISBN 9780387307701.
  • Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001), Introduction to Algorithms (2nd ed.), MIT Press and McGraw-Hill, ISBN 0-262-03293-7.
  • Goodrich, Michael T.; Tamassia, Roberto (2002), "4.5 Bucket-Sort and Radix-Sort", Algorithm Design: Foundations, Analysis, and Internet Examples, John Wiley & Sons, pp. 241–243.
Primary sources