Bubble sort
Bubble sort is a simple sorting algorithm. The algorithm
starts at the beginning of the data set. It compares the first two
elements, and if the first is greater than the second, it swaps them. It
continues doing this for each pair of adjacent elements to the end of
the data set. It then starts again with the first two elements,
repeating until no swaps have occurred on the last pass. This
algorithm's average and worst-case performance is O(n²),
so it is rarely used to sort large, unordered data sets. Bubble sort
can be used to sort a small number of items (where its asymptotic
inefficiency is not a high penalty). Bubble sort can also be used
efficiently on a list of any length that is nearly sorted (that is, the
elements are not significantly out of place). For example, if any number
of elements are out of place by only one position (e.g. 0123546789 and
1032547698), bubble sort's exchanges will put them in order on the first
pass, and the second pass will find all elements in order, so the sort will
take only 2n time.
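For illustration, a minimal sketch in Python of the procedure just described (the function name bubble_sort and the early-exit flag are our own choices, not part of any standard):

def bubble_sort(items):
    """Sort a list in place by repeatedly swapping adjacent out-of-order pairs."""
    n = len(items)
    while True:
        swapped = False
        for i in range(n - 1):
            if items[i] > items[i + 1]:  # compare each adjacent pair
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
        n -= 1  # the largest remaining element has bubbled to the end
        if not swapped:  # no swaps on this pass: the list is sorted
            break

data = [0, 1, 2, 3, 5, 4, 6, 7, 8, 9]  # every element at most one place off
bubble_sort(data)  # one pass fixes the order, a second pass confirms it
print(data)

On this nearly sorted input the loop stops after two passes, matching the 2n bound mentioned above.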
Selection sort
Main article: Selection sort
Selection sort is an in-place comparison sort. It has O(n²) complexity, making it inefficient on large lists, and generally performs worse than the similar insertion sort.
Selection sort is noted for its simplicity, and also has performance
advantages over more complicated algorithms in certain situations.
The algorithm finds the minimum value, swaps it with the value in the
first position, and repeats these steps for the remainder of the list.
It does no more than n swaps, and thus is useful where swapping is very expensive.
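A minimal sketch in Python (names are illustrative); note that at most one swap is performed per position, which is the property that makes selection sort attractive when swaps are expensive:

def selection_sort(items):
    """Sort a list in place using at most len(items) - 1 swaps."""
    n = len(items)
    for i in range(n - 1):
        smallest = i
        for j in range(i + 1, n):  # scan the unsorted remainder for the minimum
            if items[j] < items[smallest]:
                smallest = j
        if smallest != i:  # at most one swap per position
            items[i], items[smallest] = items[smallest], items[i]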
Insertion sort
Main article: Insertion sort
Insertion sort is a simple sorting algorithm that is
relatively efficient for small lists and mostly sorted lists, and often
is used as part of more sophisticated algorithms. It works by taking
elements from the list one by one and inserting them in their correct
position into a new sorted list. In arrays, the new list and the
remaining elements can share the array's space, but insertion is
expensive, requiring shifting all following elements over by one. Shell sort (see below) is a variant of insertion sort that is more efficient for larger lists.
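A minimal sketch in Python of the array variant described above, where the sorted prefix and the remaining elements share the array and insertion shifts later elements right (names are illustrative):

def insertion_sort(items):
    """Sort a list in place; elements left of i always form a sorted prefix."""
    for i in range(1, len(items)):
        value = items[i]
        j = i - 1
        while j >= 0 and items[j] > value:  # shift larger elements one slot right
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = value  # drop the element into its correct slot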
Shell sort
Main article: Shell sort
Shell sort was invented by Donald Shell
in 1959. It improves upon bubble sort and insertion sort by moving
out-of-order elements more than one position at a time. One implementation
can be described as arranging the data sequence in a two-dimensional
array and then sorting the columns of the array using insertion sort.
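A sketch in Python, assuming Shell's original gap sequence n/2, n/4, ..., 1 (many better sequences are known); each pass is a gapped insertion sort over the interleaved "columns" described above:

def shell_sort(items):
    """Gapped insertion sort using the halving gap sequence."""
    gap = len(items) // 2
    while gap > 0:
        for i in range(gap, len(items)):
            value = items[i]
            j = i
            while j >= gap and items[j - gap] > value:
                items[j] = items[j - gap]  # moves elements gap positions at a time
                j -= gap
            items[j] = value
        gap //= 2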
Comb sort
Main article: Comb sort
Comb sort is a relatively simple sorting algorithm originally designed by Wlodzimierz Dobosiewicz in 1980. It was later rediscovered and popularized by Stephen Lacey and Richard Box in a Byte Magazine article published in April 1991. Comb sort improves on bubble sort and rivals algorithms like quicksort. The basic idea is to eliminate turtles, or small values near the end of the list, since in a bubble sort these slow the sorting down tremendously. (Rabbits, large values around the beginning of the list, do not pose a problem in bubble sort.)
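A sketch in Python; the shrink factor of 1.3 is the commonly quoted tuning value, not something fixed by the algorithm:

def comb_sort(items):
    """Bubble sort with a shrinking gap, so turtles move toward the front quickly."""
    gap = len(items)
    shrink = 1.3  # assumed tuning constant
    is_sorted = False
    while not is_sorted:
        gap = max(1, int(gap / shrink))
        is_sorted = gap == 1  # only a gap-1 pass with no swaps proves order
        for i in range(len(items) - gap):
            if items[i] > items[i + gap]:
                items[i], items[i + gap] = items[i + gap], items[i]
                is_sorted = False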
Merge sort
Main article: Merge sort
Merge sort takes advantage of the ease of merging already
sorted lists into a new sorted list. It starts by comparing every two
elements (i.e., 1 with 2, then 3 with 4...) and swapping them if the
first should come after the second. It then merges each of the resulting
lists of two into lists of four, then merges those lists of four, and
so on, until at last two lists are merged into the final sorted list. Of
the algorithms described here, this is the first that scales well to
very large lists, because its worst-case running time is O(n log n).
Merge sort has seen a relatively recent surge in popularity for
practical implementations, being used for the standard sort routine in
the programming languages Perl,[12] Python (as timsort[13]), and Java (also uses timsort as of JDK7[14]), among others. Merge sort has been used in Java at least since 2000 in JDK1.3.[15][16]
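A sketch in Python of a top-down recursive variant (the paragraph above describes the equivalent bottom-up scheme); using <= in the merge keeps equal elements in their original order, making the sort stable:

def merge_sort(items):
    """Return a new sorted list: split in half, sort each half, merge."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):  # take the smaller head element
        if left[i] <= right[j]:  # <= preserves stability
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])  # one side may have leftovers
    merged.extend(right[j:])
    return merged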
Heapsort
Main article: Heapsort
Heapsort is a much more efficient version of selection sort.
It also works by determining the largest (or smallest) element of the
list, placing that at the end (or beginning) of the list, then
continuing with the rest of the list, but accomplishes this task
efficiently by using a data structure called a heap, a special type of binary tree.
Once the data list has been made into a heap, the root node is
guaranteed to be the largest (or smallest) element. When it is removed
and placed at the end of the list, the heap is rearranged so the largest
element remaining moves to the root. Using the heap, finding the next
largest element takes O(log n) time, instead of O(n) for a linear scan as in simple selection sort. This allows heapsort to run in O(n log n) time, and this is also its worst-case complexity.
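A sketch in Python with an explicit array-based max-heap (a library heap such as Python's heapq module could be used instead):

def heapsort(items):
    """Sort in place: build a max-heap, then repeatedly move the root to the end."""
    def sift_down(start, end):
        # Restore the heap property for the subtree rooted at start.
        root = start
        while 2 * root + 1 <= end:
            child = 2 * root + 1
            if child + 1 <= end and items[child] < items[child + 1]:
                child += 1  # pick the larger of the two children
            if items[root] < items[child]:
                items[root], items[child] = items[child], items[root]
                root = child
            else:
                return

    n = len(items)
    for start in range(n // 2 - 1, -1, -1):  # heapify the whole array
        sift_down(start, n - 1)
    for end in range(n - 1, 0, -1):  # root is the maximum; swap it into place
        items[0], items[end] = items[end], items[0]
        sift_down(0, end - 1)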
Quicksort
Main article: Quicksort
Quicksort is a divide-and-conquer algorithm that relies on a partition operation: to partition an array, an element called a pivot
is selected. All elements smaller than the pivot are moved before it
and all greater elements are moved after it. This can be done
efficiently in linear time and in-place.
The lesser and greater sublists are then recursively sorted. Efficient
implementations of quicksort (with in-place partitioning) are typically
unstable sorts and somewhat complex, but are among the fastest sorting
algorithms in practice. Together with its modest O(log n) space
usage, quicksort is one of the most popular sorting algorithms and is
available in many standard programming libraries. The most complex issue
in quicksort is choosing a good pivot element; consistently poor
choices of pivots can result in drastically slower O(n²) performance, while choosing the median as the pivot at each step yields O(n log n). Finding the median, however, is an O(n) operation on unsorted lists and therefore exacts its own penalty with sorting.
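A sketch in Python using Lomuto partitioning with a naive last-element pivot; production implementations pick pivots more carefully and bound their stack depth, which this sketch does not:

def quicksort(items, lo=0, hi=None):
    """In-place quicksort; pivot choice here is deliberately simplistic."""
    if hi is None:
        hi = len(items) - 1
    if lo >= hi:
        return
    pivot = items[hi]  # naive choice; already-sorted input degrades to O(n²)
    i = lo
    for j in range(lo, hi):  # move elements smaller than the pivot to the left
        if items[j] < pivot:
            items[i], items[j] = items[j], items[i]
            i += 1
    items[i], items[hi] = items[hi], items[i]  # pivot lands in its final position
    quicksort(items, lo, i - 1)  # recursively sort the lesser sublist
    quicksort(items, i + 1, hi)  # and the greater sublist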
Counting sort
Main article: Counting sort
Counting sort is applicable when each input is known to belong to a particular set, S, of possibilities. The algorithm runs in O(|S| + n) time and O(|S|) memory where n is the length of the input. It works by creating an integer array of size |S| and using the ith bin to count the occurrences of the ith member of S
in the input. Each input is then counted by incrementing the value of
its corresponding bin. Afterward, the counting array is looped through
to arrange all of the inputs in order. This sorting algorithm often
cannot be used because S needs to be reasonably small for it to be
efficient, but the algorithm is extremely fast and demonstrates great
asymptotic behavior as n increases. It can also be modified to provide stable behavior.
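A sketch in Python, taking S to be the integers 0 to size - 1; the prefix-sum pass is what makes the placement stable:

def counting_sort(items, size):
    """Stable counting sort for integers in range(size)."""
    counts = [0] * size
    for x in items:  # count occurrences of each key: O(n)
        counts[x] += 1
    total = 0
    for k in range(size):  # prefix sums: counts[k] becomes the first output slot for key k
        counts[k], total = total, total + counts[k]
    output = [None] * len(items)
    for x in items:  # equal keys are placed in order of appearance: stable
        output[counts[x]] = x
        counts[x] += 1
    return output

print(counting_sort([3, 0, 2, 3, 0, 1], size=4))  # [0, 0, 1, 2, 3, 3]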
Bucket sort
Main article: Bucket sort
Bucket sort is a divide-and-conquer sorting algorithm that generalizes counting sort
by partitioning an array into a finite number of buckets. Each bucket
is then sorted individually, either using a different sorting algorithm,
or by recursively applying the bucket sorting algorithm. A variation of
this method called the single buffered count sort is faster than
quicksort.[citation needed]
Because bucket sort must use a limited number of buckets,
it is best suited to data sets of limited scope. Bucket
sort would be unsuitable for data that have a lot of variation, such as
social security numbers.
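A sketch in Python, assuming keys uniformly distributed in [0, 1) so that multiplying by the bucket count yields a bucket index; each bucket is sorted with a different algorithm (here Python's built-in sort):

def bucket_sort(items, n_buckets=10):
    """Distribute values in [0, 1) into buckets, sort each, then concatenate."""
    buckets = [[] for _ in range(n_buckets)]
    for x in items:
        buckets[int(x * n_buckets)].append(x)  # assumes keys in [0, 1)
    result = []
    for bucket in buckets:
        result.extend(sorted(bucket))  # any per-bucket sort works
    return result

print(bucket_sort([0.42, 0.32, 0.33, 0.52, 0.37, 0.47, 0.51]))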
Radix sort
Main article: Radix sort
Radix sort is an algorithm that sorts numbers by processing individual digits. n numbers consisting of k digits each are sorted in O(n · k) time. Radix sort can process digits of each number either starting from the least significant digit (LSD) or starting from the most significant digit
(MSD). The LSD algorithm first sorts the list by the least significant
digit while preserving their relative order using a stable sort. Then it
sorts them by the next digit, and so on from the least significant to
the most significant, ending up with a sorted list. While the LSD radix
sort requires the use of a stable sort, the MSD radix sort algorithm
does not (unless stable sorting is desired). In-place MSD radix sort is
not stable. It is common for the counting sort algorithm to be used internally by the radix sort. A hybrid approach, such as using insertion sort for small bins, improves the performance of radix sort significantly.
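A sketch in Python of the LSD variant for non-negative integers, with one stable bucketing pass per digit (base 10 here; the base is a tuning choice):

def radix_sort(items, base=10):
    """LSD radix sort; each pass is stable, so earlier digits stay ordered."""
    if not items:
        return items
    place = 1
    while place <= max(items):  # one pass per digit, least significant first
        buckets = [[] for _ in range(base)]
        for x in items:
            buckets[(x // place) % base].append(x)  # appending preserves arrival order
        items = [x for bucket in buckets for x in bucket]
        place *= base
    return items

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))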
Distribution sort
Distribution sort refers to any sorting algorithm where data
are distributed from their input to multiple intermediate structures
which are then gathered and placed on the output. For example, both bucket sort and flashsort are distribution based sorting algorithms.
Timsort
Main article: Timsort
Timsort finds runs in the data, creates runs with insertion
sort if necessary, and then uses merge sort to create the final sorted
list. It has the same complexity (O(n log n)) in the average and worst
cases, but with pre-sorted data it goes down to O(n).
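Real timsort detects naturally occurring runs, merges them under balance invariants, and uses galloping during merges; the following drastically simplified sketch in Python shows only the overall shape of insertion-sorted runs followed by pairwise merging:

def simplified_timsort(items, min_run=32):
    """Toy skeleton only: fixed-size runs via insertion sort, then merging."""
    n = len(items)
    for start in range(0, n, min_run):  # insertion-sort each small run
        end = min(start + min_run, n)
        for i in range(start + 1, end):
            value, j = items[i], i - 1
            while j >= start and items[j] > value:
                items[j + 1] = items[j]
                j -= 1
            items[j + 1] = value
    width = min_run
    while width < n:  # merge neighbouring runs, doubling the width each round
        for lo in range(0, n, 2 * width):
            mid, hi = min(lo + width, n), min(lo + 2 * width, n)
            merged, i, j = [], lo, mid
            while i < mid and j < hi:
                if items[i] <= items[j]:
                    merged.append(items[i])
                    i += 1
                else:
                    merged.append(items[j])
                    j += 1
            merged.extend(items[i:mid])
            merged.extend(items[j:hi])
            items[lo:hi] = merged
        width *= 2
    return items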
Memory usage patterns and index sorting
When the size of the array to be sorted approaches or exceeds the
available primary memory, so that (much slower) disk or swap space must
be employed, the memory usage pattern of a sorting algorithm becomes
important, and an algorithm that might have been fairly efficient when
the array fit easily in RAM may become impractical. In this scenario,
the total number of comparisons becomes (relatively) less important, and
the number of times sections of memory must be copied or swapped to and
from the disk can dominate the performance characteristics of an
algorithm. Thus, the number of passes and the localization of
comparisons can be more important than the raw number of comparisons,
since comparisons of nearby elements to one another happen at system bus speed (or, with caching, even at CPU speed), which, compared to disk speed, is virtually instantaneous.
For example, the popular recursive quicksort
algorithm provides quite reasonable performance with adequate RAM, but
due to the recursive way that it copies portions of the array it becomes
much less practical when the array does not fit in RAM, because it may
cause a number of slow copy or move operations to and from disk. In that
scenario, another algorithm may be preferable even if it requires more
total comparisons.
One way to work around this problem, which works well when complex records (such as in a relational database)
are being sorted by a relatively small key field, is to create an index
into the array and then sort the index, rather than the entire array.
(A sorted version of the entire array can then be produced with one
pass, reading from the index, but often even that is unnecessary, as
having the sorted index is adequate.) Because the index is much smaller
than the entire array, it may fit easily in memory where the entire
array would not, effectively eliminating the disk-swapping problem. This
procedure is sometimes called "tag sort".[17]
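A minimal sketch in Python of such a tag sort; the records and the key field are invented for illustration:

records = [
    {"key": 42, "payload": "large blob A"},
    {"key": 7, "payload": "large blob B"},
    {"key": 19, "payload": "large blob C"},
]
# Sort a small index of positions by key instead of moving the records.
index = sorted(range(len(records)), key=lambda i: records[i]["key"])
print(index)  # [1, 2, 0]: the sorted order, with the records themselves untouched
for i in index:  # one sequential pass yields the records in sorted order
    print(records[i]["key"], records[i]["payload"])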
Another technique for overcoming the memory-size problem is to
combine two algorithms in a way that takes advantages of the strength of
each to improve overall performance. For instance, the array might be
subdivided into chunks of a size that will fit easily in RAM (say, a few
thousand elements), the chunks sorted using an efficient algorithm
(such as quicksort or heapsort), and the results merged as per mergesort.
This is less efficient than just doing mergesort in the first place,
but it requires less physical RAM (to be practical) than a full
quicksort on the whole array.
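A hedged sketch in Python of this chunk-then-merge idea; in-memory lists stand in for the on-disk runs a real external sort would write:

import heapq

def chunked_sort(items, chunk_size=4096):
    """Sort fixed-size chunks independently, then k-way merge the sorted runs."""
    runs = [sorted(items[i:i + chunk_size])  # any efficient in-RAM sort works here
            for i in range(0, len(items), chunk_size)]
    return list(heapq.merge(*runs))  # merge the runs as per mergesort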
Techniques can also be combined. For sorting very large sets of data
that vastly exceed system memory, even the index may need to be sorted
using an algorithm or combination of algorithms designed to perform
reasonably with virtual memory, i.e., to reduce the amount of swapping required.
Other sorting algorithms also exist, e.g. the New friends sort algorithm and the Relative split and concatenate sort.