-= Ten Different Definitions of Data Structure =-
1. A means of storing a collection of data. Computer science is in part the study of methods for effectively using a computer to solve problems, which begins with determining exactly the problem to be solved. This process entails (1) gaining an understanding of the problem; (2) translating vague descriptions, goals, contradictory requests, and often unstated desires into a precisely formulated conceptual solution; and (3) implementing the solution with a computer program. This solution typically consists of two parts: algorithms and data structures.
2. A way in which data are stored for efficient search and retrieval. The simplest data structure is the one-dimensional (linear) array, in which stored elements are numbered with consecutive integers and contents are accessed by these numbers. Data items stored non-consecutively in memory may be linked by pointers (memory addresses stored with items to indicate where the "next" item or items in the structure are located). Many algorithms have been developed for sorting data efficiently; these apply to structures residing in main memory as well as to structures that constitute information systems and databases.
3. A data structure in computer science is a way of storing data in a computer so that it can be used efficiently. It is an organization of mathematical and logical concepts of data. Often a carefully chosen data structure will allow the most efficient algorithm to be used. The choice of the data structure often begins from the choice of an abstract data type. A well-designed data structure allows a variety of critical operations to be performed, using as few resources, both execution time and memory space, as possible. Data structures are implemented by a programming language as data types and the references and operations they provide.
4. It is the interrelationship among data elements that determines how data are recorded, manipulated, stored, and presented by a database.
5. In programming, the term data structure refers to a scheme for organizing related pieces of information. The basic types of data structures include:
files
lists
arrays
records
trees
tables
Each of these basic structures has many variations and allows different operations to be performed on the data.
6. A data structure is a specialized format for organizing and storing data. General data structure types include the array, the file, the record, the table, the tree, and so on. Any data structure is designed to organize data to suit a specific purpose so that it can be accessed and worked with in appropriate ways. In computer programming, a data structure may be selected or designed to store data for the purpose of working on it with various algorithms.
7. Any method of organising a collection of data to allow it to be manipulated effectively. It may include metadata describing the properties of the structure.
Example data structures are: array, dictionary, graph, hash, heap, linked list, matrix, object, queue, ring, stack, tree, vector.
8. An organization of information, usually in memory, for better algorithm efficiency, such as queue, stack, linked list, heap, dictionary, and tree, or conceptual unity, such as the name and address of a person. It may include redundant information, such as length of the list or number of nodes in a subtree.
9. A data structure is a way of storing information in a computer so that it can be used efficiently.
Efficiency in this context refers to the ability to find and manipulate data quickly and with the minimum consumption of computer and network resources, mainly CPU (central processing unit) time, memory space and bandwidth.
Numerous types of data structures have been developed; some are very general and widely used, while others are highly specialized for certain types of tasks. Careful selection of data structures can allow the use of the most efficient algorithms for particular tasks and thereby optimize the performance of programs. An algorithm is a precise, unambiguous set of rules that specifies how to solve some problem or perform some task.
10. Data structures can be classified in several ways, including whether they are linear or graph-based and whether they are static or dynamic (i.e., whether the shape or size of the structure changes over time). Linear data structures include lists and associative arrays. List data structures can be divided into arrays, linked lists and VLists. Graph data structures include trees, adjacency lists, disjoint-sets, graph-structured stacks and scene graphs. Other data structures include frames, unions, tagged unions and tables.
-= Other Types of Data Structure =-
[[..Union (computer science)..]]
In computer science, a union is a data structure that stores one of several types of data at a single location. There are only two safe ways of accessing a union object. One is to always read the field of a union most recently assigned; tagged unions enforce this restriction. The other is to only access functionality common to all types in the union. For example, if the fields are all subtypes of a common supertype, then it is always legal to perform operations on the union object that one can perform on the supertype.
Note: The remainder of this article refers strictly to primitive untagged unions, as opposed to tagged unions.
Because of the limitations of their use, untagged unions are generally only provided in untyped languages or in an unsafe way (as in C). They have the advantage over simple tagged unions of not requiring space to store the tag.
The name "union" stems from the type's formal definition. If one sees a type as the set of all values that type can take on, a union type is simply the mathematical union of its constituting types, since it can take on any value any of its fields can. Also, because a mathematical union discards duplicates, if more than one field of the union can take on a single common value, it is impossible to tell from the value alone which field was last written.
Unions in various programming languages
C/C++
In C and C++, untagged unions are expressed nearly exactly like structures (structs), except that each data member begins at the same location in memory. The data members, as in structures, need not be primitive values, and in fact may be structures or even other unions. However, C++ does not allow a data member to be any type that has "a non-trivial constructor, a non-trivial copy constructor, a non-trivial destructor, or a non-trivial copy assignment operator." In particular, it is impossible to have the standard C++ string as a member of a union. The union object occupies as much space as its largest member, whereas a structure requires space equal to at least the sum of the sizes of its members. This gain in space efficiency, while valuable in certain circumstances, comes at a great cost of safety: the program logic must ensure that it only reads the field most recently written along all possible execution paths.
The primary usefulness of a union is to conserve space, since it provides a way of letting many different types be stored in the same space. Unions also provide crude polymorphism. However, there is no checking of types, so it is up to the programmer to be sure that the proper fields are accessed in different contexts. The relevant field of a union variable is typically determined by the state of other variables, possibly in an enclosing struct.
One common C programming idiom uses unions to perform what C++ calls a reinterpret_cast, by assigning to one field of a union and reading from another, as is done in code which depends on the raw representation of the values. This is not, however, a safe use of unions in general.
Note that the safer tagged unions can be constructed from untagged unions (see tagged union). The safe C dialect Cyclone encourages the preference of tagged unions to untagged.
The C standard describes unions this way: "Structure and union specifiers have the same form. [ . . . ] The size of a union is sufficient to contain the largest of its members. The value of at most one of the members can be stored in a union object at any time. A pointer to a union object, suitably converted, points to each of its members (or if a member is a bit-field, then to the unit in which it resides), and vice versa."
[[..Heap (data structure)..]]
Note: this section is about heap data structures, not "the heap" as the large pool of unused memory used for dynamic memory allocation.
[Figure: example of a full binary max heap]
In computer science, a heap is a specialized tree-based data structure that satisfies the heap property: if B is a child node of A, then key(A) ≥ key(B). This implies that an element with the greatest key is always in the root node, and so such a heap is sometimes called a max heap. (Alternatively, if the comparison is reversed, the smallest element is always in the root node, which results in a min heap.) This is why heaps are used to implement priority queues. The efficiency of heap operations is crucial in several graph algorithms.
The operations commonly performed with a heap are:
• delete-max or delete-min: removing the root node of a max- or min-heap, respectively
• increase-key or decrease-key: updating a key within a max- or min-heap, respectively
• insert: adding a new key to the heap
• merge: joining two heaps to form a valid new heap containing all the elements of both.
Heaps are used in the sorting algorithm heapsort.
Heap applications
Heaps are a favorite data structure for many applications.
• Heapsort: one of the best sorting methods, being in-place and having no quadratic worst case.
• Selection algorithms: finding the minimum, the maximum, both, the median, or even an arbitrary k-th element can be done efficiently with heaps.
• Graph algorithms: by using heaps as internal traversal data structures, the running time can be reduced by a polynomial factor. Examples of such problems are Prim's minimum spanning tree algorithm and Dijkstra's shortest-path algorithm.
Interestingly, full and almost full binary heaps may be represented using an array alone. The first (or last) element will contain the root. The next two elements of the array contain its children. The next four contain the four children of the two child nodes, etc. Thus the children of the node at position n would be at positions 2n and 2n+1 in a one-based array, or 2n+1 and 2n+2 in a zero-based array. Balancing a heap is done by swapping elements which are out of order. As we can build a heap from an array without requiring extra memory (for the nodes, for example), heapsort can be used to sort an array in-place.
One more advantage of heaps over trees in some applications is that a heap can be constructed from n elements in linear time, using Floyd's bottom-up algorithm.
Heap implementations
• The C++ Standard Template Library provides the make_heap, push_heap and pop_heap algorithms for binary heaps, which operate on arbitrary random access iterators. They treat the iterator range as an array, and use the array-to-heap conversion detailed above.
[[..Octree (data structure)..]]
An octree is a tree data structure in which each internal node has up to eight children. Octrees are most often used to partition a three-dimensional space by recursively subdividing it into eight octants. Octrees are the three-dimensional analog of quadtrees. The name is formed from oct + tree, and normally written "octree", not "octtree".
Octrees for spatial representation
Each node in an octree subdivides the space it represents into eight octants. In a point region (PR) octree, the node stores an explicit 3-dimensional point, which is the "center" of the subdivision for that node; the point defines one of the corners for each of the eight children. In an MX octree, the subdivision point is implicitly the center of the space the node represents. The root node of a PR octree can represent infinite space; the root node of an MX octree must represent a finite bounded space so that the implicit centers are well-defined. Octrees are never considered kD-trees, as kD-trees split along a dimension and octrees split around a point. kD-trees are also always binary, which is not true of octrees.
Common uses of octrees
• Spatial indexing
• Efficient collision detection in three dimensions
• View frustum culling
• Fast Multipole Method
Application to color quantization
The octree color quantization algorithm, invented by Gervautz and Purgathofer in 1988, encodes image color data as an octree up to nine levels deep. Octrees are used because 2³ = 8 and there are three color components in the RGB system. The node index to branch out from at the top level is determined by a formula that uses the most significant bits of the red, green, and blue color components, e.g. 4r + 2g + b. The next lower level uses the next bit significance, and so on. Less significant bits are sometimes ignored to reduce the tree size.
The algorithm is highly memory efficient because the tree's size can be limited. The bottom level of the octree consists of leaf nodes that accrue color data not represented in the tree; these nodes initially contain single bits. If much more than the desired number of palette colors are entered into the octree, its size can be continually reduced by seeking out a bottom-level node and averaging its bit data up into a leaf node, pruning part of the tree. Once sampling is complete, exploring all routes in the tree down to the leaf nodes, taking note of the bits along the way, will yield approximately the required number of colors.
Sunday, March 1, 2009