Computer Programming Algorithms Directory

Encryption Algorithms
  • Advanced Encryption Standard (AES), Data Encryption Standard (DES), Triple-DES and Skipjack Algorithms - Offers descriptions of the named encryption algorithms.
  • Blowfish - Describes the Blowfish encryption algorithm. Offers source code for a variety of platforms.
  • KremlinEncrypt - Cryptography site provides an overview of cryptography algorithms and links to published descriptions where available.
  • PowerBASIC Crypto Archives - Offers PowerBASIC source code for many algorithms including:
    Hashing - RIPEMD-160, MD5, SHA-1, SHA-256, CRC-16, CRC-32, Adler-32, FNV-32, ELF-32
    Encryption - RSA-64, Diffie-Hellman-Merkle Secure Key Exchange, Rijndael, Serpent, Twofish, CAST-128, CAST-256, Skipjack, TEA, RC4, PC1, GOST, Blowfish, Caesar Substitutional Shift, ROT13
    Encoding - Base64, MIME Base64, UUEncode, yEnc, Neuronal Network, URLEncode, URLDecode
    Compression - LZ78, LZSS, LZW, RLE, Huffman, Supertiny
    Pseudo-Random Number Generation (PRNG) - Mersenne Twister Number Generator, Cryptographic PRNG, MPRNG, MOAPRNG, L'Ecuyer LCG3 Composite PRNG, W32.SQL-Slammer
  • TEA - Tiny Encryption Algorithm - Describes the TEA encryption algorithm with C source code (a small C sketch of the cipher appears after this list).
  • xICE - Has links towards the bottom of the page to the description of the xice encryption algorithm as well as the xice software development kit which contains the algorithm's full source code in C++, ASP, JScript, Ruby, and Visual Basic 6.0.
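
As a companion to the TEA entry above, here is a small, self-contained C sketch of the cipher in its standard 32-round reference form. The function names and the key/block values in main are arbitrary choices for the demo, not material taken from the linked page.

#include <stdint.h>
#include <stdio.h>

/* TEA works on a 64-bit block (two 32-bit words) with a 128-bit key. */
void tea_encrypt(uint32_t v[2], const uint32_t k[4]) {
    uint32_t v0 = v[0], v1 = v[1], sum = 0;
    const uint32_t delta = 0x9E3779B9;               /* key schedule constant */
    for (int i = 0; i < 32; i++) {
        sum += delta;
        v0  += ((v1 << 4) + k[0]) ^ (v1 + sum) ^ ((v1 >> 5) + k[1]);
        v1  += ((v0 << 4) + k[2]) ^ (v0 + sum) ^ ((v0 >> 5) + k[3]);
    }
    v[0] = v0; v[1] = v1;
}

void tea_decrypt(uint32_t v[2], const uint32_t k[4]) {
    uint32_t v0 = v[0], v1 = v[1], sum = 0xC6EF3720; /* delta * 32 */
    const uint32_t delta = 0x9E3779B9;
    for (int i = 0; i < 32; i++) {
        v1  -= ((v0 << 4) + k[2]) ^ (v0 + sum) ^ ((v0 >> 5) + k[3]);
        v0  -= ((v1 << 4) + k[0]) ^ (v1 + sum) ^ ((v1 >> 5) + k[1]);
        sum -= delta;
    }
    v[0] = v0; v[1] = v1;
}

int main(void) {
    uint32_t key[4]   = {0x01234567, 0x89ABCDEF, 0xFEDCBA98, 0x76543210};
    uint32_t block[2] = {0xDEADBEEF, 0x00000001};
    tea_encrypt(block, key);
    printf("ciphertext: %08X %08X\n", (unsigned)block[0], (unsigned)block[1]);
    tea_decrypt(block, key);
    printf("plaintext:  %08X %08X\n", (unsigned)block[0], (unsigned)block[1]);
    return 0;
}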

Genetic Algorithms

  • Artificial Life - Offers executable and source for ant food collection and the travelling salesman problems using genetic algorithms
  • Genetic Ant Algorithm - Source code for a Java applet that implements the Genetic Ant Algorithm based upon the model given in Koza, Genetic Programming, MIT Press
  • Introduction to Genetic Algorithms - Introduces fundamentals, offers Java applet examples
  • Jaga - Offers a free, open source API for implementing genetic algorithms (GA) and genetic programming (GP) applications in Java
  • SPHINcsX - Describes a methodology to perform a generalized zeroth-order two- and three-dimensional shape optimization utilizing a genetic algorithm

GIS (Geographic Information Systems) Algorithms

  • Efficient Triangulation Algorithm Suitable for Terrain Modelling - Describes algorithm and includes links to source code for various languages
  • Prediction of Error and Complexity in GIS Algorithms - Describes algorithms for GIS sensitivity analysis
  • Point in Polygon Algorithm - Describes algorithm

Sorting Algorithms

  • Andrew Kitchen's Sorting Algorithms - Describes parallel sorting algorithms (a sequential sketch of odd-even transposition sort appears after this list):
    Odd-Even Transposition Sort has a worst-case time of O(n), running on n processors. Its absolute speed-up is O(log n), so its efficiency is O((log n)/n)
    Shear Sort has a worst-case time of O(√n log n), running on n processors. Its absolute speed-up is O(√n), so its efficiency is O(1/√n)
  • Ariel Faigon's Library of Sorting Algorithms - C source code for a variety of sorting algorithms including Insertion Sort, Quick Sort, Shell Sort, Gamasort, Heap Sort and Sedgesort (Robert Sedgewick quicksort optimization)
  • Flash Sort - Describes the FlashSort algorithm which sorts n elements in O(n) time
  • Michael Lamont's Sorting Algorithms - Describes common sorting algorithms:
    O(n²) Sorts - bubble, insertion, selection and shell sorts
    O(n log n) Sorts - heap, merge and quick sorts
  • Sequential and Parallel Sorting Algorithms - Describes many sorting algorithms:
    Quicksort
    Heapsort
    Shellsort
    Mergesort
    Sorting Networks
    Bitonic Sort
    Odd-Even Mergesort
    LS3-Sort
    4-way Mergesort
    Rotate Sort
    3n-Sort
    s^2-way Mergesort
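
To make the odd-even transposition figures above concrete, here is a sequential C simulation of the algorithm (an illustrative sketch, not code from the linked pages). In the parallel version each compare-exchange within a phase runs on its own processor, so the n phases give the O(n) time on n processors quoted above.

#include <stdio.h>

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* n alternating phases: even phases compare pairs (0,1), (2,3), ...;
 * odd phases compare pairs (1,2), (3,4), ...  After n phases the array
 * is sorted.  Run sequentially here; in the parallel algorithm each
 * comparison within a phase is handled by a separate processor. */
void odd_even_transposition_sort(int a[], int n) {
    for (int phase = 0; phase < n; phase++) {
        int start = (phase % 2 == 0) ? 0 : 1;
        for (int i = start; i + 1 < n; i += 2)
            if (a[i] > a[i + 1])
                swap(&a[i], &a[i + 1]);
    }
}

int main(void) {
    int a[] = {5, 2, 9, 1, 7, 3};
    int n = (int)(sizeof a / sizeof a[0]);
    odd_even_transposition_sort(a, n);
    for (int i = 0; i < n; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}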

Search Algorithms

  • Exact String Matching Algorithms - Details 35 exact string search algorithms.
  • Finding a Loop in a Singly Linked List - Outlines several methods for identifying loops in a singly linked list (a sketch of one common approach, Floyd's two-pointer technique, appears after this list).
  • Fibonaccian search - Describes an O(log n) search algorithm for sorted arrays that is faster than a binary search for very large arrays.
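
As mentioned in the linked-list entry above, one common loop-detection method is Floyd's two-pointer ("tortoise and hare") technique. The sketch below is an illustration of that idea in C, not code taken from the linked article.

#include <stdio.h>

struct node { int value; struct node *next; };

/* Advance one pointer by one node and another by two.  If the list has a
 * loop, the fast pointer eventually meets the slow one inside the cycle;
 * otherwise it reaches NULL.  O(n) time, O(1) extra space. */
int has_loop(const struct node *head) {
    const struct node *slow = head, *fast = head;
    while (fast != NULL && fast->next != NULL) {
        slow = slow->next;
        fast = fast->next->next;
        if (slow == fast) return 1;   /* pointers met: cycle found */
    }
    return 0;                         /* fast ran off the end: no cycle */
}

int main(void) {
    struct node c = {3, NULL}, b = {2, &c}, a = {1, &b};
    printf("%d\n", has_loop(&a));     /* 0: a -> b -> c -> NULL */
    c.next = &b;                      /* introduce a loop: c -> b */
    printf("%d\n", has_loop(&a));     /* 1 */
    return 0;
}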

Tree Algorithms

  • B-Trees: Balanced Tree Data Structures - Introduction to B-Trees. Describes searching, splitting and inserting algorithms.

Computational Geometry Algorithms

  • CGAL - Offers open source C++ library of computational geometry algorithms.
  • FastGEO - Offers source code for a library of computational geometry algorithms such as geometrical primitives and predicates, hull construction, triangulation, clipping, rotations and projections using the Object Pascal language.
  • Wykobi - FastGEO library ported to C++.

Phonetic Algorithms

  • Lawrence Philips' Metaphone Algorithm - Describes an algorithm which returns the rough approximation of how an English word sounds. Offers a variety of source code listings for the algorithm.
  • Soundex Algorithms - Compares NYSIIS with Soundex and describes R. C. Russell's soundex algorithm.
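
For illustration, here is a small C sketch of the classic Soundex coding: keep the first letter, map the remaining letters to digit classes, drop vowels and repeated codes, and pad the result to four characters. Soundex variants differ in details such as the treatment of H and W, so treat this as one common formulation rather than the exact algorithms described on the linked pages.

#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Digit class for each letter A..Z; '0' marks letters that are dropped. */
static const char *CODES = "01230120022455012623010202";
/*                          ABCDEFGHIJKLMNOPQRSTUVWXYZ */

void soundex(const char *name, char out[5]) {
    strcpy(out, "0000");                       /* default padding */
    int len = 0;
    char prev = 0;
    for (const char *p = name; *p && len < 4; p++) {
        if (!isalpha((unsigned char)*p)) continue;
        char c = (char)toupper((unsigned char)*p);
        char code = CODES[c - 'A'];
        if (len == 0)
            out[len++] = c;                    /* first letter kept as-is */
        else if (code != '0' && code != prev)
            out[len++] = code;                 /* skip vowels and repeats */
        prev = code;
    }
}

int main(void) {
    char code[5];
    soundex("Robert", code);
    printf("Robert -> %s\n", code);            /* R163 */
    soundex("Rupert", code);
    printf("Rupert -> %s\n", code);            /* also R163 */
    return 0;
}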

Project Management Algorithms

  • Calculations for Critical Path Scheduling - Describes the algorithms for calculating critical paths with both ADM and PDM networks.
  • Resource Leveling Using the Minimum Moment Heuristic - Offers a Windows 3.1 download that includes a .PDF document describing the algorithm.
  • Resource-Constrained Project Scheduling - (.PDF) Describes several algorithms for resource leveling:
    Basic Single Mode RCPSP
    Basic Multi-Mode RCPSP
    Stochastic RCPSP
    Bin Packing related RCPSP
    Multi-resource constrained project scheduling problem (MRCPSP)

Miscellaneous Algorithms

  • AI Horizon - Has a variety of algorithms, from basic computer science data structures such as 2-3 trees to AI-related algorithms such as minimax and a discussion of machine learning algorithms.
  • Hash Algorithms - Overview and source code (in C, Pascal and Java) for many general purpose hashing algorithms.
  • Porter Stemming Algorithm - Describes a process for removing the commoner morphological and inflexional endings from words in English. Its main use is as part of a term normalisation process that is usually done when setting up Information Retrieval systems.
  • Rubik's Cube - Solves a Rubik's Cube using a best-first search algorithm and profile tables.
  • Simulated Annealing - The fundamental idea is to allow moves resulting in solutions of worse quality than the current solution (uphill moves) in order to escape from local minima. The probability of making such a move is decreased during the search. (A minimal sketch of the technique appears after this list.)
  • The Stony Brook Algorithm Repository - Offers a collection of algorithm implementations for over seventy of the most fundamental problems in combinatorial algorithms:
  1. Data Structures - Dictionaries, Priority Queues, Suffix Trees and Arrays, Graph Data Structures, Set Data Structures, Kd-Trees
  2. Numerical Problems - Solving Linear Equations, Bandwidth Reduction, Matrix Multiplication, Determinants and Permanents, Linear Programming/Simplex Method, Random Number Generation, Factoring and Primality Testing, Arbitrary Precision Arithmetic, Knapsack Problem, Discrete Fourier Transform
  3. Combinatorial Problems - Sorting, Searching, Median and Selection, Permutations, Subsets, Partitions, Graphs, Calendrical Calculations, Job Scheduling, Satisfiability
  4. Graph Problems - Polynomial Time Problems (Connected Components, Topological Sorting, Minimum Spanning Tree, Shortest Path, Transitive Closure and Reduction, Matching, Eulerian Cycle / Chinese Postman, Edge and Vertex Connectivity, Network Flow, Drawing Graphs Nicely, Drawing Trees, Planarity Detection and Embedding)
  5. Graph Problems - Hard Problems (Clique, Independent Set, Vertex Cover, Traveling Salesman Problem, Hamiltonian Cycle, Graph Partition, Vertex Coloring, Edge Coloring, Graph Isomorphism, Steiner Tree, Feedback Edge/Vertex Set)
  6. Computational Geometry - Robust Geometric Primitives, Convex Hull, Triangulation, Voronoi Diagrams, Nearest Neighbor Search, Range Search, Point Location, Intersection Detection, Bin Packing, Medial-Axis Transformation, Polygon Partitioning, Simplifying Polygons, Shape Similarity, Motion Planning, Maintaining Line Arrangements, Minkowski Sum
  7. Set and String Problems - Set Cover, Set Packing, String Matching, Approximate String Matching, Text Compression, Cryptography, Finite State Machine Minimization, Longest Common Substring, Shortest Common Superstring
  • NIST Dictionary of Algorithms and Data Structures - Some entries have links to implementations.
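
To make the simulated-annealing entry above concrete, here is a minimal self-contained C sketch. The objective function, starting point, and geometric cooling schedule are arbitrary choices for the demonstration; the essential ingredients are accepting uphill moves with probability exp(-delta/T) and gradually lowering the temperature T. (Compile with the math library, e.g. cc sa.c -lm.)

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* A deliberately bumpy objective with many local minima. */
static double cost(double x) {
    return x * x + 10.0 * sin(3.0 * x);
}

int main(void) {
    srand(12345);
    double x = 8.0;                    /* arbitrary starting solution */
    double best = x;
    double t = 10.0;                   /* initial "temperature" */

    while (t > 1e-3) {
        /* Propose a random neighbour of the current solution. */
        double candidate = x + ((double)rand() / RAND_MAX - 0.5);
        double delta = cost(candidate) - cost(x);

        /* Always accept improvements; accept uphill moves with a
         * probability that shrinks as the temperature falls. */
        if (delta < 0 || (double)rand() / RAND_MAX < exp(-delta / t))
            x = candidate;
        if (cost(x) < cost(best))
            best = x;

        t *= 0.999;                    /* geometric cooling schedule */
    }
    printf("best x = %f, cost = %f\n", best, cost(best));
    return 0;
}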


Dictionary of Algorithms and Data Structures

This web site is hosted in part by the Software Quality Group of the Software Diagnostics and Conformance Testing Division, Information Technology Laboratory.
This is a dictionary of algorithms, algorithmic techniques, data structures, archetypal problems, and related definitions. Algorithms include common functions, such as Ackermann's function. Problems include traveling salesman and Byzantine generals. Some entries have links to implementations and more information. Index pages list entries by area and by type. The two-level index has a total download size about 1/20 that of this page.
Don't use this site to cheat. Teachers, contact us if we can help.
To define or correct terms, please contact Paul E. Black. We need help in automata theory, combinatorics, parallel or randomized algorithms, heuristics, and quantum computing. We do not include algorithms particular to business data processing, communications, operating systems or distributed algorithms, programming languages, AI, graphics, or numerical analysis: it is tough enough covering "general" algorithms and data structures. However, if you want to tackle one of these areas, we'll consider including them.
Some terms with a leading variable, such as n-way, m-dimensional, or p-branching, are under k-. You may find terms dealing with hardware, the computer industry, slang, etc., in the Free On-Line Dictionary Of Computing or in A Glossary of Computer Oriented Abbreviations and Acronyms.

Data Structures and Algorithms Course

By John Morris

This course will focus on data structures and algorithms for manipulating them. Data structures for storing information in tables, lists, trees, queues and stacks will be covered. Some basic graph and discrete transform algorithms will also be discussed.
Good Programs

There are a number of facets to good programs: they must
  • run correctly
  • run efficiently
  • be easy to read and understand
  • be easy to debug and
  • be easy to modify.

The first of these is obvious - programs which don't run correctly are clearly of little use. "Efficiently" is usually understood to mean in the minimum time - but occasionally there will be other constraints, such as memory use, which will be paramount. As will be demonstrated later, better running times will generally be obtained from use of the most appropriate data structures and algorithms, rather than through "hacking", i.e. removing a few statements by some clever coding - or even worse, programming in assembler!

This course will focus on solving problems efficiently: you will be introduced to a number of fundamental data structures and algorithms (or procedures) for manipulating them.


Data Structures and Algorithms

By Alison Cawsey
This course will have two main sections. The first, shorter section will introduce inheritance and polymorphism in object-oriented programming. This material makes the implementation of some data structures more elegant and more reusable. However, this section of the course is fairly independent of the next. The second, longer section will look at different sorts of algorithms, and in particular string processing algorithms and graph algorithms.
There are various concepts from Data Structure & Algorithms which will be assumed. The basic idea of an abstract datatype is important. You also need to be familiar with pointers for some of the first section. You should also be able to implement and use a stack and a queue (with or without the help of a textbook).
Why should you want to know about algorithms? There are a number of possible reasons:
  • To avoid re-inventing the wheel. As you might expect, for many programming problems, someone has already developed a good algorithm to solve that problem. For many of these algorithms, people have formally analysed their properties, so you can be confident in their correctness and efficiency. For example, we know that merge sort and quick sort both work correctly, and have average case complexity O(n log n), so we can straightforwardly use either algorithm in our programs. We might be able to further choose between these two algorithms, depending on properties of our data (is it already almost sorted?).
  • To help when developing your own algorithms. Many of the principles of data abstraction and algorithm design, illustrated by some of the algorithms discussed here, are important in all programming problems. Not all tasks have "off-the-shelf" algorithms available, so you will sometimes have to develop your own. Merge sort illustrates a widely applicable technique: split the problem in two, solve each separately, then combine the results (a minimal sketch of this pattern appears after this list). A knowledge of well-known algorithms will provide a source of ideas that may be applied to new tasks.
  • To help understand tools that use particular algorithms, so you can select the appropriate tool to use. For example, documentation of various compression programs will tell you that pack uses Huffman coding, compress uses LZW, and Gzip uses the Lempel-Ziv algorithm. If you have at least some understanding of these algorithms you will know which is likely to be better and how much reduction in file size you might expect for a given type of file (e.g., does the file involve a lot of repetition of common sequences of characters?).
  • Because they're neat. Many surprisingly simple and elegant algorithms have been invented. If you have even a slightly mathematical disposition you should find them interesting in their own right.
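
As noted in the second point above, merge sort is the textbook instance of the split/solve/combine pattern. The following C sketch is illustrative only: split the array in two, sort each half recursively, then merge the two sorted halves, giving O(n log n) work in both the average and the worst case.

#include <stdio.h>
#include <string.h>

/* Sort a[lo..hi-1] using tmp as scratch space. */
static void merge_sort(int a[], int tmp[], int lo, int hi) {
    if (hi - lo < 2) return;                   /* 0 or 1 element: done */
    int mid = lo + (hi - lo) / 2;
    merge_sort(a, tmp, lo, mid);               /* solve each half */
    merge_sort(a, tmp, mid, hi);
    int i = lo, j = mid, k = lo;
    while (i < mid && j < hi)                  /* combine the results */
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i < mid) tmp[k++] = a[i++];
    while (j < hi)  tmp[k++] = a[j++];
    memcpy(a + lo, tmp + lo, (size_t)(hi - lo) * sizeof a[0]);
}

int main(void) {
    int a[] = {5, 2, 9, 1, 7, 3, 8, 4};
    int tmp[8];
    merge_sort(a, tmp, 0, 8);
    for (int i = 0; i < 8; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}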

For all these reasons it is useful to have a broad familiarity with a range of algorithms, and to know what they may be used for.


Algorithms for programmers

By Jorg Arndt

This is a draft of a book about selected algorithms. The audience in mind are programmers who are interested in the treated algorithms and actually want to create and understand working and reasonably optimized code.
The style varies somewhat, which I do not consider bad per se: while some topics (such as fast Fourier transforms) need a clear and explicit introduction, others (like the bit wizardry chapter) seem best presented by basically showing the code with just a few comments.
The pseudo language Sprache is used when I see a clear advantage to do so, mainly when the corresponding C++ does not appear to be self-explanatory. Larger pieces of code are presented in C++. C programmers do not need to be shocked by the ‘++’ as only a rather minimal set of the C++ features is used. Some of the code, especially in part 3 (Arithmetical algorithms), is given in the pari/gp language as the use of other languages would likely bury the idea in technicalities.

Algorithms for Modular Elliptic Curves

By J. E. Cremona
This book is in three sections. First, we describe in detail an algorithm based on modular symbols for computing modular elliptic curves: that is, one-dimensional factors of the Jacobian of the modular curve $X_0(N)$, which are attached to certain cusp forms for the congruence subgroup $\Gamma_0(N)$. In the second section, various algorithms for studying the arithmetic of elliptic curves (defined over the rationals) are described. These are for the most part not new, but they have not all appeared in book form, and it seemed appropriate to include them here. Lastly, we report on the results obtained when the modular symbols algorithm was carried out for all $N\le1000$. In a comprehensive set of tables we give details of the curves found, together with all isogenous curves (5113 curves in all, in 2463 isogeny classes). [In the first edition, these numbers were given as 5089 and 2447 respectively, as the curves of conductor 702 were inadvertently omitted.] Specifically, we give for each curve the rank and generators for the points of infinite order, the number of torsion points, the regulator, the traces of Frobenius for primes less than 100, and the leading coefficient of the $L$-series at $s=1$; we also give the reduction data (Kodaira symbols, and local constants) for all primes of bad reduction, and information about isogenies...
Contents of Second Edition
  • Introduction
  • Modular symbol algorithms
  • Modular Symbols and Homology
  • The upper half-plane, the modular group and cusp forms
  • The duality between cusp forms and homology
  • Real structure
  • Modular symbol formalism
  • Rational structure and the Manin-Drinfeld Theorem
  • Triangulations and homology
  • M-symbols and $\Gamma_0(N)$
  • Conversion between modular symbols and M-symbols
  • Action of Hecke and other operators
  • Working in $H^+(N)$
  • Modular forms and modular elliptic curves
  • Splitting off one-dimensional eigenspaces
  • $L(f,s)$ and the evaluation of $L(f,1)/\mathrm{period}(f)$
  • Computing Fourier coefficients
  • Computing periods I
  • Computing periods II: Indirect method
  • Computing periods III: Evaluation of the sums
  • Computing $L^{(r)}(f,1)$
  • Obtaining equations for the curves
  • Computing the degree of a modular parametrization
  • Modular Parametrizations
  • Coset representatives and Fundamental Domains
  • Implementation for $\Gamma_0(N)$
  • Appendix to Chapter II. Examples
  • Example 1. N=11
  • Example 2. N=33
  • Example 3. N=37
  • Example 4. N=49
  • Elliptic curve algorithms
  • Terminology and notation
  • The Kraus--Laska--Connell algorithm and Tate's algorithm
  • The Mordell--Weil group I: finding torsion points
  • Heights and the height pairing
  • The Mordell--Weil group II: generators
  • The Mordell--Weil group III: the rank
  • The period lattice
  • Finding isogenous curves
  • Twists and complex multiplication
  • The tables
  • Elliptic curves
  • Mordell--Weil generators
  • Hecke eigenvalues
  • Birch--Swinnerton-Dyer data
  • Parametrization degrees
  • Bibliography

Average Case Analysis of Algorithms on Sequences

by Wojciech Szpankowski
A timely book on a topic that has witnessed a surge of interest over the last decade, owing in part to several novel applications, most notably in data compression and computational molecular biology. It describes methods employed in average case analysis of algorithms, combining both analytical and probabilistic tools in a single volume.
  • Tools are illustrated through problems on words with applications to molecular biology, data compression, security, and pattern matching.
  • Includes chapters on algorithms and data structures on words, probabilistic and analytical models, inclusion-exclusion principles, first and second moment methods, subadditive ergodic theorem and large deviations, elements of information theory, generating functions, complex asymptotic methods, Mellin transform and its applications, and analytic poissonization and depoissonization.
  • Written by an established researcher with a strong international reputation in the field.


Planning Algorithms

By Steven M. LaValle
This book presents a unified treatment of many different kinds of planning algorithms. The subject lies at the crossroads between robotics, control theory, artificial intelligence, algorithms, and computer graphics. The particular subjects covered include motion planning, discrete planning, planning under uncertainty, sensor-based planning, visibility, decision-theoretic planning, game theory, information spaces, reinforcement learning, nonlinear systems, trajectory planning, nonholonomic planning, and kinodynamic planning.
Planning to Plan
Planning is a term that means different things to different groups of people. Robotics addresses the automation of mechanical systems that have sensing, actuation, and computation capabilities (similar terms, such as autonomous systems are also used). A fundamental need in robotics is to have algorithms that convert high-level specifications of tasks from humans into low-level descriptions of how to move. The terms motion planning and trajectory planning are often used for these kinds of problems. A classical version of motion planning is sometimes referred to as the Piano Mover’s Problem. Imagine giving a precise computer-aided design (CAD) model of a house and a piano as input to an algorithm. The algorithm must determine how to move the piano from one room to another in the house without hitting anything. Most of us have encountered similar problems when moving a sofa or mattress up a set of stairs. Robot motion planning usually ignores dynamics and other differential constraints and focuses primarily on the translations and rotations required to move the piano. Recent work, however, does consider other aspects, such as uncertainties, differential constraints, modeling errors, and optimality. Trajectory planning usually refers to the problem of taking the solution from a robot motion planning algorithm and determining how to move along the solution in a way that respects the mechanical limitations of the robot.

Data Structures And Algorithms

CSEd Tools
This book covers some commonly used data structures (such as arrays, linked-lists, trees, and graphs) and algorithms (such as sorting, searching, and various graph algorithms) that are used in Computer Science. The book is intended primarily for the readers who are already familiar with these data structures and algorithms, but would like to get some information on how to perform some advanced or non-trivial operations on/using them. Hence, for each data structure and algorithm, I have generally given a lighter treatment to the basic issues (since they are generally already covered elsewhere) and given more attention to more advanced issues related to them.
The book is actually still being written, and I will add more material to it. So make sure to check this Web site from time to time to see the latest updates.
Note: This book contains several pseudocodes. Implementing these pseudocodes in a programming language will require rewriting them appropriately using language-specific constructs. The pseudocodes may also make some simplifying assumptions, may not check some boundary conditions such as nil input values, and may not check for invalid input values. The solutions provided here may not be the best possible solutions. In fact, simpler and more practical solutions have often been preferred over theoretically better but more complicated solutions.

Object Oriented Datastructures using Java

By Nell Dale, Daniel T. Joyce and Chip Weems
Welcome to the first edition of Object-Oriented Data Structures using Java. This book has been written to present the algorithmic, programming, and structuring techniques of a traditional data structures course in an object-oriented context. You’ll find that all of the familiar topics of lists, stacks, queues, trees, graphs, sorting, searching, Big-O complexity analysis, and recursion are still here, but covered from an object-oriented point of view using Java. Thus, our structures are defined with Java interfaces and encapsulated as Java classes. We use abstract classes and inheritance, as appropriate, to take advantage of the relationships among various versions of the data structures. We use design aids, such as Class-Responsibility-Collaborator (CRC) Cards and Unified Modeling Language (UML) diagrams, to help us model and visualize our classes and their interrelationships. We hope that you enjoy this modern and up-to-date approach to the traditional data structures course.
Abstract Data Types

Over the last 16 years, the focus of the data structures course has broadened considerably. The topic of data structures now has been subsumed under the broader topic of abstract data types (ADTs)—the study of classes of objects whose logical behavior is defined by a set of values and a set of operations.
The term abstract data type describes a domain of values and set of operations that are specified independently of any particular implementation. The shift in emphasis is representative of the move towards more abstraction in computer science education. We now are interested in the study of the abstract properties of classes of data objects in addition to how the objects might be represented in a program.
The data abstraction approach leads us, throughout the book, to view our data structures from three different perspectives: their specification, their application, and their implementation. The specification describes the logical or abstract level. This level is concerned with what the operations are and what they do. The application level, sometimes called the user level, is concerned with how the data type might be used to solve a problem. This level is concerned with why the operations do what they do. The implementation level is where the operations are actually coded. This level is concerned with the how questions.
Using this approach, we stress computer science theory and software engineering principles, including modularization, data encapsulation, information hiding, data abstraction, stepwise refinement, visual aids, the analysis of algorithms, and software verification methods. We feel strongly that these principles should be introduced to computer science students early in their education so that they learn to practice good software techniques from the beginning.
An understanding of theoretical concepts helps students put the new ideas they encounter into place, and practical advice allows them to apply what they have learned. To teach these concepts we consistently use intuitive explanations, even for topics that have a basis in mathematics, like the analysis of algorithms. In all cases, our highest goal has been to make our explanations as readable and as easily understandable as possible.

Data Structures in Java - A Laboratory Course

By Sandra Andersen

To learn a subject such as computer science, you need to immerse yourself in it --learning by doing rather than by simply observing. Through the study of several classic data structures and algorithms, you will become a better informed and more knowledgeable computer science student and programmer. To be able to professionally choose the best algorithm and data structure for a particular set of resource constraints takes practice.

An emphasis on learning by doing is used throughout Data Structures in Java: A Laboratory Course. In each laboratory, you explore a particular data structure by implementing it. As you create an implementation, you learn how the data structure works and how it can be applied. The resulting implementation is a working piece of software that you can use in later laboratories and programming projects.


Algorithms and Data Structures in VLSI Design

By Christoph Meinel and Thorsten Theobald
One of the main problems in chip design is the huge number of possible combinations of individual chip elements, leading to a combinatorial explosion as chips become more complex. New key results in theoretical computer science and in the design of data structures and efficient algorithms can be applied fruitfully here. The application of ordered binary decision diagrams (OBDDs) has led to dramatic performance improvements in many computer-aided design projects. This textbook provides an introduction to the foundations of this interdisciplinary research area, with an emphasis on applications in computer-aided circuit design and formal verification.
Contents
1. Introduction
2. Basics
2.1 Propositions and Predicates - 2.2 Sets, Relations, and Functions - 2.3 Graphs - 2.4 Algorithms and Data Structures - 2.5 Complexity of Algorithms - 2.6 Hashing - 2.7 Finite Automata and Finite State Machines - 2.8 References
Part I: Data Structures for Switching Functions
3. Boolean Functions
3.1 Boolean Algebras - 3.2 Boolean Formulas and Boolean Functions - 3.3 Switching Functions - 3.3.1 Switching Functions with at most Two Variables - 3.3.2 Subfunctions and Shannon's Expansion - 3.3.3 Visual Representation - 3.3.4 Monotone Switching Functions - 3.3.5 Symmetric Functions - 3.3.6 Threshold Functions - 3.3.7 Partial Switching Functions - 3.4 References
4. Classical Representations
4.1 Truth Tables - 4.2 Two-Level Normal Forms - 4.3 Circuits and Formulas - 4.3.1 Circuits - 4.3.2 - Formulas - 4.4 Binary Decision Trees and Diagrams - 4.4.1 Binary Decision Trees - 4.4.2 Branching Programs - 4.4.3 Read-Once Branching Programs - 4.4.4 Complexity of Basic Operations - 4.5 References
5. Requirements on Data Structures in Formal Circuit Verification
5.1 Circuit Verification - 5.2 Formal Verification of Combinational Circuits - 5.3 Formal Verification of Sequential Circuits - 5.4 References
Part II: OBDDs: An Efficient Data Structure
6. OBDDs - Ordered Binary Decision Diagrams
6.1 Notation and Examples - 6.2 Reduced OBDDs: A Canonical Representation of Switching Functions - 6.3 The Reduction Algorithm - 6.4 Basic Constructions - 6.5 Performing Binary Operations and the Equivalence Test - 6.6 References
7. Efficient Implementation of OBDDs
7.1 Key Ideas - 7.1.1 Shared OBDDs - 7.1.2 Unique Table and Strong Canonicity - 7.1.3 ITE Algorithm and Computed Table - 7.1.4 Complemented Edges - 7.1.5 Standard Triples - 7.1.6 Memory Management - 7.2 Some Popular OBDD Packages - 7.2.1 The OBDD Package of Brace, Rudell, and Bryant - 7.2.2 The OBDD Package of Long - 7.2.3 The CUDD Package: Colorado University Decision Diagrams - 7.3 References
8. Influence of the Variable Order on the Complexity of OBDDs
8.1 Connection Between Variable Order and OBDD Size - 8.2 Exponential Lower Bounds - 8.3 OBDDs with Different Variable Orders - 8.4 Complexity of Minimization - 8.5 References
9. Optimizing the Variable Order
9.1 Heuristics for Constructing Good Variable Orders - 9.1.1 The Fan-In Heuristic - 9.1.2 The Weight Heuristic - 9.2 Dynamic Reordering - 9.2.1 The Variable Swap - 9.2.2 Exact Minimization - 9.2.3 Window Permutation - 9.2.4 The Sifting Algorithm - 9.2.5 Block Sifting and Symmetric Sifting - 9.3 Quantitative Statements - 9.4 Outlook - 9.5 References
Part III: Applications and Extensions
10. Analysis of Sequential Systems
10.1 Formal Verification - 10.2 Basic Operators - 10.2.1 Generalized Cofactors - 10.2.2 The Constrain Operator - 10.2.3 Quantification - 10.2.4 The Restrict Operator - 10.3 Reachability Analysis - 10.4 Efficient Image Computation - 10.4.1 Input Splitting - 10.4.2 Output Splitting - 10.4.3 The Transition Relation - 10.4.4 Partitioning the Transition Relation - 10.5 References
11. Symbolic Model Checking
11.1 Computation Tree Logic - 11.2 CTL Model Checking - 11.3 Implementations - 11.3.1 The SMV System - 11.3.2 The VIS System - 11.4 References
12. Variants and Extensions of OBDDs
12.1 Relaxing the Ordering Restriction - 12.2 Alternative Decomposition Types - 12.3 Zero-Suppressed BDDs - 12.4 Multiple-Valued Functions - 12.4.1 Additional Sinks - 12.4.2 Edge Values - 12.4.3 Moment Decompositions - 12.5 References
13. Transformation Techniques for Optimization
13.1 Transformed OBDDs - 13.2 Type-Based Transformations - 13.2.1 Definition - 13.2.2 Circuit Verification - 13.3 Linear Transformations - 13.3.1 Definition - 13.3.2 Efficient Implementation - 13.3.3 Linear Sifting - 13.4 Encoding Transformations - 13.5 References
Bibliography
Index

Data Structures:All Chapters

From Wikibooks, the open-content textbooks collection
Computers can, and usually do, store and process vast amounts of data. Some of this data is of use in limited ways (such as the company's name, the date, etc.) while most application data follows some pattern. For example, student records at a university would typically contain an id-number, names, addresses, contact details, course information, etc. For conceptual and data-processing efficiency reasons this vast amount of data should be structured. That is, the same sort of data, in our student records example, would be provided for each student. Within one application there would typically be many such data structures created. Typically there would also be some conceptual relationships which link the data structures.
There are no hard and fast rules as to how data must be structured. The structures which are adopted will be determined by the purpose to be achieved. Sometimes the structure of the software may be a relevant consideration, as well as the amount of data involved.
The objective is, ultimately, to process the data captured or stored on the computer in the most efficient manner.
Formal data structures enable a programmer to mentally structure the large amounts of data into conceptually manageable relationships.
Sometimes we use data structures to allow us to do more: for example, to accomplish fast searching or sorting of data. Other times, we use data structures so that we can do less: for example, the concept of the stack is a limited form of a more general data structure. These limitations provide us with guarantees that allow us to reason about our programs more easily. Data structures also provide guarantees about algorithmic complexity — choosing an appropriate data structure for a job is crucial for writing good software.
Because data structures are higher-level abstractions, they present to us operations on groups of data, such as adding an item to a list, or looking up the highest-priority item in a queue. When a data structure provides operations, we can call the data structure an abstract data type (sometimes abbreviated as ADT). Abstract data types can minimize dependencies in your code, which is important when your code needs to be changed. Because you are abstracted away from lower-level details, some of the higher-level commonalities one data structure shares with a different data structure can be used to replace one with the other.
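
As a small illustration of the last two paragraphs, here is a C sketch of a stack exposed purely through its operations. Callers depend only on the push, pop and is_empty operations, so the array representation could later be replaced (say, by a linked list) without changing client code. The names and the fixed capacity are arbitrary choices for the example.

#include <stdio.h>

#define STACK_MAX 100

struct stack {
    int items[STACK_MAX];
    int top;                              /* number of items stored */
};

void stack_init(struct stack *s)            { s->top = 0; }
int  stack_is_empty(const struct stack *s)  { return s->top == 0; }

int stack_push(struct stack *s, int v) {
    if (s->top == STACK_MAX) return 0;    /* full */
    s->items[s->top++] = v;
    return 1;
}

int stack_pop(struct stack *s, int *v) {
    if (stack_is_empty(s)) return 0;      /* empty */
    *v = s->items[--s->top];
    return 1;
}

int main(void) {
    struct stack s;
    int v;
    stack_init(&s);
    stack_push(&s, 1);
    stack_push(&s, 2);
    while (stack_pop(&s, &v)) printf("%d\n", v);   /* prints 2 then 1 */
    return 0;
}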
Our programming languages come equipped with a set of built-in types, such as integers and floating-point numbers, that allow us to work with data objects for which the machine's processor has native support. These built-in types are abstractions of what the processor actually provides because built-in types hide details both about their execution and limitations.

Data Structures and Algorithms in C++

By Michael T. Goodrich, Roberto Tamassia and David M. Mount
This book provides a comprehensive introduction to data structures and algorithms, including their design, analysis, and implementation. In terms of the computer science and computer engineering curricula, we have written this book to be primarily focused on the Freshman-Sophomore level Data Structures (CS2) course.
This is a "sister" book to Goodrich-Tamassia, Data Structures and Algorithms in Java (DSAJ), but uses C++ as the basis language instead of Java. This present book maintains the same general structure as DSAJ, so that CS/CE programs that teach data structures in both C++ and Java can share the same core syllabus, with one course using DSAJ and the other using this book.
While this book retains the same pedagogical approach and general structure as DSAJ, the code fragments have been completely redesigned. Because the C++ language supports almost all of Java's basic constructs, it would be tempting to simply translate the code fragments from Java to the corresponding C++ counterparts. We have been careful, however, to make full use of C++'s capabilities and design code in a manner that is consistent with modern C++ usage. In particular, whenever appropriate, we use elements of C++ that are not part of Java, including templated functions and classes, the C++ Standard Template Library (STL), C++ memory allocation and deallocation (and we discuss the associated tricky issues of writing destructors, copy constructors, and assignment operators), virtual functions and virtual class destructors, stream input and output, and C++'s safe run-time casting. However, we have avoided some of C++'s more arcane or easily misused elements, such as pointer arithmetic.
Highlights of this book include:
  • Review of basic features of the C++ programming language
  • Introduction to object-oriented design with C++ and design patterns
  • Consistent object-oriented viewpoint throughout the book
  • Comprehensive coverage of all the data structures taught in a typical CS2 course, including vectors, lists, heaps, hash tables, and search trees
  • Detailed explanation and visualization of sorting algorithms
  • Coverage of graph algorithms and pattern-matching algorithms for more advanced CS2 courses
  • Visual justifications (that is, picture proofs), which make mathematical arguments more understandable for students, appealing to visual learners
  • Motivation of algorithmic concepts with Internet-related applications, such as Web browsers and search engines
  • Accompanying Web site http://datastructures.net with a special password-protected area for instructors.


Algorithmic Information Theory

By G J Chaitin
The aim of this book is to present the strongest possible version of Gödel's incompleteness theorem, using an information-theoretic approach based on the size of computer programs.
One half of the book is concerned with studying Ω, the halting probability of a universal computer if its program is chosen by tossing a coin. The other half of the book is concerned with encoding Ω as an algebraic equation in integers, a so-called exponential diophantine equation.
Gödel's original proof of his incompleteness theorem is essentially the assertion that one cannot always prove that a program will fail to halt. This is equivalent to asking whether it ever produces any output. He then converts this into an arithmetical assertion. Over the years this has been improved; it follows from the work on Hilbert's 10th problem that Gödel's theorem is equivalent to the assertion that one cannot always prove that a diophantine equation has no solutions if this is the case.
In our approach to incompleteness, we shall ask whether or not a program produces an infinite amount of output rather than asking whether it produces any; this is equivalent to asking whether or not a diophantine equation has infinitely many solutions instead of asking whether or not it is solvable.
If one asks whether or not a diophantine equation has a solution for N different values of a parameter, the N different answers to this question are not independent; in fact, they are only log₂ N bits of information. But if one asks whether or not there are infinitely many solutions for N different values of a parameter, then there are indeed cases in which the N different answers to these questions are independent mathematical facts, so that knowing one answer is no help in knowing any of the others. The equation encoding Ω has this property.
When mathematicians can't understand something they usually assume that it is their fault, but it may just be that there is no pattern or law to be discovered!
How to read this book: This entire monograph is essentially a proof of one theorem, Theorem D in Chapter 8. The exposition is completely self-contained, but the collection Chaitin (1987c) is a useful source of background material. While the reader is assumed to be familiar with the basic concepts of recursive function or computability theory and probability theory, at a level easily acquired from Davis (1965) and Feller (1970), we make no use of individual results from these fields that we do not reformulate and prove here. Familiarity with LISP programming is helpful but not necessary, because we give a self-contained exposition of the unusual version of pure LISP that we use, including a listing of an interpreter. For discussions of the history and significance of metamathematics, see Davis (1978), Webb (1980), Tymoczko (1986), and Rucker (1987).
Although the ideas in this book are not easy, we have tried to present the material in the most concrete and direct fashion possible. We give many examples, and computer programs for key algorithms. In particular, the theory of program-size in LISP presented in Chapter 5 and Appendix B, which has not appeared elsewhere, is intended as an illustration of the more abstract ideas in the following chapters.

Algorithms and Complexity

by Herbert S. Wilf
For the past several years mathematics majors in the computing track at the University of Pennsylvania have taken a course in continuous algorithms (numerical analysis) in the junior year, and in discrete algorithms in the senior year. This book has grown out of the senior course as I have been teaching it recently. It has also been tried out on a large class of computer science and mathematics majors, including seniors and graduate students, with good results.
Selection by the instructor of topics of interest will be very important, because normally I’ve found that I can’t cover anywhere near all of this material in a semester. A reasonable choice for a first try might be to begin with Chapter 2 (recursive algorithms) which contains lots of motivation. Then, as new ideas are needed in Chapter 2, one might delve into the appropriate sections of Chapter 1 to get the concepts and techniques well in hand. After Chapter 2, Chapter 4, on number theory, discusses material that is extremely attractive, and surprisingly pure and applicable at the same time. Chapter 5 would be next, since the foundations would then all be in place. Finally, material from Chapter 3, which is rather independent of the rest of the book, but is strongly connected to combinatorial algorithms in general, might be studied as time permits.
Throughout the book there are opportunities to ask students to write programs and get them running. These are not mentioned explicitly, with a few exceptions, but will be obvious when encountered. Students should all have the experience of writing, debugging, and using a program that is nontrivially recursive, for example. The concept of recursion is subtle and powerful, and is helped a lot by hands-on practice. Any of the algorithms of Chapter 2 would be suitable for this purpose. The recursive graph algorithms are particularly recommended since they are usually quite foreign to students’ previous experience and therefore have great learning value.
In addition to the exercises that appear in this book, then, student assignments might consist of writing occasional programs, as well as delivering reports in class on assigned readings. The latter might be found among the references cited in the bibliographies in each chapter.
I am indebted first of all to the students on whom I worked out these ideas, and second to a number of colleagues for their helpful advice and friendly criticism. Among the latter I will mention Richard Brualdi, Daniel Kleitman, Albert Nijenhuis, Robert Tarjan and Alan Tucker. For the no-doubt-numerous shortcomings that remain, I accept full responsibility.
This book was typeset in TeX. To the extent that it’s a delight to look at, thank TeX. For the deficiencies in its appearance, thank my limitations as a typesetter. It was, however, a pleasure for me to have had the chance to typeset my own book. My thanks to the Computer Science department of the University of Pennsylvania, and particularly to Aravind Joshi, for generously allowing me the use of TeX facilities.

Sorting and Searching Algorithms: A Cookbook

By Thomas Niemann

This is a collection of algorithms for sorting and searching. Descriptions are brief and intuitive, with just enough theory thrown in to make you nervous. I assume you know C, and that you are familiar with concepts such as arrays and pointers.
The first section introduces basic data structures and notation. The next section presents several sorting algorithms. This is followed by techniques for implementing dictionaries, structures that allow efficient search, insert, and delete operations. The last section illustrates algorithms that sort data and implement dictionaries for very large files. Source code for each algorithm, in ANSI C, is available at the site listed below.

Introduction
Arrays and linked lists are two basic data structures used to store information. We may wish to search, insert or delete records in a database based on a key value. This section examines the performance of these operations on arrays and linked lists...
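
As a hedged illustration of the kind of key-based searching the cookbook goes on to cover (not code taken from the book itself), here is binary search on a sorted array in ANSI-style C: the O(log n) lookup that a sorted array makes possible, in contrast to the O(n) scan needed for an unsorted array or a linked list.

#include <stdio.h>

/* Return the index of key in the sorted array a[0..n-1], or -1 if absent. */
int binary_search(const int a[], int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;      /* avoids overflow of lo + hi */
        if (a[mid] == key)
            return mid;
        else if (a[mid] < key)
            lo = mid + 1;                  /* key is in the upper half */
        else
            hi = mid - 1;                  /* key is in the lower half */
    }
    return -1;
}

int main(void) {
    int a[] = {2, 3, 5, 7, 11, 13, 17};
    printf("%d\n", binary_search(a, 7, 11));   /* 4 */
    printf("%d\n", binary_search(a, 7, 4));    /* -1 */
    return 0;
}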


Problems on Algorithms

By William Gasarch and Ian Parberry
The ability to devise effective and efficient algorithms in new situations is a skill that separates the master programmer from the merely adequate coder. The best way to develop that skill is to solve problems. To be effective problem solvers, master-programmers-in-training must do more than memorize a collection of standard techniques and applications -- they must in addition be able to internalize and integrate what they have learned and apply it in new circumstances. This book is a collection of problems on the design, analysis, and verification of algorithms for use by practicing programmers who wish to hone and expand their skills, as a supplementary text for students enrolled in an undergraduate or beginning graduate class on algorithms, and as a self-study text for graduate students who are preparing for the qualifying (often called "breadth" or "comprehensive") examination on algorithms for a Ph.D. program in Computer Science or Computer Engineering. It is intended to augment the problem sets found in any standard algorithms textbook.
Recognizing that a supplementary text must be cost-effective if it is to be useful, the author made two important and perhaps controversial decisions in order to keep its length within reasonable bounds. The first is to cover only what are considered to be the most important areas of algorithm design and analysis. Although most instructors throw in a "fun" advanced topic such as amortized analysis, computational geometry, approximation algorithms, number-theoretic algorithms, randomized algorithms, or parallel algorithms, the author chose not to cover these areas. The second decision is not to search for the origin of the problems used. A lengthy discussion of the provenance of each problem would help make this book more scholarly, but would not make it more attractive for its intended audience -- students and practicing programmers.

Graph-Theoretic Algorithms

By Therese Biedl
This document results from graduate courses that I taught at the University of Waterloo in Fall 1999, Winter 2002 and Winter 2004. Students were asked to take lecture notes in LATEX, which I then compiled and edited for this document.
The topic of graph algorithms is so unbelievably big that it is entirely hopeless to want to hold just one course about them. Even focusing on advanced graph algorithms (whatever "advanced" may mean) doesn't help, because there are still too many algorithms to cover.
The focus of this course was therefore even more specialized: I wanted to look at problems which are "normally" difficult (by which I mean a high polynomial running-time, or even NP-complete), but which can be solved faster when focusing on a specialized subclass. These notes assume that the reader is familiar with the following topics:
  • Data structures, in particular lists, stacks, queues, trees, bucket sorting, graph traversals.
  • Graph theory and combinatorial optimization, in particular graph definitions, shortest paths, flows, matchings.
  • Analysis of algorithms, in particular the O-notation and NP-completeness.

For some of these topics, a brief review is given here (see the appendix), but for more details, the reader is referred to for example [CLRS00] for data structures and analysis of algorithms, and [Gib85] for graph theory and combinatorial optimization.


Foundations of Computer Science

By Lawrence C Paulson

This course has two objectives. First (and obvious) is to teach programming. Second is to present some fundamental principles of computer science, especially algorithm design. Most students will have some programming experience already, but there are few people whose programming cannot be improved through greater knowledge of basic principles. Please bear this point in mind if you have extensive experience and find parts of the course rather slow.
The programming in this course is based on the language ML and mostly concerns the functional programming style. Functional programs tend to be shorter and easier to understand than their counterparts in conventional languages such as C. In the space of a few weeks, we shall be able to cover most of the forms of data structures seen in programming. The course also covers basic methods for estimating efficiency.
Courses in the Computer Laboratory are now expected to supply a Learning Guide to suggest extra reading, discussion topics, exercises and past exam questions. For this course, such material is attached at the end of each lecture. Extra reading is mostly drawn from my book ML for the Working Programmer (second edition), which also contains many exercises.

Contents
Introduction
Recursive Functions
O Notation: Estimating Costs in the Limit
Lists
More on Lists
Sorting
Datatypes and Trees
Dictionaries and Functional Arrays
Queues and Search Strategies
Functions as Values
List Functionals
Polynomial Arithmetic
Sequences, or Lazy Lists
Elements of Procedural Programming
Linked Data Structures


Efficient Algorithms for Sorting and Synchronization

By Andrew Tridgell

This thesis presents efficient algorithms for internal and external parallel sorting and remote data update. The sorting algorithms approach the problem by concentrating first on highly efficient but incorrect algorithms followed by a cleanup phase that completes the sort. The remote data update algorithm, rsync, operates by exchanging block signature information followed by a simple hash search algorithm for block matching at arbitrary byte boundaries. The last chapter of the thesis examines a number of related algorithms for text compression, differencing and incremental backup.
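
The "block matching at arbitrary byte boundaries" mentioned above depends on a weak checksum that can be slid along the data one byte at a time in constant time. The C sketch below shows the general rolling-checksum idea using an Adler-style sum; it illustrates the principle only and is not the exact checksum or parameters used by rsync.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* a = sum of the bytes in the window; b = sum of a after each byte
 * (bytes weighted by how long they have been in the window).  Both can
 * be updated in O(1) when the window slides one byte to the right. */
struct rollsum { uint32_t a, b; };

void rollsum_init(struct rollsum *s, const unsigned char *buf, int len) {
    s->a = s->b = 0;
    for (int i = 0; i < len; i++) {
        s->a += buf[i];
        s->b += s->a;
    }
}

/* Slide a window of length len one byte: drop `out`, append `in`. */
void rollsum_rotate(struct rollsum *s, unsigned char out, unsigned char in,
                    int len) {
    s->a += in - out;
    s->b += s->a - (uint32_t)len * out;
}

uint32_t rollsum_digest(const struct rollsum *s) {
    return (s->b << 16) | (s->a & 0xffff);
}

int main(void) {
    const unsigned char *data = (const unsigned char *)"hello rolling checksum";
    int len = 4;                                    /* window of 4 bytes */
    struct rollsum s;
    rollsum_init(&s, data, len);
    for (int i = 0; i + len < (int)strlen((const char *)data); i++) {
        printf("offset %2d: %08x\n", i, (unsigned)rollsum_digest(&s));
        rollsum_rotate(&s, data[i], data[i + len], len);
    }
    return 0;
}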
Internal Parallel Sorting
My first introduction to the problem of parallel sorting came from a problem in the implementation of an automatic speech recognition training program. A set of speech data needed to be normalized in order to be used as the input to a recurrent neural network system and I decided that a quick-and-dirty way of doing this would be to sort the data, then sample it at regular intervals to generate a histogram.

Data Structures and Algorithms with Object-Oriented Design Patterns in C++

By Bruno R. Preiss
This book was motivated by my experience in teaching the course E&CE 250: Algorithms and Data Structures in the Computer Engineering program at the University of Waterloo. I have observed that the advent of object-oriented methods and the emergence of object-oriented design patterns has led to a profound change in the pedagogy of data structures and algorithms. The successful application of these techniques gives rise to a kind of cognitive unification: Ideas that are disparate and apparently unrelated seem to come together when the appropriate design patterns and abstractions are used.
This paradigm shift is both evolutionary and revolutionary. On the one hand, the knowledge base grows incrementally as programmers and researchers invent new algorithms and data structures. On the other hand, the proper use of object-oriented techniques requires a fundamental change in the way the programs are designed and implemented. Programmers who are well schooled in the procedural ways often find the leap to objects to be a difficult one.

Data Structures and Algorithms with Object-Oriented Design Patterns in C#

By Bruno R. Preiss
Object-Oriented Design
Traditional approaches to the design of software have been either data oriented or process oriented. Data-oriented methodologies emphasize the representation of information and the relationships between the parts of the whole. The actions which operate on the data are of less significance. On the other hand, process-oriented design methodologies emphasize the actions performed by a software artifact; the data are of lesser importance.
It is now commonly held that object-oriented methodologies are more effective for managing the complexity which arises in the design of large and complex software artifacts than either data-oriented or process-oriented methodologies. This is because data and processes are given equal importance. Objects are used to combine data with the procedures that operate on that data. The main advantage of using objects is that they provide both abstraction and encapsulation.

Combinatorial Algorithms

By Jeff Erickson

This PDF book contains senior-level algorithms lecture notes for computer engineering students. The topics covered are:
  • Notes on solving recurrences
  • Introduction
  • Divide and conquer
  • Dynamic programming
  • Randomization: Nuts and bolts
  • Randomized treaps
  • Randomized mincut
  • Uniform and universal hashing
  • Skip lists
  • Amortized analysis
  • Scapegoat trees and splay trees
  • Union-find
  • Fibonacci heaps
  • Graph algorithms
  • Representing and searching graphs
  • Single-source shortest paths
  • All-pairs shortest paths
  • Minimum spanning trees
  • Seminumerical algorithms
  • Fast Fourier transforms
  • Number-theoretic algorithms
  • String matching
  • String matching: Rabin-Karp
  • String matching: Knuth-Morris-Pratt
  • Computational Geometry
  • Convex hulls
  • Plane sweep algorithms
  • Polygon triangulation
  • Lower bounds
  • Information theory
  • Adversary arguments
  • Reductions and transformations
  • NP-hard, NP-easy, and NP-complete problems


Art of Programming Contest - C Programming Tutorials | Data Structures | Algorithms

Compiled by Ahmed Shamsul Arefin

Review Note

A computer programming contest is a pleasurable event for budding programmers, but only a few books are available as training manuals for programming competitions. This book is designed to serve as a textbook for an algorithm course focusing on programming as well as a programming course focusing on algorithms. The book is specially designed to train students to participate in competitions such as the ACM International Collegiate Programming Contest.
The book covers several important topics related to the development of programming skills such as fundamental concepts of contests, game plan for a contest, essential data structures for contests, input/output techniques, the brute force method, mathematics, sorting, searching, greedy algorithms, dynamic programming, graphs, computational geometry, Valladolid Online Judge problem categories, selected ACM programming problems, common codes/routines for programming, the Standard Template Library (STL), PC2 contest administration and a team guide. The book also lists some important websites/books for ACM/ICPC programmers.
I believe that the book will be of immense use for young programmers interested in taking part in programming competitions.
Dr. M. Lutfar Rahman
Professor, Department of Computer Science and Engineering (CSE)
University of Dhaka.
Bangladesh.

The FXT library: Fast transforms and low level algorithms

Here you find
  • the FXT package
  • the "Algorithms for programmers" text (the fxtbook)
  • the amorphous FFT bucket moved to the fftpage

The FXT library

Latest FXT version: fxt-2007.01.13.tgz (approx. 950kB), distributed under the GPL.

Here are some (well, more than 200) demos.

Much of the low level bit-magic code is shown on the bit wizardry page.

Read the short description in description.txt and the Linux Software Map (LSM) file fxt.lsm.

Generated doc-oids:

  • aux0-doc.txt auxiliary routines
  • aux1-doc.txt auxiliary routines for 1-dim arrays
  • aux2-doc.txt auxiliary routines for 2-dim arrays
  • bits-doc.txt bit wizardry
  • bpol-doc.txt binary polynomials and arithmetic over GF(2**n)
  • comb-doc.txt combinatorics (combinations, partitions etc.)
  • convolution-doc.txt convolution
  • mult-doc.txt multiplication of large numbers
  • correlation-doc.txt correlation
  • dctdst-doc.txt cosine and sine transforms
  • ds-doc.txt data structures (FIFO, heap etc.)
  • fft-doc.txt fast Fourier transforms
  • realfft-doc.txt real valued FFTs
  • fht-doc.txt fast Hartley transforms
  • walsh-doc.txt Walsh transforms
  • haar-doc.txt Haar transforms
  • mod-doc.txt modular arithmetic and number theory
  • ntt.h-doc.txt number theoretic transforms
  • perm-doc.txt permutations
  • sort-doc.txt sorting and searching
  • data-doc.txt tables of mathematical data. The tables can also be accessed via the mathdata page.
Generated index of all docs


Algorithms

by S. Dasgupta, C.H. Papadimitriou, and U.V. Vazirani
Prologue

Look around you. Computers and networks are everywhere, enabling an intricate web of complex human activities: education, commerce, entertainment, research, manufacturing, health management, human communication, even war. Of the two main technological underpinnings of this amazing proliferation, one is obvious: the breathtaking pace with which advances in microelectronics and chip design have been bringing us faster and faster hardware.

This book tells the story of the other intellectual enterprise that is crucially fueling the computer revolution:

Books and algorithms
Two ideas changed the world. In 1448 in the German city of Mainz a goldsmith named Johann Gutenberg discovered a way to print books by putting together movable metallic pieces.
Literacy spread, the Dark Ages ended, the human intellect was liberated, science and technology triumphed, the Industrial Revolution happened. Many historians say we owe all this to typography. Imagine a world in which only an elite could read these lines! But others insist that the key development was not typography, but algorithms....


Java Data Structures (2nd edition)

© 1996-2001, Particle
Welcome to Java Data Structures (2nd edition). This document was created with an intent to show people how easy Java really is, and to clear up a few things I've missed in the previous release of the document.
In contrast to what most people think about Java, it being a language with no pointers, data structures are quite easy to implement. In this section, I'll demonstrate a few basic data structures. By learning how easy they are to implement in Java, you'll be able to write any implementation yourself.
I also think that this document is a pretty good introduction to Data Structures in general. All these concepts can be applied in any programming language.

Data Structures

By Peter M Williams
These notes provide an introduction to some of the most commonly occurring data structures. The language used is Java. The aim is not the greatest generality. The DataStructures package developed here is not as extensive as the Collections framework, first released with Java 1.2. For portable applications, you should use the Collections framework where possible.
The DataStructures package, however, includes graphs which are not currently in the Collections framework; and the greater simplicity of the DataStructures package makes it more suitable as a basis for learning about fundamental principles of data structures and algorithms.

Data Structures and Algorithms with Object-Oriented Design Patterns in Java

By Bruno R. Preiss
The primary goal of this book is to promote object-oriented design using Java and to illustrate the use of the emerging object-oriented design patterns. Experienced object-oriented programmers find that certain ways of doing things work best and that these ways occur over and over again. The book shows how these patterns are used to create good software designs. In particular, the following design patterns are used throughout the text: singleton, container, enumeration, adapter and visitor.
Virtually all of the data structures are presented in the context of a single, unified, polymorphic class hierarchy. This framework clearly shows the relationships between data structures and it illustrates how polymorphism and inheritance can be used effectively. In addition, algorithmic abstraction is used extensively when presenting classes of algorithms. By using algorithmic abstraction, it is possible to describe a generic algorithm without having to worry about the details of a particular concrete realization of that algorithm.
A secondary goal of the book is to present mathematical tools just in time. Analysis techniques and proofs are presented as needed and in the proper context. In the past when the topics in this book were taught at the graduate level, an author could rely on students having the needed background in mathematics. However, because the book is targeted for second- and third-year students, it is necessary to fill in the background as needed. To the extent possible without compromising correctness, the presentation fosters intuitive understanding of the concepts rather than mathematical rigor.
