# Data Structures & Algorithms in Dart

Take your programming skills to the next level. Learn to build stacks, queues, trees, graphs, and efficient sorting and searching algorithms from scratch. By Jonathan Sande, Vincent Ngo & Kelvin Lau.


## Who is this for?

This book is for programmers who are familiar with the Dart language but would like to improve the efficiency of their code and take their skills to the next level.

## Covered concepts

• Big O Notation
• Dart Lists, Sets and Maps
• Stacks
• Queues
• Ring Buffers
• Binary Trees
• AVL Trees
• Tries
• Heaps
• Priority Queues
• Graphs
• Binary Search
• Depth-First Search
• Bubble Sort
• Selection Sort
• Insertion Sort
• Merge Sort
• Heapsort
• Quicksort
• Dijkstra’s Algorithm

Perhaps you’ve heard about Big O notation, stacks and queues, or bubble sort and quicksort. You’d like to learn more, but it’s hard to find any good examples and explanations that use your favorite programming language, Dart.

Data Structures & Algorithms in Dart is here to help with in-depth explanations,...


## Before You Begin

This section tells you a few things you need to know before you get started, such as what you’ll need for hardware and software, where to find the project files for this book, and more.


## Section I: Introduction

The chapters in this short but essential section will provide the foundation and motivation for the study of data structures and algorithms. You’ll also get a quick rundown of the Dart core library, which you’ll use as a basis for creating your own data structures and algorithms.

• Chapter 1: Why Learn Data Structures & Algorithms?: Data structures are a well-studied area, and the concepts are language agnostic; a data structure from C is functionally and conceptually identical to the same data structure in any other language, such as Dart. At the same time, the high-level expressiveness of Dart makes it an ideal choice for learning these core concepts without sacrificing too much performance.

• Chapter 2: Complexity: Answering the question, “Does it scale?” is all about understanding the complexity of an algorithm. Big-O notation is the primary tool you use to think about algorithmic performance in the abstract, independent of hardware or language. This chapter will prepare you to think in these terms.

• Chapter 3: Basic Data Structures in Dart: The `dart:core` library includes basic data structures that are used widely in many applications. These include `List`, `Map` and `Set`. Understanding how they function will give you a foundation to work from as you proceed through the book and begin creating your own data structures from scratch.
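As a taste of Chapter 3, here is a minimal sketch of the three core collections using only standard Dart; the literal values are illustrative:

```dart
void main() {
  // List: an ordered collection with O(1) index-based access.
  final list = [3, 1, 2];
  list.add(4); // amortized O(1) append
  print(list); // [3, 1, 2, 4]

  // Set: unordered, with near-constant-time membership checks.
  final set = {1, 2, 3};
  print(set.contains(2)); // true

  // Map: key-value pairs with near-constant-time lookup by key.
  final map = {'a': 1, 'b': 2};
  print(map['a']); // 1
}
```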


## Section II: Elementary Data Structures

This section looks at a few important data structures that are not found in the `dart:core` library but form the basis of more advanced algorithms covered in future sections. All are collections optimized for and enforcing a particular access pattern.

The `dart:collection` library, which comes with Dart, does contain `LinkedList` and `Queue` classes. However, learning to build these data structures yourself is why you’re reading this book, isn’t it?

Even with just these basics, you’ll begin thinking “algorithmically” and seeing the connection between data structures and algorithms.

• Chapter 4: Stacks: The stack data structure is similar in concept to a physical stack of objects. When you add an item to a stack, you place it on top of the stack. When you remove an item from a stack, you always remove the top-most item. Stacks are useful and also exceedingly simple. The main goal of building a stack is to enforce how you access your data.

• Chapter 5: Linked Lists: A linked list is a collection of values arranged in a linear, unidirectional sequence. It has some theoretical advantages over contiguous storage options such as Dart’s `List`, including constant time insertion and removal from the front of the list.

• Chapter 6: Queues: Lines are everywhere, whether you are lining up to buy tickets to your favorite movie or waiting for a printer to print out your documents. These real-life scenarios mimic the queue data structure. Queues use first-in-first-out ordering, meaning the first enqueued element will be the first to get dequeued. Queues are handy when you need to maintain the order of your elements to process later.
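The access pattern that Chapter 4 describes can be sketched in a few lines. This is an illustrative `List`-backed version, not necessarily the implementation the book builds:

```dart
// A minimal generic stack backed by a growable List.
class Stack<E> {
  final _storage = <E>[];

  bool get isEmpty => _storage.isEmpty;
  E get peek => _storage.last;

  void push(E element) => _storage.add(element); // amortized O(1)
  E pop() => _storage.removeLast(); // O(1): always the top-most item
}

void main() {
  final stack = Stack<int>();
  stack.push(1);
  stack.push(2);
  print(stack.pop()); // 2 — last in, first out
}
```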

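First-in-first-out ordering is easy to see with the ready-made `Queue` from `dart:collection`; in Chapter 6 you build your own instead:

```dart
import 'dart:collection';

void main() {
  // FIFO: the first enqueued element is the first to get dequeued.
  final queue = Queue<String>();
  queue.addLast('first');
  queue.addLast('second');
  print(queue.removeFirst()); // first
  print(queue.removeFirst()); // second
}
```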

## Section III: Trees

Trees are another way to organize information, introducing the concept of children and parents. You’ll take a look at the most common tree types and see how they can be used to solve specific computational problems. Trees are a handy way to organize information when performance is critical. Having them in your tool belt will undoubtedly prove to be useful throughout your career.

• Chapter 7: Trees: The tree is a data structure of profound importance. It’s used to tackle many recurring challenges in software development, such as representing hierarchical relationships, managing sorted data, and facilitating fast lookup operations. There are many types of trees, and they come in various shapes and sizes.

• Chapter 8: Binary Trees: In the previous chapter, you looked at a basic tree where each node can have many children. A binary tree is a tree where each node has at most two children, often referred to as the left and right children. Binary trees serve as the basis for many tree structures and algorithms. In this chapter, you’ll build a binary tree and learn about the three most important tree traversal algorithms.

• Chapter 9: Binary Search Trees: A binary search tree facilitates fast lookup, addition, and removal operations. Each operation has an average time complexity of O(log n), which is considerably faster than linear data structures such as lists and linked lists.

• Chapter 10: AVL Trees: In the previous chapter, you learned about the O(log n) performance characteristics of the binary search tree. However, you also learned that unbalanced trees can deteriorate the performance of the tree, all the way down to O(n). In 1962, Georgy Adelson-Velsky and Evgenii Landis came up with the first self-balancing binary search tree: the AVL Tree.

• Chapter 11: Tries: The trie (pronounced as “try”) is a tree that specializes in storing data that can be represented as a collection, such as English words. The benefits of a trie are best illustrated by looking at it in the context of prefix matching, which you’ll do in this chapter.

• Chapter 12: Binary Search: Binary search is one of the most efficient searching algorithms with a time complexity of O(log n). You’ve already implemented a binary search once using a binary search tree. In this chapter, you’ll reimplement binary search on a sorted list.

• Chapter 13: Heaps: A heap is a complete binary tree that can be constructed using a list. Heaps come in two flavors: max-heaps and min-heaps. In this chapter, you’ll focus on creating and manipulating heaps. You’ll see how convenient heaps make it to fetch the minimum or maximum element of a collection.

• Chapter 14: Priority Queues: Queues are simply lists that maintain the order of elements using first-in-first-out (FIFO) ordering. A priority queue is another version of a queue that dequeues elements in priority order instead of FIFO order. A priority queue is especially useful when identifying the maximum or minimum value given a list of elements.
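As a preview of Chapter 12, here is a sketch of iterative binary search on a sorted list of ints; the book's version is more general, but the halving idea is the same:

```dart
// Returns the index of target in a sorted list, or null if absent.
// Each iteration halves the search range, giving O(log n) time.
int? binarySearch(List<int> sorted, int target) {
  var low = 0;
  var high = sorted.length - 1;
  while (low <= high) {
    final mid = low + (high - low) ~/ 2;
    if (sorted[mid] == target) return mid;
    if (sorted[mid] < target) {
      low = mid + 1; // target is in the upper half
    } else {
      high = mid - 1; // target is in the lower half
    }
  }
  return null; // not found
}

void main() {
  print(binarySearch([1, 3, 5, 7, 9], 7)); // 3
}
```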


## Section IV: Sorting Algorithms

Putting lists in order is a classical computational problem. Although you may never need to write your own sorting algorithm, studying this topic has many benefits. This section will teach you about stability, best- and worst-case times, and the all-important technique of divide and conquer.

Studying sorting may seem a bit academic and disconnected from the “real world” of app development, but understanding the tradeoffs for these simple cases will lead you to a better understanding of how to analyze any algorithm.

• Chapter 15: O(n²) Sorting Algorithms: O(n²) time complexity isn’t great performance, but the sorting algorithms in this category are easy to understand and useful in some scenarios. These algorithms are space-efficient and only require constant O(1) additional memory space. In this chapter, you’ll look at the bubble sort, selection sort and insertion sort algorithms.

• Chapter 16: Merge Sort: Merge sort, with a time complexity of O(n log n), is one of the fastest of the general-purpose sorting algorithms. The idea behind merge sort is to divide and conquer: to break up a big problem into several smaller, easier to solve problems and then combine those solutions into a final result. The merge sort mantra is to split first and merge later.

• Chapter 17: Radix Sort: In this chapter, you’ll look at a completely different model of sorting. So far, you’ve been relying on comparisons to determine the sorting order. Radix sort is a non-comparative algorithm for sorting integers.

• Chapter 18: Heapsort: Heapsort is a comparison-based algorithm that sorts a list in ascending order using a heap. This chapter builds on the heap concepts presented in Chapter 13, “Heaps”. Heapsort takes advantage of a heap being, by definition, a partially sorted binary tree.

• Chapter 19: Quicksort: Quicksort is another comparison-based sorting algorithm. Much like merge sort, it uses the same strategy of divide and conquer. In this chapter, you’ll implement quicksort and look at various partitioning strategies to get the most out of this sorting algorithm.
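The "split first and merge later" mantra from Chapter 16 can be sketched like this; it is an illustrative int-only version rather than the book's generic one:

```dart
// Divide: recursively split the list in half until pieces are trivial.
List<int> mergeSort(List<int> list) {
  if (list.length < 2) return list; // a 0- or 1-element list is sorted
  final middle = list.length ~/ 2;
  final left = mergeSort(list.sublist(0, middle));
  final right = mergeSort(list.sublist(middle));
  return _merge(left, right);
}

// Conquer: combine two sorted lists into one sorted result.
List<int> _merge(List<int> left, List<int> right) {
  var i = 0, j = 0;
  final result = <int>[];
  while (i < left.length && j < right.length) {
    if (left[i] <= right[j]) {
      result.add(left[i++]);
    } else {
      result.add(right[j++]);
    }
  }
  // Append whatever remains of either half.
  return result
    ..addAll(left.sublist(i))
    ..addAll(right.sublist(j));
}

void main() {
  print(mergeSort([7, 2, 6, 3, 9])); // [2, 3, 6, 7, 9]
}
```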


## Section V: Graphs

Graphs are an instrumental data structure that can model a wide range of things: webpages on the internet, the migration patterns of birds, even protons in the nucleus of an atom. This section gets you thinking deeply (and broadly) about using graphs and graph algorithms to solve real-world problems.

• Chapter 20: Graphs: What do social networks have in common with booking cheap flights around the world? You can represent both of these real-world models as graphs. A graph is a data structure that captures relationships between objects. It’s made up of vertices connected by edges. In a weighted graph, every edge has a weight associated with it that represents the cost of using this edge. These weights let you choose the cheapest or shortest path between two vertices.

• Chapter 21: Breadth-First Search: In the previous chapter, you explored using graphs to capture relationships between objects. Several algorithms exist to traverse or search through a graph’s vertices. One such algorithm is the breadth-first search algorithm, which visits the closest vertices around the starting point before moving on to further vertices.

• Chapter 22: Depth-First Search: In contrast to the breadth-first search, which explores close neighboring vertices before far ones, the depth-first search attempts to explore one branch as far as possible before backtracking and visiting another branch.

• Chapter 23: Dijkstra’s Algorithm: Dijkstra’s algorithm finds the shortest paths between vertices in weighted graphs. This algorithm will bring together a number of data structures that you’ve learned throughout the book, including graphs, trees, priority queues, heaps, maps, sets and lists.
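To make the traversal idea concrete, here is a sketch of breadth-first search over a simple adjacency-map graph; the book builds a richer `Graph` class, so both the representation and the function below are illustrative:

```dart
import 'dart:collection';

// Visits vertices in order of distance from the source: all of the
// source's neighbors come before any vertex two edges away.
List<String> breadthFirstSearch(
    Map<String, List<String>> graph, String source) {
  final visited = <String>{source};
  final queue = Queue<String>()..add(source);
  final order = <String>[];
  while (queue.isNotEmpty) {
    final vertex = queue.removeFirst();
    order.add(vertex);
    for (final neighbor in graph[vertex] ?? const <String>[]) {
      // Set.add returns true only if the element was not already present.
      if (visited.add(neighbor)) queue.add(neighbor);
    }
  }
  return order;
}

void main() {
  final graph = {
    'A': ['B', 'C'],
    'B': ['D'],
    'C': ['D'],
    'D': <String>[],
  };
  print(breadthFirstSearch(graph, 'A')); // [A, B, C, D]
}
```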


## Section VI: Challenge Solutions

This section contains all of the solutions to the challenges throughout the book. They’re printed here for your convenience and to aid your understanding, but you’ll receive the most benefit if you attempt to solve the challenges yourself before looking at the answers.

The code for all of the solutions is also available for download in the supplemental materials that accompany this book.

Solutions to the challenges in Chapter 4, "Stacks".
Solutions to the challenges in Chapter 5, "Linked Lists".
Solutions to the challenges in Chapter 6, "Queues".
Solutions to the challenges in Chapter 7, "Trees".
Solutions to the challenges in Chapter 8, "Binary Trees".
Solutions to the challenges in Chapter 9, "Binary Search Trees".
Solutions to the challenges in Chapter 10, "AVL Trees".
Solutions to the challenges in Chapter 11, "Tries".
Solutions to the challenges in Chapter 12, "Binary Search".
Solutions to the challenges in Chapter 13, "Heaps".
Solutions to the challenges in Chapter 14, "Priority Queues".
Solutions to the challenges in Chapter 15, "O(n²) Sorting Algorithms".
Solutions to the challenges in Chapter 16, "Merge Sort".
Solutions to the challenges in Chapter 17, "Radix Sort".
Solutions to the challenges in Chapter 18, "Heapsort".
Solutions to the challenges in Chapter 19, "Quicksort".
Solutions to the challenges in Chapter 20, "Graphs".
Solutions to the challenges in Chapter 21, "Breadth-First Search".
Solutions to the challenges in Chapter 22, "Depth-First Search".
Solutions to the challenges in Chapter 23, "Dijkstra’s Algorithm".


## Contributors

• Jonathan Sande: Author
• Vincent Ngo: Author
• Kelvin Lau: Author
• Final Pass Editor
• Pagesetter