# Chapter 2: Data Models and Query Languages
Data models are perhaps the most important part of developing software. They determine how we *think about the problem* we are solving.
Most applications are built by layering one data model on top of another. For each layer the key question is: how is it *represented* in terms of the next-lower layer? For example:
1. Application developers look at the real world and model it in terms of objects or data structures, and APIs that manipulate those data structures.
2. To store those data structures, they are expressed in a general-purpose data model such as JSON or XML documents, tables in a relational database, or a graph model.
3. Database engineers decide on a way of representing that data model in terms of bytes in memory, on disk, or on a network. This representation needs to allow the data to be queried, updated, deleted, and so on.
4. At an even lower level, hardware engineers represent those bytes in terms of electrical currents, pulses of light, and so on.
## Relational Model Vs Document Model
In a relational model, data is organised into *relations* (called *tables* in SQL), where each relation is an unordered collection of *tuples* (*rows* in SQL).
### The Birth of NoSQL
The #NoSQL hashtag is retroactively interpreted as *Not Only SQL*.
There are several driving forces behind the adoption of NoSQL databases:
- A need for greater scalability than relational databases can easily achieve, including very large datasets or very high write throughput.
- A widespread preference for free and open source software over commercial database products.
- Specialised query operations that are not well supported by the relational model.
- Frustration with the restrictiveness of relational schemas, and a desire for a more dynamic and expressive data model.
### The Object-Relational Mismatch
Most application development today is done in object-oriented programming languages, so if data is stored in relational tables, an awkward translation layer is required between the objects in the application code and the database model of tables, rows, and columns. The disconnect between the models is sometimes called an *impedance mismatch*.
Object-relational mapping (ORM) frameworks reduce the amount of boilerplate required for this translation layer, but they cannot completely hide it.
For example, storing a resume in a relational schema can be tricky. The profile as a whole can be identified by a unique identifier, `user_id`. Fields like `first_name` and `last_name` appear exactly once per user, so they can be modeled as columns on the users table. However, most people have held more than one job, so positions form a one-to-many relationship, which can be represented in several ways:
1. In traditional SQL, jobs would be put in a separate table, with foreign keys in the user table.
2. Some databases have added standard support for multi-valued data to be stored in a single row.
3. Encode this information in a string field as JSON.
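Option 1 can be sketched in SQL; the table and column names here are illustrative, not a fixed schema:

```sql
-- Positions live in their own table, linked back to users by a foreign key.
CREATE TABLE users (
  user_id    integer PRIMARY KEY,
  first_name text,
  last_name  text
);

CREATE TABLE positions (
  position_id  integer PRIMARY KEY,
  user_id      integer REFERENCES users (user_id),
  job_title    text,
  organization text
);
```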
<figure>
<img src="/books/designing_data_intensive_applications/media/ddia_0201.jpeg">
<figcaption>Representing a LinkedIn profile using a relational schema.</figcaption>
</figure>
Here is the same data stored as a JSON object:
```json
{
  "user_id": 251,
  "first_name": "Bill",
  "last_name": "Gates",
  "summary": "Co-chair of the Bill & Melinda Gates... Active blogger.",
  "region_id": "us:91",
  "industry_id": 131,
  "photo_url": "/p/7/000/253/05b/308dd6e.jpg",
  "positions": [
    {
      "job_title": "Co-chair",
      "organization": "Bill & Melinda Gates Foundation"
    },
    {
      "job_title": "Co-founder, Chairman",
      "organization": "Microsoft"
    }
  ],
  "education": [
    {
      "school_name": "Harvard University",
      "start": 1973,
      "end": 1975
    },
    {
      "school_name": "Lakeside School, Seattle",
      "start": null,
      "end": null
    }
  ],
  "contact_info": {
    "blog": "http://thegatesnotes.com",
    "twitter": "http://twitter.com/BillGates"
  }
}
```
The JSON model reduces the impedance mismatch between the application code and the storage layer. The lack of a schema is often cited as an advantage.
The JSON representation also has better *locality* than the multi-table schema: to fetch a profile in the relational example, you need to perform multiple queries or a join between two or more tables, whereas in the JSON representation all the relevant data is in one place.
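For comparison, fetching the full profile from the relational schema needs either several queries or a join (table names here are illustrative, following the figure above):

```sql
-- One query per related table...
SELECT * FROM users     WHERE user_id = 251;
SELECT * FROM positions WHERE user_id = 251;
SELECT * FROM education WHERE user_id = 251;

-- ...or a single query with joins
SELECT u.first_name, u.last_name, p.job_title, p.organization
FROM users u
  LEFT JOIN positions p ON p.user_id = u.user_id
WHERE u.user_id = 251;
```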
The one-to-many relationships from the user profile to the user's positions, education, contact information, etc. imply a tree-like structure, and the JSON representation makes this tree structure explicit.
<figure>
<img src="/books/designing_data_intensive_applications/media/ddia_0202.gif">
<figcaption>One-to-many relationships forming a tree structure</figcaption>
</figure>
### Many-to-One and Many-to-Many Relationships
In the previous example, `region_id` and `industry_id` are given as IDs, not as plain-text strings. This is because:
- Consistent style
- Avoids ambiguity (if there are several similarly named cities)
- Ease of updating - name is only stored in one place
- Localisation support
Whether to store an ID or a text string is a question of duplication. When you use an ID, the information that is meaningful to humans is stored in only one place, and everything that refers to it uses the ID.
The advantage of using an ID is that, because it has no meaning to humans, it never needs to change: the ID can remain the same even if the information it identifies changes.
Anything that is meaningful to humans may need to change sometime in the future, and if that information is duplicated, all the redundant copies need to be updated.
Removing such duplication is the key idea behind *normalisation* in databases.
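A sketch of what normalisation looks like for the region name (table and column names are illustrative):

```sql
-- The human-readable name lives in exactly one place.
CREATE TABLE regions (
  region_id   text PRIMARY KEY,  -- e.g. 'us:91'
  region_name text
);

-- Renaming a region touches one row; user rows keep only the region_id.
UPDATE regions
SET region_name = 'Greater Seattle Area'
WHERE region_id = 'us:91';
```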
Even if the initial version of an application fits well in a join-free document model, data has a tendency to become more interconnected as features are added. See below how adding two extra features turns one-to-many relationships into many-to-many.
<figure>
<img src="/books/designing_data_intensive_applications/media/ddia_0204.gif">
<figcaption>Extending resumes with many-to-many relationships</figcaption>
</figure>
### Are Document Databases Repeating History?
While many-to-many relationships and joins are routinely used in relational databases, document databases and NoSQL reopened the debate on how best to represent such relationships in a database.
This debate is much older than NoSQL - going back to the 1970s.
#### The Network Model
In the tree structure of the hierarchical model, every record has exactly one parent; in the network model, a record could have multiple parents.
For example, there could be one record for the `"Greater Seattle Area"` region, and every user who lived in that region could be linked to it. This allowed one-to-many and many-to-many relationships to be modeled.
The links between records in the network model were not foreign keys, but more like pointers in a programming language. The only way of accessing a record was to follow a path from a root record along these chains of links. This was called an *access path*.
In the simplest case, an access path could be like the traversal of a linked list: start at the head of the list and examine one record at a time until you find the one you want. But in a world of many-to-many relationships, several different paths can lead to the same record, and a programmer working with the network model had to keep track of these different access paths in their head.
A **query** was performed by moving a cursor through the database by iterating over lists of records and following access paths. If a record has multiple parents (i.e. multiple incoming pointers from other records), the application code had to keep track of all the various relationships.
#### The Relational Model
What the relational model did, by contrast, was to lay out all the data in the open: a relation (table) is simply a collection of tuples (rows), and that's it. There are no labyrinthine nested structures and no complicated access paths to follow. If you want to query data, you can:
- Read any or all of the rows in a table, selecting those that match your conditions.
- Read a particular row by designating some columns as a key and matching on those.
- Insert a new row into any table without worrying about foreign key relationships to and from other tables.
The *query optimiser* automatically decides which parts of the query to execute in which order, and which indexes to use.
Those choices are effectively the equivalent of the "access path", but with the big difference that the choice is made automatically by the query optimiser, not by the application developer.
#### Comparison to Document Databases
Document databases reverted back to the hierarchical model in one aspect: storing nested records (one-to-many relationships) within their parent record rather than in a separate table.
However, when it comes to representing many-to-one and many-to-many relationships, relational and document databases both refer to the related item by a unique identifier: a foreign key in the relational model, often called a *document reference* in the document model.
#### Relational Versus Document Databases today
The main arguments in favour of the document data model are schema flexibility, better performance due to locality, and that for some applications it is closer to the data structures used by the application.
The relational model counters by providing better support for joins, and many-to-one and many-to-many relationships.
#### Which data model leads to simpler application code?
If data in your application has a document-like structure (i.e. a tree of one-to-many relationships where typically the entire tree is loaded at once), then the document model makes sense.
The relational technique of *shredding* - splitting a document-like structure into multiple tables - can lead to cumbersome schemas and complex code.
If the document model is deeply nested, that can cause problems: you cannot refer to a nested item directly, so you have to say something like "the second item in the list of positions for user 251", which is awkward and can be inefficient to access.
However, if your application does use many-to-many relationships, the document model is less appealing. It's possible to reduce the need for joins by denormalising, but then the application code needs to do additional work to keep the denormalised data consistent. Joins can be emulated in application code by making multiple requests to the database, but that moves complexity into the application, and multiple round trips are usually slower than a single optimised join.
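A sketch of emulating a join in application code. The in-memory `Map`s stand in for database tables, and the data shapes (`org_ids`, etc.) are hypothetical, purely for illustration:

```js
// Hypothetical in-memory "tables"; in a real application each lookup
// below would be a separate request to the database.
const users = new Map([
  [251, { user_id: 251, name: "Bill Gates", org_ids: [1, 2] }],
]);
const organizations = new Map([
  [1, { org_id: 1, name: "Bill & Melinda Gates Foundation" }],
  [2, { org_id: 2, name: "Microsoft" }],
]);

// The "join" is performed by the application: one lookup for the user,
// then one lookup per referenced organization.
function getUserWithOrganizations(userId) {
  const user = users.get(userId);
  const orgs = user.org_ids.map((id) => organizations.get(id));
  return { ...user, organizations: orgs };
}
```

The database no longer sees the relationship between the two lookups, so it cannot optimise them the way a query planner optimises a join.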
#### Schema Flexibility in the Document Model
No schema means that arbitrary keys and values can be added to a document, and when reading, clients have no guarantees as to what fields the documents may contain.
Document databases are sometimes called *schemaless*, but that's misleading, as the code that reads the data usually assumes some kind of structure. A more accurate term is *schema-on-read*. In contrast, with *schema-on-write*, the schema is enforced by the database at write time.
For example, say you are currently storing each user's full name in one field, but now you want to store the first and last names separately. In a document database, you would just start writing new documents with the new fields and handle old documents at read time:
```js
if (user && user.name && !user.first_name) {
  // Documents written before Dec 8, 2013 don't have first_name
  user.first_name = user.name.split(" ")[0];
}
```
On the other hand, with a *schema-on-write* approach, you would perform a migration:
```sql
ALTER TABLE users
ADD COLUMN first_name text;
UPDATE users
SET first_name = split_part(name, ' ', 1);
```
Altering the table is relatively quick, but running the `UPDATE` against every row is time-consuming on a large table.
The schema-on-read approach is advantageous if the items in the collection don't all have the same structure.
#### Data Locality for Queries
A document is usually stored as a single continuous string, encoded as JSON or a binary variant of it (such as MongoDB's BSON). If your application often needs access to the entire document (e.g. to render it on a web page), there is a performance advantage to this *storage locality*. If the data is split across multiple tables, multiple index lookups are required to retrieve it all.
The locality advantage only applies if you need large parts of the document at once: the database typically has to load the entire document even if you access only a small portion of it. On updates, the entire document usually needs to be rewritten; only modifications that don't change the encoded size can be performed in place, which is rare.
For these reasons it is generally recommended to keep documents fairly small and to avoid writes that grow a document.
Some relational databases also offer this kind of locality: Oracle's *multi-table index cluster tables* allow rows of related tables to be interleaved with a parent table, and the *column-family* concept found in Cassandra serves a similar purpose.
#### Convergence of document and relational databases
Most relational databases have long supported XML documents, and many now also support JSON.
On the document side, some document databases now support relational-like joins in their query languages, and some MongoDB drivers automatically resolve database references (effectively performing a client-side join).
It seems that relational and document databases are becoming more similar over time, and that is a good thing: the data models complement each other. If a database is able to handle document-like data and also perform relational queries on it, applications can use the combination of features that best fits their needs.
## Query Languages for Data
**SQL** is a *declarative* query language.
*Imperative* example:
```js
function getSharks() {
  var sharks = [];
  for (var i = 0; i < animals.length; i++) {
    if (animals[i].family === "Sharks") {
      sharks.push(animals[i]);
    }
  }
  return sharks;
}
```
In relational algebra, you would instead write:
$$
sharks = \sigma_{family = ``Sharks''}(animals)
$$
Where $\sigma$ is the selection operator, returning only those animals that match the condition $family = ``Sharks''$. SQL follows this closely:
```SQL
SELECT * FROM animals WHERE family = 'Sharks';
```
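The same declarative style is also available in ordinary application code. `Array.prototype.filter` states *which* elements you want, not how to loop over them (the data here is illustrative):

```js
const animals = [
  { name: "Great white", family: "Sharks" },
  { name: "Blue whale", family: "Whales" },
  { name: "Hammerhead", family: "Sharks" },
];

// Declarative: specify the condition; the runtime does the iteration.
const sharks = animals.filter((animal) => animal.family === "Sharks");
```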
An imperative language tells the computer to perform certain operations in a certain order.
In a declarative query language, you just specify the pattern of the data you want. e.g. what conditions should be met, how the data should be transformed - but not *how* to achieve that goal. The declarative query language hides the implementation details of the database engine. This allows the database engine to be optimised and improved without the need to change the query language itself.
Declarative languages are very easy to parallelise - they specify the pattern of results not the algorithm to be used.
#### Declarative Queries on the Web
```html
<ul>
  <li class="selected"><p>Sharks</p></li>
  <li><p>Whales</p></li>
  <li><p>Fish</p></li>
</ul>
```
```css
li.selected > p {
  background-color: blue;
}
```
Here the CSS selector `li.selected > p` declares the pattern of elements to colour blue: all `<p>` elements whose direct parent is a `<li>` element with a class of `selected`.
Doing this with an imperative approach is a nightmare.
```js
var liElements = document.getElementsByTagName("li");
for (var i = 0; i < liElements.length; i++) {
  if (liElements[i].className === "selected") {
    var children = liElements[i].childNodes;
    for (var j = 0; j < children.length; j++) {
      var child = children[j];
      if (child.nodeName === "P") {
        child.setAttribute("style", "background-color: blue");
      }
    }
  }
}
```
- If the *selected* class is removed because the user clicks onto a different page, the colour won't be removed - even if the code is re-run, so the item will remain highlighted until refresh. With CSS the browser automatically detects when the rule no longer applies.
- If you want to take advantage of a new API, such as `document.getElementsByClassName()`, the code will have to be entirely re-written. On the other hand browsers can improve the performance of CSS without breaking compatibility.
#### MapReduce Querying
*MapReduce* is a programming model for processing large amounts of data in bulk across many machines. A limited form of it is supported by MongoDB as a mechanism for performing read-only queries across many documents.
MapReduce is neither declarative nor imperative but somewhere in between.
Example in PostgreSQL
```SQL
SELECT date_trunc('month', observation_timestamp) AS observation_month,
       sum(num_animals) AS total_animals
FROM observations
WHERE family = 'Sharks'
GROUP BY observation_month;
```
Example in MongoDB using MapReduce
```js
db.observations.mapReduce(
  function map() {
    var year = this.observationTimestamp.getFullYear();
    var month = this.observationTimestamp.getMonth() + 1;
    emit(year + "-" + month, this.numAnimals);
  },
  function reduce(key, values) {
    return Array.sum(values);
  },
  {
    query: { family: "Sharks" },
    out: "monthlySharkReport"
  }
);
```
The `map` function is called once for each matching document, emitting a key-value pair such as `("2026-1", 3)` or `("2026-1", 4)`. The emitted values are grouped by key, and the `reduce` function is then called once per key, e.g. `reduce("2026-1", [3, 4])`, returning `7`.
The `map` and `reduce` functions must be pure, with no side effects (e.g. no additional database calls). This allows them to be run anywhere, in any order, and re-run on failure.
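Because the functions are pure, the whole flow can be simulated in a few lines of plain JavaScript. This is only a sketch of the idea, not MongoDB's implementation, and the data shapes are illustrative:

```js
// Collect each document's mapped (key, value) pair into per-key groups,
// then call reduce once per key - the essence of the MapReduce model.
function mapReduce(docs, map, reduce) {
  const groups = new Map();
  for (const doc of docs) {
    const [key, value] = map(doc);
    if (!groups.has(key)) groups.set(key, []);
    groups.get(key).push(value);
  }
  const result = {};
  for (const [key, values] of groups) {
    result[key] = reduce(key, values);
  }
  return result;
}

const observations = [
  { month: "2026-01", numAnimals: 3 },
  { month: "2026-01", numAnimals: 4 },
];
const report = mapReduce(
  observations,
  (doc) => [doc.month, doc.numAnimals],
  (key, values) => values.reduce((sum, v) => sum + v, 0)
);
// report is { "2026-01": 7 }
```

Because `map` and `reduce` never touch anything outside their arguments, the groups could be processed on different machines and in any order with the same result.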
In MongoDB, MapReduce was later superseded by the *aggregation pipeline*:
```js
db.observations.aggregate([
  { $match: { family: "Sharks" } },
  { $group: {
    _id: {
      year:  { $year:  "$observationTimestamp" },
      month: { $month: "$observationTimestamp" }
    },
    totalAnimals: { $sum: "$numAnimals" }
  } }
]);
```
The aggregation pipeline language is similar in expressiveness to a subset of SQL, but it uses a JSON-based syntax rather than SQL's English-sentence style.