API: Application Programming Interface. One of the main reasons is the idea of statefulness. Consider the following API method names: each of these RPCs is fairly descriptive, but we have to memorise each one, and each is subtly different. We need to standardise by providing a standard set of building blocks: method + resource. Resource-oriented APIs are much easier for users to learn, understand and remember. What is the purpose of building an API in the first place? Patterns should be applied to both the API surface definition and the behaviour. Users very rarely learn an entire API; they learn the parts they need and make assumptions when they need to make additions - e.g. if a query parameter is called text in one endpoint, it should not be called string or query in another. APIs that rely on repeated, predictable patterns are easier and faster to learn, and therefore better. A software design pattern is a particular design that can be applied over and over to many similar software problems, with only minor adjustments. It is not a pre-built library but a blueprint for solving similarly structured problems. Pagination pattern: a way of retrieving a long list of items in smaller, more manageable chunks. The pattern relies on extra fields on both the request and response.
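A minimal sketch of those extra fields (the names `max_page_size`, `page_token` and `next_page_token` are illustrative assumptions, not mandated by any standard; the token here is just a stringified offset, whereas real APIs use opaque cursors):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ListItemsRequest:
    max_page_size: int = 10           # cap on items returned per call
    page_token: Optional[str] = None  # opaque cursor from a previous response

@dataclass
class ListItemsResponse:
    results: List[str] = field(default_factory=list)
    next_page_token: Optional[str] = None  # None signals the final page

def list_items(all_items, request):
    """Serve one page of results; the token encodes where to resume."""
    start = int(request.page_token or 0)
    end = start + request.max_page_size
    page = all_items[start:end]
    token = str(end) if end < len(all_items) else None
    return ListItemsResponse(results=page, next_page_token=token)
```

A client simply loops, feeding each `next_page_token` back into the next request, until the token comes back empty.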
Moving from a non-paginated to a paginated response pattern: Q. What happens if we don't start with the pattern? In every software system we build, and every API we design or use, there are names that will live far longer than we ever intend them to. It is important to choose great names. When designing and building an API, the names we use will be seen by, and interacted with by, all users of the API. It is critical that a name clearly convey the thing it is naming. Language being inherently flexible and ambiguous can be both a good thing and a bad thing. Use American English. REST standard verbs should use the imperative mood: they are all commands or orders. This means we need to keep the context of our API in mind. A name can become clearer when using a richer data type. Resource layout is the arrangement of resources in our API, the fields that define those resources, and how those resources relate to one another through those fields. In other words, resource layout is the entity (resource) relationship model for a particular design of an API. When building an API, after we've chosen the list of things or resources that matter to us, the next step is to decide how these resources relate to one another. The simplest way for two resources to relate to one another is by a simple reference. Reference relationships should be purposeful and fundamental to the desired behaviour: any reference relationship should be something important for the API to accomplish its primary goal. Optimise for the common case - without compromising the feasibility of the advanced case. In a hierarchical relationship there is an implied parent-child structure; the biggest differences with this type of relationship are the cascading effect of actions and the inheritance of behaviours and properties from parent to child. It can often be tempting to create resources for even the tiniest concept you might want to model.
Rule of thumb: if you don't need to interact with one of your resources independently of a resource it's associated with, then it can probably be a data type. Overly deep hierarchies can be confusing and difficult to manage. Page 63, 4.3.3: in-line everything. Many applications today are data-intensive, as opposed to compute-intensive. Raw CPU power is rarely a limiting factor for these applications. A data-intensive application is built from the following building blocks. A database and a message queue are quite similar: they both store data for some time, though they have very different access patterns, which means different performance characteristics and thus very different implementations. The boundaries between these implementations are becoming slightly blurred: there are datastores that are also used as message queues (Redis), and there are message queues with database-like durability guarantees (Apache Kafka). When you combine several tools in order to provide a service, the service's interface or application programming interface (API) usually hides those implementation details from clients. Things that can go wrong are called faults. Systems that anticipate faults and can cope with them are called fault-tolerant or resilient. Fault tolerance does not mean making a system tolerant of all faults, but only of certain types of faults. NOTE: a fault is not the same as a failure. It is impossible to reduce the probability of a fault to zero; therefore it is best to design fault-tolerance mechanisms that prevent faults from causing failures. Hard disks are reported as having a mean time to failure (MTTF) of about 10 to 50 years, so on a storage cluster with 10,000 disks, we should expect on average one disk to die per day. A good countermeasure for this is redundancy: disks may be set up in RAID configurations, servers can have dual power supplies, etc. When a component dies, the redundant component can take its place whilst the broken one is being replaced.
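The one-disk-per-day estimate is simple arithmetic (taking ~30 years as a representative MTTF within the quoted 10-50 year range):

```python
disks = 10_000
mttf_years = 30                    # assumed midpoint of the 10-50 year range
mttf_days = mttf_years * 365
failures_per_day = disks / mttf_days
# comes out at roughly 0.9, i.e. about one disk failure per day
```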
This approach cannot completely prevent hardware problems from causing failures, but it is well understood and can often keep a machine running uninterrupted for years. However, as data volumes and applications' computing demands have increased, more applications have begun using larger numbers of machines, which proportionally increases the rate of hardware faults. Moreover, on some cloud platforms such as AWS it is fairly common for virtual machine instances to become unavailable without warning, as the platforms are designed to prioritise flexibility and elasticity over single-machine reliability. Hence there is a move toward systems that can tolerate the loss of entire machines, by using software fault-tolerance techniques in preference to, or in addition to, hardware redundancy. Such systems also have operational advantages: a single-server system requires planned downtime, whereas a system that can tolerate machine failure can be patched one node at a time with no downtime of the entire system (a rolling upgrade). Hardware faults are normally random and independent from each other. This is not the case for software faults. Software faults can lie dormant for a long time until they are triggered by an unusual set of circumstances. Though there is no quick solution, there are lots of small ones: Humans design and build software systems, and the operators are also human. Humans are unreliable. 10%-25% of outages are caused by hardware faults; the rest are human-related faults. Even if a system is working reliably today, that doesn't mean it will necessarily work reliably in the future. Scalability is the term we use to describe a system's ability to cope with increased load. Load can be described with a few numbers which we call load parameters. These parameters depend on the architecture of the system. It might be: Consider Twitter as an example: it has two main operations, post tweet and home timeline. There are two ways of implementing these.
Approach 1: posting a tweet simply inserts the new tweet into a global collection of tweets. When a user requests their home timeline, look up all the people they follow, find all the tweets for each of those users, and merge them (sorted by time), e.g. with a join in a relational database.
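A toy sketch of approach 1 (merge on read) against the fan-out-on-write alternative, with plain dicts standing in for the global tweet store and the per-user timeline caches (names invented for this sketch):

```python
from collections import defaultdict

follows = {"alice": ["bob", "carol"]}   # who each user follows
tweets = defaultdict(list)              # approach 1: one global store
timelines = defaultdict(list)           # approach 2: per-user timeline caches

def post_tweet_v1(user, text):
    tweets[user].append(text)           # cheap write: one insert

def home_timeline_v1(user):
    merged = []                         # expensive read: merge at query time
    for followee in follows.get(user, []):
        merged.extend(tweets[followee])
    return merged

def post_tweet_v2(user, text, followers):
    for f in followers:                 # expensive write: fan out to every follower
        timelines[f].append(text)

def home_timeline_v2(user):
    return timelines[user]              # cheap read: the timeline is precomputed
```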
The first version of Twitter used approach 1, but the systems struggled to keep up with the load of home timeline queries, so the company switched to approach 2 (fanning each new tweet out into a cached home timeline for every follower). The average rate of published tweets is almost two orders of magnitude lower than the rate of home timeline reads, so in this case it's preferable to do more work at write time and less at read time. However, the downside of approach 2 is that posting a tweet now requires a lot of extra work: on average a tweet is delivered to about 75 followers, so 4.6k tweets/second become 345k writes/second to the home timeline caches. Now consider that some accounts have over 30 million followers. Twitter uses a hybrid of both solutions: for users with smaller follower counts approach 2 is used, but for celebrity accounts approach 1 is used, and the two timelines are merged together. Once you have described the load on your system, you can investigate what happens when load increases. LATENCY AND RESPONSE TIME Latency and response time are often used synonymously, but they are not the same.
Response time: what the client sees: the sum of service time, network delays and queuing delays.
Latency: the duration that a request is waiting to be handled - during which it is latent, awaiting service. Most requests are reasonably fast, but there are occasional outliers that take much longer. Perhaps these requests are intrinsically more expensive - however, even the same request will see variation for all manner of reasons. Reporting the average response time of a service is common, but it is not a very good metric if you want to know your "typical" response time - it doesn't tell you how many users actually experienced that delay. Percentiles are a better metric. Amazon describes response time requirements for internal services in terms of p999, even though it only affects 1 in 1,000 requests. This is because the customers with the slowest requests are often those who have the most data in their accounts (valuable customers). Queuing delays often account for a large part of the response time at high percentiles. It only takes a small number of slow requests to hold up the processing of subsequent requests - known as head-of-line blocking. Because of this, it is important to measure response times on the client side. Vertical scaling: moving to a more powerful machine. Horizontal scaling: distributing the load across multiple smaller machines. Some systems are elastic, meaning that they can automatically add computing resources when they detect a load increase. Elastic systems are useful if load is unpredictable, but manually/periodically scaled systems are simpler and have fewer operational surprises. While distributing stateless services across multiple machines is fairly straightforward, taking stateful data systems from a single node to a distributed setup can introduce additional complexity. Common wisdom (until recently) was to keep your database on a single node and scale vertically until cost dictated horizontal scaling.
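The point about averages versus percentiles made above is easy to see numerically; a small sketch using the nearest-rank method (one of several percentile conventions):

```python
def percentile(samples, p):
    """Nearest-rank percentile: p is in the range [0, 100]."""
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

# Made-up response times in ms, with one slow outlier at the end.
times_ms = [12, 14, 15, 15, 16, 18, 20, 25, 90, 450]
mean = sum(times_ms) / len(times_ms)   # 67.5 ms: dragged up by the outlier
p50 = percentile(times_ms, 50)         # 16 ms: what a typical user actually saw
p99 = percentile(times_ms, 99)         # 450 ms: the tail the slowest users saw
```

The mean suggests every user waited ~68 ms, which nobody actually experienced; the median and p99 describe the real distribution.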
The majority of the cost of software is not in initial development, but in ongoing maintenance. Operability: make it easy for operations teams to keep the system running smoothly. Simplicity: make it easy for new engineers to understand the system, by removing as much complexity as possible from the system. Evolvability: make it easy for engineers to make changes to the system in the future, adapting it for unanticipated use cases as requirements change (also known as extensibility, modifiability or plasticity). "Good operations can work around the limitations of bad software, but good software cannot run reliably with bad operations." Operations teams are responsible for the following: Good operability means making routine tasks easy - allowing the operations team to focus their efforts on high-value activities. Data systems can do various things to make routine tasks easy: In complex software there is a greater risk of introducing bugs when making a change: when the system is harder for developers to understand and reason about, hidden assumptions, unintended consequences and unexpected interactions are more easily overlooked. Complexity can be accidental: complexity is accidental if it is not inherent in the problem the software is trying to solve, but arises only from the implementation. One of the best tools for removing accidental complexity is abstraction. The ease with which you can modify a data system, and adapt it to changing requirements, is closely linked to its simplicity and its abstractions: simple and easy-to-understand systems are usually easier to modify than complex ones. Evolvability can be thought of as agility at the data-system level. Data models are perhaps the most important part of developing software: they define how we think about the problem we are solving. Most applications are built by layering one data model on top of another. For each layer, the key question is: how is it represented in terms of the next-lower layer?
For example, in the relational model, data is organised into relations (called tables in SQL), where each relation is an unordered collection of tuples (rows in SQL). #NoSQL is retroactively interpreted as Not Only SQL. There are several driving forces behind the adoption of NoSQL databases: Most application development today is done in OOP languages, meaning that if data is stored in relational tables, an awkward translation layer is required between the objects in application code and the database model of tables, rows and columns. The disconnect between the models is sometimes called an impedance mismatch. Object-relational mapping (ORM) frameworks reduce the amount of boilerplate required for this translation layer, but they cannot completely hide it. For example, storing a résumé in a relational schema can be tricky. The profile as a whole can be identified by a unique identifier. Here is the same data stored as a JSON object: The JSON model reduces the impedance mismatch between the application code and the storage layer. The lack of a schema is often cited as an advantage. The JSON representation has better locality than the multi-table schema: if you want to fetch a profile in the relational example, you need to perform multiple queries or a join between two or more tables, whereas in the JSON format all the relevant data is in one place. The one-to-many relationships from the user profile to the user's positions, education, contact information etc. imply a tree-like structure, and the JSON representation makes this tree structure explicit. In the previous example, whether you store an ID or a text string is a question of duplication. When you use an ID, the information that is meaningful to humans is stored in only one place, and everything that refers to it uses an ID. The advantage of using an ID is that, because it has no meaning to humans, it never needs to change: the ID can remain the same even if the information it identifies changes.
Anything that is meaningful to humans may need to change sometime in the future - and if that information is duplicated, all the redundant copies need to be updated. Removing such duplication is the key idea behind normalisation in databases. Even if the initial version of an application fits well in a join-free document model, data has a tendency to become more interconnected as features are added. See below how adding two extra features turns one-to-many relationships into many-to-many. While many-to-many relationships and joins are routinely used in relational databases, document databases and NoSQL reopened the debate on how best to represent such relationships in a database. This debate is much older than NoSQL - it goes back to the 1970s. In the tree structure of the hierarchical model, every record has exactly one parent; in the network model, a record could have multiple parents. For example, there could be one record for the The links between records in the network model were not foreign keys, but more like pointers in a programming language. The only way of accessing a record was to follow a path from a root record along these chains of links. This was called an access path. In the simplest case, an access path could be like the traversal of a linked list: start at the head of the list and look at one record at a time until you find the one you want. But in a world of many-to-many relationships, several different paths can lead to the same record, and a programmer working with the network model had to keep track of these different access paths in their head. A query was performed by moving a cursor through the database, iterating over lists of records and following access paths. If a record had multiple parents (i.e. multiple incoming pointers from other records), the application code had to keep track of all the various relationships.
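A toy analogy of the two access styles (plain Python, not CODASYL syntax; record names invented): the network-model code walks pointer-like links one record at a time, while the relational-style code simply filters a collection of rows.

```python
# Network-model style: records hold direct links; a query follows an
# access path from a root record, one record at a time.
class Record:
    def __init__(self, name, links=()):
        self.name = name
        self.links = list(links)  # pointer-like references to other records

def find_via_access_path(record, target):
    """Depth-first walk along the chains of links (assumes no cycles)."""
    if record.name == target:
        return record
    for nxt in record.links:
        found = find_via_access_path(nxt, target)
        if found is not None:
            return found
    return None

# Relational style: all the data is "in the open" as rows; just state
# the condition and let the engine worry about how to find matches.
rows = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]

def find_relational(rows, target):
    return next((r for r in rows if r["name"] == target), None)
```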
What the relational model did, by contrast, was to lay out all the data in the open: a relation (table) is simply a collection of tuples (rows), and that's it. There are no labyrinthine nested structures and no complicated access paths to follow if you want to query the data. The query optimiser automatically decides which parts of the query to execute in which order, and which indexes to use. Those choices are effectively the equivalent of the access path, but the big difference is that they are made by the query optimiser, not the application developer. Document databases reverted back to the hierarchical model in one aspect: storing nested records (one-to-many relationships) within their parent record rather than in a separate table. However, when it comes to representing many-to-one and many-to-many relationships, relational and document databases take the same approach: the related item is referenced by a unique identifier (a foreign key in the relational model, a document reference in the document model). The main arguments in favour of the document data model are schema flexibility, better performance due to locality, and that for some applications it is closer to the data structures used by the application. The relational model counters by providing better support for joins, and for many-to-one and many-to-many relationships. If the data in your application has a document-like structure (i.e. a tree of one-to-many relationships where typically the entire tree is loaded at once), then the document model makes sense. The relational technique of shredding - splitting a document-like structure into multiple tables - can lead to cumbersome schemas and complex code. However, if a document model is deeply nested it can cause problems, as nested items cannot be queried directly: referring to "the second item in the list of employers for user 251", for example, is inefficient. And if your application does use many-to-many relationships, the document model is less appealing. It's possible to reduce the need for joins by denormalising, but then the application code needs to do additional work to keep the denormalised data consistent.
Joins can be emulated in application code by making multiple requests to the database, but that moves complexity into the application code, and multiple calls are usually slower than an optimised JOIN. No schema means that arbitrary keys and values can be added to a document, and, when reading, clients have no guarantees as to what fields the documents may contain. Document databases are sometimes called schemaless, but that's misleading, as the code that reads the data usually assumes some kind of structure. A more accurate term is schema-on-read; in contrast, schema-on-write is enforced by the database on writes. For example, say you are currently storing users' full names in one field, but now you want to store first and last names separately. In a document database, you would simply start writing new documents with the new fields and handle the old documents in application code when they are read. On the other hand, a "statically typed" database takes the schema-on-write approach: altering the table is relatively quick, but updating every row in the table is time-consuming. The schema-on-read approach is advantageous if the items in the collection don't all have the same structure. A document is usually stored as a single contiguous string, encoded as JSON or in binary (MongoDB's BSON). If your application often needs to access the entire document (e.g. to render it on a web page), there is a performance advantage to this storage locality: if the data were split across multiple tables, multiple index lookups would be required to retrieve it all. The database typically needs to load the entire document, even if you access only a small portion of it, and on updates the entire document usually needs to be rewritten - only modifications that don't change the encoded size can be performed in place (rare). For this reason it is recommended to keep documents small and to avoid frequent updates. Some relational databases also offer this locality: Oracle's multi-table index cluster tables allow rows to be interleaved within a parent table, and there is the column-family concept in Cassandra.
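The name-splitting migration mentioned above, sketched schema-on-read style: application code tolerates both old and new document shapes at read time (field names are assumptions for this sketch).

```python
def read_first_name(user_doc):
    """Schema-on-read: handle documents written before the schema change."""
    if "first_name" in user_doc:          # new-style document
        return user_doc["first_name"]
    # Old-style document: derive the first name from the combined field.
    return user_doc["name"].split(" ")[0]

# The schema-on-write equivalent is a one-off migration, roughly
# (PostgreSQL syntax; the table update is the slow part on big tables):
#   ALTER TABLE users ADD COLUMN first_name text;
#   UPDATE users SET first_name = split_part(name, ' ', 1);
```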
Most relational databases have long supported XML, and many now also support JSON. Some document databases now support relational-like joins in their query languages, and some MongoDB drivers automatically resolve database references. It seems that relational and document databases are becoming more similar over time, and that is a good thing: the data models complement each other. If a database is able to handle document-like data and also perform relational queries on it, applications can use the combination of features that best fits their needs. SQL is a declarative query language. Imperative example:
Where \(\sigma\) is the selection operator, returning only those animals that match the condition \(family = \text{``Sharks''}\). SQL follows this relational algebra closely. An imperative language tells the computer to perform certain operations in a certain order. In a declarative query language, you just specify the pattern of the data you want - e.g. what conditions should be met and how the data should be transformed - but not how to achieve that goal. The declarative query language hides the implementation details of the database engine, which allows the engine to be optimised and improved without any changes to the query language itself. Declarative languages are also much easier to parallelise: they specify the pattern of the results, not the algorithm to be used. Here, a CSS selector declares the pattern of elements a style applies to; doing this with an imperative approach is a nightmare.
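The contrast rendered in Python, using the sharks example: the imperative version spells out the loop and its order, while the declarative-ish version states only the condition (in SQL this would be `SELECT * FROM animals WHERE family = 'Sharks';`).

```python
animals = [
    {"name": "Great white", "family": "Sharks"},
    {"name": "Dolphin", "family": "Cetaceans"},
    {"name": "Hammerhead", "family": "Sharks"},
]

# Imperative: say exactly how to walk the data and in what order.
def get_sharks_imperative(animals):
    sharks = []
    for animal in animals:
        if animal["family"] == "Sharks":
            sharks.append(animal["name"])
    return sharks

# Declarative-ish: state the condition; the iteration strategy is not ours.
def get_sharks_declarative(animals):
    return [a["name"] for a in animals if a["family"] == "Sharks"]
```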
MapReduce is a programming model for processing large amounts of data in bulk across many machines. It is supported by MongoDB as a mechanism for performing read-only queries across many documents. MapReduce is neither fully declarative nor fully imperative, but somewhere in between. Example in PostgreSQL:
Example in MongoDB using MapReduce:
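The MongoDB snippet is not reproduced in these notes; as a stand-in, here is the shape of the pattern in plain Python (counting shark sightings per month, with invented sample data): map emits a key-value pair per document, reduce folds all the values emitted for one key, and both are pure functions.

```python
from collections import defaultdict

observations = [
    {"date": "1995-12-25", "family": "Sharks", "num_animals": 3},
    {"date": "1995-12-12", "family": "Sharks", "num_animals": 4},
]

def map_fn(doc):
    # Emit (month, count); pure: no side effects, no extra DB calls.
    return (doc["date"][:7], doc["num_animals"])

def reduce_fn(values):
    # Fold all values emitted for one key; also pure.
    return sum(values)

def map_reduce(docs):
    grouped = defaultdict(list)
    for key, value in (map_fn(doc) for doc in docs):
        grouped[key].append(value)
    return {key: reduce_fn(vals) for key, vals in grouped.items()}
```

Because map_fn and reduce_fn are pure, the framework is free to run them on any machine, in any order, and to re-run them on failure.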
The map and reduce functions must be pure, with no side effects (no additional database calls). This allows them to be run anywhere, in any order, and re-run on failure. In MongoDB, MapReduce was superseded by the aggregation pipeline. The aggregation pipeline language is similar in expressiveness to a subset of SQL, but it uses a JSON-based syntax rather than SQL's English-sentence style. There have been many developments in distributed systems, databases and the applications built on top of them, with various driving forces: An application is data-intensive if data is its primary challenge, as opposed to compute-intensive, where the CPU is the bottleneck.
The pattern relies on extra fields on both the request and response. Moving from a non-paginated to paginated response pattern: Q. What happens if we don't start with the pattern? In every software system we build, and every API we design or use - there are names that will live far longer than we ever intend them to. It is important to choose great names. When designing and building an API, the names we use will be seen by & interacted with all users of the API. It is critical that a name clearly convey the thing is it naming. Language being inherently flexible and ambiguous can be a good thing and a bad thing. Use American English. REST standard verbs should use the imperative mood. They are all commands or orders. This means we need to keep the context of our API in mind. A name can become more clear when using a richer data type. The arrangement of resources in our API, the fields that define those resources, and how those resources relate to one another through those fields. In other words, resource layout is the entity (resource) relationship model for a particular design of an API. The simplest way or two resources to relate to one another is by a simple reference. In this case, there is an implied hierarchy of When building an API, after we've chosen the list of things or resources that matter to us, the next step is to decide how these resources relate to one another. Reference relationships should be purposeful and fundamental to the desired behaviour. Any reference relationship should be something important for the API to accomplish its primary goal. Optimise for the common case - without compromising the feasibility of the advanced case. The biggest differences with this type of relationship are the cascading effect of actions and the inheritance of behaviours and properties from parent to child. It can often be tempting to create resources for even the tiniest concept you might want to model. 
Rule of thumb: If you don't need to interact with one of your resources independent of a resource it's associated with, then it can probably be a data type. Overly deep hierarchies can be confusing and difficult to manage. Page 63 4.3.3 in-line everything Many applications today are data-intensive, as opposed to compute-intensive. Raw CPU power is rarely a limiting factor for these applications. A data-intensive application is built from the following building blocks Database and a message queue are quite similar. They both store data for some time - though they have very different access patterns which means different performance characteristics and thus very different implementations. Boundaries between these implementations are becoming slightly blurred. There are data-stores that are also used as message queues (Redis) and there are messages queues with database-like durability guarantees (Apache Kafka). When you combine several tools in order to provide a service, the service's interface or application programming interface (API) usually hides those implementation details from clients. Things that ca go wrong are called faults. Systems that anticipate faults and can cope with them are called fault-tolerant or resilient. Fault tolerance does not mean making a system tolerant of all faults, but only tolerating certain types of faults. NOTE: A fault is not the same as a failure. It is impossible to to reduce the probability of a fault to zero; therefore it is best to design fault-tolerance mechanisms that prevent faults from causing failures. Hard disks are reported as having a mean time to failing (MTTF) of about 10 to 50 years. So on a storage cluster with 10,000 disks, we should expect on average one disk to die per day. A good combatant for this is redundancy. Disks may be set up in RAID configurations, servers can have dual power supplies etc. When a component dies, the redundant component can take it's place whilst the broken one is being replaced. 
This approach cannot complete prevent hardware problems from causing failures, but it is well understood and can often keep a machine running uninterrupted for years. However, as data volumes and applications' computing demands have increased, more applications have begun using larger number of machines, which proportionally increase the rate of hardware faults. Moreover, in some cloud platforms such as AWS it is fairly common for virtual machine instances to become unavailable without warning as the platforms are designed to prioritise flexibility and elasticity over single-machine reliability. Hence there is a move toward systems that can tolerate the loss of entire machines, by using software fault-tolerance techniques in preference or in addition to hardware redundancy. Such systems also have operations advantages: a single-server system requires planned downtime, whereas a system that can tolerate machine failure can be patched one node at a time with no downtime of the entire system (rolling upgrade). Hardware faults are normally random and independent form each other. This is not the case for software faults. Software fault can lie dormant for a long time until they are triggered by am unusual set of circumstances. Though there is no quick solution, there are lots of small ones: Humans design and build software systems, and the operators are also human. Humans are unreliable. 10%-25% of outages are caused by hardware faults, the rest are human related faults. Even if a system is working reliably today, that doesn't mean it will necessarily work reliably in the future. Scalability is the term we used to describe a system's ability to cope with increased load. Load can be described with a few numbers which we call load parameters. These parameters depend on the architecture of the system. It might be: Consider Twitter as an example, they have two main operations, post tweet and home timeline. There are two ways of implementing these. 
Approach 1: Posting a tweet simply inserts the new tweet into a global collection of tweets. When user requests their home timeline, look up all the people they follow, find all the tweets for each of those users and merge them (sorting on time). In a relational database The first version of Twitter used approach 1, but the systems struggled to keep up with the load of home timeline queries, so the company switched to approach 2. The average rate of published tweets is almost two orders of magnitude lower than the rate of home timeline reads, so in this case its preferable to do more work at write time and less at read time. However the downside of approach 2 is posting a tweet now requires a lot of extra work. On average a tweet is delivered to about 75 followers, so 4.6K tweets/second became 345k writes/second to home timeline caches. However now consider some accounts have 30 million followers. Twitter uses a hybrid of both solutions. For users with smaller follow counts approach 2 is used, however for celebrity accounts approach 1 is used and these two timelines are merged together. Once you have described the load on your system, you can investigate what happens when load increases. LATENCY AND RESPONSE TIME Latency and response time are often used synonymously, but they are not the same. Response time: Is what the client sees: the sum of service time, network delays and queuing delays. Latency: Is the duration that a request is waiting to be handled - during which it is latent, awaiting service. Most requests are reasonably fast, but there are occasional outliers that take much longer. Perhaps these requests are intrinsically more expensive - however even the same request will see variations due to all matter of reasons. Average response time of a service is common however it is not a very good metric if you want to know your \"typical\" response time - it doesn't tell you how many users actually experienced that delay. Percentiles are a better metric. 
Amazon describes response time requirements for internal services in terms of the 99.9th percentile (p999), even though it only affects 1 in 1,000 requests. This is because the customers with the slowest requests are often those who have the most data in their accounts - the most valuable customers. Queuing delays often account for a large part of the response time at high percentiles. It only takes a small number of slow requests to hold up the processing of subsequent requests - known as head-of-line blocking. Due to this, it is important to measure response times on the client side.

Vertical Scaling: Moving to a more powerful machine.
Horizontal Scaling: Distributing the load across multiple smaller machines.

Some systems are elastic, meaning that they can automatically add computing resources when they detect a load increase. Elastic systems are useful if load is unpredictable, but manually scaled systems are simpler and have fewer operational surprises. While distributing stateless services across multiple machines is fairly straightforward, taking stateful data systems from a single node to a distributed setup can introduce a lot of additional complexity. Common wisdom (until recently) was to keep your database on a single node and scale vertically until cost dictated horizontal scaling.

The majority of the cost of software is not in its initial development, but in its ongoing maintenance:

Operability: Make it easy for operations teams to keep the system running smoothly.
Simplicity: Make it easy for new engineers to understand the system, by removing as much complexity as possible from the system.
Evolvability: Make it easy for engineers to make changes to the system in the future, adapting it for unanticipated use cases as requirements change.
(Also known as extensibility, modifiability or plasticity.)

"Good operations can work around the limitations of bad software, but good software cannot run reliably with bad operations."

Operations teams are responsible for the following: good operability means making routine tasks easy, allowing the operations team to focus their efforts on high-value activities. Data systems can do various things to make routine tasks easy.

In complex software, there is a greater risk of introducing bugs when making a change: when the system is harder for developers to understand and reason about, hidden assumptions, unintended consequences, and unexpected interactions are more easily overlooked. Complexity can be accidental: complexity is accidental if it is not inherent in the problem the software is trying to solve, but arises only from the implementation. One of the best tools for removing accidental complexity is abstraction. The ease with which you can modify a data system, and adapt it to changing requirements, is closely linked to its simplicity and its abstractions: simple and easy-to-understand systems are usually easier to modify than complex ones. Evolvability can be thought of as agility at the data-system level.

Data models are perhaps the most important part of developing software: they define how we think about the problem we are solving. Most applications are built by layering one data model on top of another. For each layer, the key question is: how is it represented in terms of the next-lower layer? For example, in the relational model, data is organised into relations (called tables in SQL), where each relation is an unordered collection of tuples (rows in SQL). #NoSQL is retroactively interpreted as Not Only SQL.
There are several driving forces behind the adoption of NoSQL databases. Most application development today is done in object-oriented languages, meaning that if data is stored in relational tables, an awkward translation layer is required between the objects in application code and the database model of tables, rows and columns. The disconnect between the models is sometimes called an impedance mismatch. Object-relational mapping (ORM) frameworks reduce the amount of boilerplate required for this translation layer, but they cannot completely hide it.

For example, storing a resume in a relational schema can be tricky. The profile as a whole can be identified by a unique identifier. Here is the same data stored as a JSON object:

The JSON model reduces the impedance mismatch between the application code and the storage layer, and the lack of schema is often cited as an advantage. The JSON representation also has better locality than the multi-table schema: to fetch a profile in the relational example, you need to perform multiple queries or a join between two or more tables, whereas in the JSON format all the relevant data is in one place. The one-to-many relationships from the user profile to the user's positions, education, contact information etc. imply a tree-like structure, and the JSON representation makes this tree structure explicit.

In the previous example, whether you store an ID or a text string is a question of duplication. When you use an ID, the information that is meaningful to humans is stored in only one place, and everything that refers to it uses an ID. The advantage of using an ID is that, because it has no meaning to humans, it never needs to change: the ID can remain the same even if the information it identifies changes. Anything that is meaningful to humans may need to change sometime in the future - and if that information is duplicated, all the redundant copies need to be updated. Removing such duplication is the key idea behind normalisation in databases.
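A minimal sketch of the ID-versus-string trade-off (the region names and IDs here are invented for illustration):

```python
# Denormalised: the human-readable name is duplicated in every profile,
# so every copy must be updated if it ever changes.
profiles_denormalised = [
    {"user_id": 251, "region": "Greater Boston Area"},
    {"user_id": 252, "region": "Greater Boston Area"},  # duplicate to keep in sync
]

# Normalised: profiles store only an opaque ID; the name lives in one place.
regions = {"us:boston": "Greater Boston Area"}
profiles = [
    {"user_id": 251, "region_id": "us:boston"},
    {"user_id": 252, "region_id": "us:boston"},
]

# Renaming the region is now a single update; no profile records change.
regions["us:boston"] = "Boston Metropolitan Area"
```

Because the ID is meaningless to humans, it can stay stable even when the display name changes, which is exactly why the normalised form avoids update anomalies.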
Even if the initial version of an application fits well in a join-free document model, data has a tendency to become more interconnected as features are added. See below how adding two extra features turns one-to-many relationships into many-to-many. While many-to-many relationships and joins are routinely used in relational databases, document databases and NoSQL reopened the debate on how best to represent such relationships in a database. This debate is much older than NoSQL - it goes back to the 1970s.

In the tree structure of the hierarchical model, every record has exactly one parent; in the network model, a record could have multiple parents. The links between records in the network model were not foreign keys, but more like pointers in a programming language. The only way of accessing a record was to follow a path from a root record along these chains of links. This was called an access path. In the simplest case, an access path could be like the traversal of a linked list: start at the head of the list and look at one record at a time until you find the one you want. But in a world of many-to-many relationships, several different paths can lead to the same record, and a programmer working with the network model had to keep track of these different access paths in their head. A query was performed by moving a cursor through the database, iterating over lists of records and following access paths. If a record had multiple parents (i.e. multiple incoming pointers from other records), the application code had to keep track of all the various relationships.

What the relational model did, by contrast, was to lay out all the data in the open: a relation (table) is simply a collection of tuples (rows), and that is it.
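A hypothetical sketch of the contrast (the record names are invented): in the network model the application itself follows a pointer chain from a root record, whereas in the relational model it states a condition and lets the database locate the rows.

```python
# Network-model style: records linked by pointers; the application hard-codes
# the access path and walks it one record at a time, like a linked list.
class Record:
    def __init__(self, name):
        self.name = name
        self.next = None  # pointer to the next record along the access path

root = Record("users")
a, b = Record("alice"), Record("bob")
root.next, a.next = a, b

def find_via_access_path(start, name):
    node = start
    while node is not None:       # traverse the chain of links explicitly
        if node.name == name:
            return node
        node = node.next
    return None

# Relational style: a table is just a collection of rows; you state *what*
# you want and the system (here, a comprehension) decides how to find it.
rows = [{"name": "alice"}, {"name": "bob"}]
matches = [r for r in rows if r["name"] == "bob"]
```

In a real database the "how" on the relational side is chosen by the query optimiser, which is the point the notes make next.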
There are no labyrinthine nested structures and no complicated access paths to follow: if you want to query data, you can read any or all of the rows in a table directly. The query optimiser automatically decides which parts of the query to execute in which order, and which indexes to use. Those choices are effectively the equivalent of the "access path", but the big difference is that they are made by the query optimiser, not the application developer.

Document databases reverted to the hierarchical model in one aspect: storing nested records (one-to-many relationships) within their parent record rather than in a separate table. However, when it comes to representing many-to-one and many-to-many relationships, relational and document databases are similar: both typically refer to the related item by a unique identifier (a foreign key in the relational model, a document reference in the document model).

The main arguments in favour of the document data model are schema flexibility, better performance due to locality, and that for some applications it is closer to the data structures used by the application. The relational model counters by providing better support for joins, and for many-to-one and many-to-many relationships. If the data in your application has a document-like structure (i.e. a tree of one-to-many relationships where typically the entire tree is loaded at once), then the document model makes sense. The relational technique of shredding - splitting a document-like structure into multiple tables - can lead to cumbersome schemas and complex code. However, if a document is deeply nested it can cause problems, as nested items cannot be referred to directly: for example, referring to "the second item in the list of employers for user 251" is awkward and inefficient. If your application does use many-to-many relationships, the document model is less appealing. It's possible to reduce the need for joins by denormalising, but then the application code needs to do additional work to keep the denormalised data consistent. Joins can be emulated in application code by making multiple requests to the database.
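Emulating a join in application code might look like the following hypothetical sketch, where two dicts stand in for two database collections and the client issues one lookup per collection:

```python
# Two separate "collections"; the user document references the organisation by ID.
users = {251: {"name": "Alice", "org_id": 9}}
organisations = {9: {"name": "Acme"}}

def get_user_with_org(user_id):
    user = users[user_id]                # first request: fetch the user
    org = organisations[user["org_id"]]  # second request: resolve the reference
    return {**user, "org_name": org["name"]}  # stitch the results together
```

Each resolved reference costs an extra round trip, which is the complexity-and-latency cost discussed next.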
But that moves complexity into the application code, and multiple calls are usually slower than a single optimised join.

Having no schema means that arbitrary keys and values can be added to a document, and when reading, clients have no guarantees as to what fields a document may contain. Document databases are sometimes called schemaless, but that's misleading, as the code that reads the data usually assumes some kind of structure. A more accurate term is schema-on-read; in contrast, schema-on-write is enforced by the database on writes. For example, say you are currently storing users' full names in one field, but now you want to store the first and last names separately. In a document database, you would simply start writing new documents with the new fields, and handle old documents in application code when they are read. In a "statically typed" database with a schema-on-write approach, you would instead migrate the schema: altering the table is relatively quick, however updating every row in the table is time consuming. The schema-on-read approach is advantageous if the items in the collection don't all have the same structure.

A document is usually stored as a single continuous string, encoded as JSON or a binary variant (e.g. MongoDB's BSON). If your application often needs access to the entire document (e.g. to render it on a web page), there is a performance advantage to this storage locality: if the data were split across multiple tables, multiple index lookups would be required to retrieve it all. On the other hand, the database typically needs to load the entire document even if you access only a small portion of it, and on updates the entire document usually needs to be rewritten - only modifications that don't change the encoded size can be performed in place, which is rare. For these reasons it's recommended to keep documents small and avoid frequent updates.

Some relational databases offer similar locality: Oracle's multi-table index cluster tables allow a table's rows to be interleaved within a parent table, and Cassandra has the column-family concept. Many relational databases have long supported XML, and many now also support JSON.
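The full-name migration under schema-on-read can be sketched as follows (field names are hypothetical): old documents keep their single `name` field, new documents carry `first_name`/`last_name`, and the reading code bridges both shapes.

```python
# Schema-on-read: the structure is interpreted by the code that reads the data,
# so both old-style and new-style documents can coexist in one collection.
def first_name(user_doc):
    if "first_name" in user_doc:                       # new-style document
        return user_doc["first_name"]
    return user_doc.get("name", "").split(" ")[0]      # fall back to old style

old_doc = {"name": "Ada Lovelace"}
new_doc = {"first_name": "Ada", "last_name": "Lovelace"}
```

The schema-on-write equivalent would be a migration along the lines of `ALTER TABLE users ADD COLUMN first_name text;` followed by an `UPDATE` that backfills every existing row - which is why it is slow on large tables.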
Some document databases now support relational-like joins in their query languages, and some MongoDB drivers automatically resolve database references. It seems that relational and document databases are becoming more similar over time, and that is a good thing: the data models complement each other. If a database is able to handle document-like data and also perform relational queries on it, applications can use the combination of features that best fits their needs.

SQL is a declarative query language: you specify the pattern of the data you want, not how to achieve that goal, and the query optimiser works out the details. An imperative query, by contrast, tells the computer to perform operations in a particular order.
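The declarative/imperative contrast can be sketched with made-up data (the list comprehension merely stands in for a declarative query; it is not SQL itself):

```python
animals = [
    {"name": "shark", "family": "Sharks"},
    {"name": "dog", "family": "Canidae"},
]

# Imperative: spell out *how* to build the result, step by step.
sharks = []
for animal in animals:
    if animal["family"] == "Sharks":
        sharks.append(animal["name"])

# Declarative: state *what* you want, closer in spirit to
# SELECT name FROM animals WHERE family = 'Sharks'.
sharks_declarative = [a["name"] for a in animals if a["family"] == "Sharks"]
```

Both produce the same result, but the declarative form leaves the evaluation strategy to the system, which is what lets a database optimiser improve performance without changing queries.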
+ * lunr.Index
+ * Copyright (C) 2020 Oliver Nightingale
+ */t.Index=function(e){this.invertedIndex=e.invertedIndex,this.fieldVectors=e.fieldVectors,this.tokenSet=e.tokenSet,this.fields=e.fields,this.pipeline=e.pipeline},t.Index.prototype.search=function(e){return this.query(function(r){var n=new t.QueryParser(e,r);n.parse()})},t.Index.prototype.query=function(e){for(var r=new t.Query(this.fields),n=Object.create(null),i=Object.create(null),s=Object.create(null),o=Object.create(null),a=Object.create(null),u=0;u
+
+
+
+
+
+
+
+
+ Introduction to APIs¶
+What are web APIs?¶
+
+
+
+
+What are resource-oriented APIs?¶
+
+
+
+
+So why aren't all APIs RPC-oriented?¶
+
+
+
+
+
+
+ScheduleFlight(), GetFlightDetails(), ShowAllFlights(), CancelReservation(), RescheduleFlight(), UpgradeTrip()
+
+
+
+ShowFlights(), ShowAllFlights(), ListFlights(), etc.
+
+CreateFlightReservation(), GetFlightReservation(), ListFlightReservation(), DeleteFlightReservation(), UpdateFlightReservation()
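The five standard methods listed above can be sketched as a minimal in-memory resource store. This is an illustrative sketch only; the function names, the `flight` field and the auto-incrementing id are assumptions, not from the book.

```javascript
// A hedged sketch: the five standard methods applied to a
// FlightReservation resource, backed by an in-memory Map.
// All names and shapes here are illustrative assumptions.
const reservations = new Map();
let nextId = 1;

function createFlightReservation(fields) {
  const reservation = { id: String(nextId++), ...fields };
  reservations.set(reservation.id, reservation);
  return reservation;
}

function getFlightReservation(id) {
  return reservations.get(id);
}

function listFlightReservations() {
  return [...reservations.values()];
}

function updateFlightReservation(id, fields) {
  // merge new fields over the stored resource, keeping the id stable
  const updated = { ...reservations.get(id), ...fields, id };
  reservations.set(id, updated);
  return updated;
}

function deleteFlightReservation(id) {
  reservations.delete(id);
}
```

Because every resource exposes the same five verbs, a user who has learned one resource can predict the surface of every other.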
+
+What makes an API "good"?¶
+
+
+Operational¶
+
+
+
+
+
+
+Expressive¶
+
+
+
+
+Simple¶
+
+
+
+
+An ExecuteAction() method just shifts complexity from one place to another.
+
+GET /translate?lang=en: allowing the user to add a specific language model as a mandatory field is complex for the average user and will slow down basic scenarios.
+Predictable¶
+Summary¶
+
+ Introduction to API Design Patterns¶
+What are API Design Patterns?¶
+
+
+
+
+Why are API Design Patterns Important?¶
+
+ Naming¶
+Why do names matter?¶
+What makes a name "good"?¶
+Expressive¶
+
+
+
+
+topic_model topic_message
+Simple¶
+
+
+
+
+
+
+
+
+
+Name                     Note
+UserSpecifiedPreferences Expressive, but not simple enough
+UserPreferences          Both simple & expressive
+Preferences              Too simple
+Predictable¶
+
+
+Language, Grammar & Syntax¶
+
+
+
+
+Using image_url rather than jpeg_url prevents us from limiting ourselves to a single image format.
+Language¶
+Grammar¶
+Imperative Actions¶
+
+
+isValid(): Should it return a simple boolean field? Should it return a list of errors?
+GetValidationErrors(): Clear that it will return a list of errors, an empty list if valid.
+Prepositions¶
+
+
+When associating Book resources with the Author, it's tempting to use a name like BooksWithAuthor. The preposition with is indicative of a more fundamental problem.
+Pluralisation¶
+
+
+
+
+Context¶
+
+
+When we say book in the library API, we are referring to the resource; however, in a flight booking API we are referring to an action.
+
+
+
+Data types and units¶
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+dimensions: String; - this is ambiguous
+dimensions: Dimensions; (where Dimensions is an object)
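The second option can be sketched as a small object that carries both the numbers and their unit; the field names (length, width, unit) are illustrative assumptions:

```javascript
// A hedged sketch: bundle the value and its unit into an object
// instead of an ambiguous string like "60x30". Field names are
// illustrative assumptions, not from the book.
function makeDimensions(length, width, unit) {
  return { length, width, unit };
}

const dims = makeDimensions(60, 30, "centimeters");
// dims.unit makes the measurement unambiguous for every consumer
```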
+
+
+
+
+
+
+
+
+ Resource Scope and Hierarchy¶
+What is a resource layout?¶
+Types of Relationships¶
+Reference Relationships¶
+
+
+
+
+
+Self-Reference Relationships¶
+
+ Hierarchical Relationships¶
+
+
+
+
+
+ChatRooms containing or owning Messages.
+Choosing the Right Relationship¶
+Do you need a relationship at all?¶
+
+
+Users. A single change to one resource can affect millions of other related resources.
+References or in-line data¶
+
+
+
+
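The reference-versus-in-line choice can be sketched as two shapes for the same resource; the Book and Author fields below are illustrative assumptions:

```javascript
// Reference relationship: store a pointer to the related resource
// and fetch it separately when needed.
const bookWithReference = {
  id: "books/1",
  title: "Example Book",
  authorId: "authors/42",
};

// In-line data: embed the related resource directly. Reads are
// cheaper, but the data is duplicated in every book by this author.
const bookWithInlineAuthor = {
  id: "books/1",
  title: "Example Book",
  author: { id: "authors/42", name: "Example Author" },
};
```

In-line data saves a lookup on read; references keep one canonical copy that is cheap to update.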
+Hierarchy¶
+
+
+Anti-patterns¶
+Resources for Everything¶
+Deep Hierarchies¶
+
+
+
+
+
+
+
+
+
+ Chapter 1: Reliable, Scalable and Maintainable Applications¶
+
+
+Thinking about Data Systems¶
+
+
+
+Reliability¶
+
+
+
+
+Hardware Faults¶
+Software Faults¶
+
+
+Human Errors¶
+
+
+
+
+
+
+
+
+Scalability¶
+Describing Load¶
+
+
+SELECT tweets.*, users.*
+FROM tweets
+JOIN users ON tweets.sender_id = users.id
+JOIN follows ON follows.followee_id = users.id
+WHERE follows.follower_id = current_user
+
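The join above is the read-time approach; the alternative described in the text precomputes each user's home timeline at write time. A hedged in-memory sketch (the data structures and function names are assumptions):

```javascript
// A sketch of fan-out on write: push each new tweet into the
// home-timeline cache of every follower, so reads are precomputed.
const followers = new Map();     // followee id -> Set of follower ids
const homeTimelines = new Map(); // user id -> array of cached tweets

function follow(followerId, followeeId) {
  if (!followers.has(followeeId)) followers.set(followeeId, new Set());
  followers.get(followeeId).add(followerId);
}

function postTweet(senderId, text) {
  const tweet = { senderId, text };
  // write amplification: one insert per follower
  for (const followerId of followers.get(senderId) ?? []) {
    if (!homeTimelines.has(followerId)) homeTimelines.set(followerId, []);
    homeTimelines.get(followerId).push(tweet);
  }
}

function readHomeTimeline(userId) {
  return homeTimelines.get(userId) ?? []; // cheap: already computed
}
```

The trade-off: a read becomes a single cache lookup, but a user with millions of followers turns one tweet into millions of writes.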
+ Describing Performance¶
+
+
+
+
+
+
+
+
+
+
+ Approaches for Coping with Load¶
+Maintainability¶
+
+
+Operability: Making Life Easy for Operations¶
+
+
+
+
+
+
+Simplicity: Managing Complexity¶
+Evolvability: Making Change Easy¶
+
+
+
+
+
+
+
+
+
+ Chapter 2: Data Models and Query Languages¶
+
+
+Relational Model Vs Document Model¶
+The Birth of NoSQL¶
+
+
+The Object-Relational Mismatch¶
+Each user has a unique identifier, user_id. Fields like first_name and last_name appear exactly once per user, so they can be modeled as columns in the table. However, most people have had more than one job, so positions form a one-to-many relationship.
+
+
+ {
+ "user_id": 251,
+ "first_name": "Bill",
+ "last_name": "Gates",
+ "summary": "Co-chair of the Bill & Melinda Gates... Active blogger.",
+ "region_id": "us:91",
+ "industry_id": 131,
+ "photo_url": "/p/7/000/253/05b/308dd6e.jpg",
+ "positions": [
+ {
+ "job_title": "Co-chair",
+ "organization": "Bill & Melinda Gates Foundation"
+ },
+ {
+ "job_title": "Co-founder, Chairman",
+ "organization": "Microsoft"
+ }
+ ],
+ "education": [
+ {
+ "school_name": "Harvard University",
+ "start": 1973,
+ "end": 1975
+ },
+ {
+ "school_name": "Lakeside School, Seattle",
+ "start": null,
+ "end": null
+ }
+ ],
+ "contact_info": {
+ "blog": "http://thegatesnotes.com",
+ "twitter": "http://twitter.com/BillGates"
+ }
+}
+
+ Many-to-One and Many-to-Many Relationships¶
+Fields like region_id are given as IDs, not as plain-text strings. This is because:
+
+
+ Are Document Databases Repeating History?¶
+The Network Model¶
+"Greater Seattle Area" region and every user who lived in that region could be linked to it. This allowed one-to-many and many-to-many relationships to be modeled.
+The Relational Model¶
+
+
+Comparison to Document Databases¶
+Relational Versus Document Databases today¶
+Which data model leads to simpler application code?¶
+Schema Flexibility in the Document Model¶
+if (user && user.name && !user.first_name) {
+ // Documents written before Dec 8, 2013 don't have first_name
+ user.first_name = user.name.split(" ")[0];
+}
+ALTER TABLE users
+ADD COLUMN first_name text;
+UPDATE users
+SET first_name = split_part(name, ' ', 1);
+Data Locality for Queries¶
+Convergence of document and relational databases¶
+Query Languages for Data¶
+function getSharks() {
+    var sharks = [];
+    for (var i = 0; i < animals.length; i++) {
+        if (animals[i].family === "Sharks") {
+            sharks.push(animals[i]);
+        }
+    }
+    return sharks;
+}
+SELECT * FROM animals WHERE family = 'Sharks';
+Declarative Queries on the Web¶
+<ul>
+ <li class="selected"><p>Sharks</p></li>
+ <li><p>Whales</p></li>
+ <li><p>Fish</p></li>
+</ul>
+li.selected > p {
+ background-color: blue;
+}
+li.selected > p declares the pattern of elements to colour blue: all <p> elements whose direct parent is a <li> element with a class of selected.
+const liElements = Array.from(document.getElementsByTagName("li"));
+const selectedLiElements = liElements.filter(liElement => liElement.className === "selected");
+for (const selectedElement of selectedLiElements) {
+    for (const child of selectedElement.children) {
+        if (child.tagName === "P") {
+            child.setAttribute("style", "background-color: blue");
+        }
+    }
+}
+
+
+If a new and faster method such as document.getElementsByClassName() comes along, the code will have to be entirely re-written. On the other hand, browsers can improve the performance of CSS without breaking compatibility.
+MapReduce Querying¶
+SELECT date_trunc('month', observation_timestamp) as observation_month, sum(num_animals) AS total_animals
+FROM observations
+WHERE family = 'Sharks'
+GROUP BY observation_month;
+db.observations.mapReduce(
+    function map() {
+        var year = this.observationTimestamp.getFullYear();
+        var month = this.observationTimestamp.getMonth() + 1;
+
+        emit(year + "-" + month, this.numAnimals);
+    },
+    function reduce(key, values) {
+        return Array.sum(values);
+    },
+    {
+        query: {
+            family: "Sharks"
+        },
+        out: "monthlySharkReport"
+    }
+);
+The map function would be called once for each matching document, emitting e.g. ("2026-1", 3) and ("2026-1", 4). Subsequently the reduce function would be called with ("2026-1", [3, 4]), returning 7.
+db.observations.aggregate([
+    {
+        "$match": {
+            "family": "Sharks"
+        }
+    },
+    {
+        "$group": {
+            "_id": {
+                "year": {
+                    "$year": "$observationTimestamp"
+                },
+                "month": {
+                    "$month": "$observationTimestamp"
+                }
+            },
+            "totalAnimals": {
+                "$sum": "$numAnimals"
+            }
+        }
+    }
+]);
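The MapReduce flow described above can be simulated in plain JavaScript. This is a sketch of the semantics (emit-style, as in MongoDB), not MongoDB's engine; the helper function and sample documents are assumptions:

```javascript
// A sketch of MapReduce semantics: map emits (key, value) pairs per
// matching document; reduce folds the values collected under each key.
const observations = [
  { family: "Sharks", observationTimestamp: new Date("2026-01-05"), numAnimals: 3 },
  { family: "Sharks", observationTimestamp: new Date("2026-01-20"), numAnimals: 4 },
  { family: "Whales", observationTimestamp: new Date("2026-01-21"), numAnimals: 2 },
];

function simulateMapReduce(docs, query, map, reduce) {
  const groups = new Map();
  // emit() collects values under their key, as MongoDB would
  const emit = (key, value) => {
    if (!groups.has(key)) groups.set(key, []);
    groups.get(key).push(value);
  };
  for (const doc of docs) {
    if (doc.family === query.family) map.call(doc, emit); // apply the query filter
  }
  const out = {};
  for (const [key, values] of groups) out[key] = reduce(key, values);
  return out;
}

const report = simulateMapReduce(
  observations,
  { family: "Sharks" },
  function (emit) {
    const year = this.observationTimestamp.getFullYear();
    const month = this.observationTimestamp.getMonth() + 1;
    emit(year + "-" + month, this.numAnimals);
  },
  (key, values) => values.reduce((a, b) => a + b, 0)
);
```

Here report groups the two shark observations under one month key and sums their counts to 7; the whale observation is filtered out by the query.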
+
+
+
+
+
+
+
+
+
+ Preface¶
+
+ Umbra Notes¶
+
"},{"location":"books/api_design_patterns/part1/chapter1/#what-are-resource-oriented-apis","title":"What are resource-oriented APIs?","text":"
"},{"location":"books/api_design_patterns/part1/chapter1/#so-why-arent-all-apis-rpc-orinented","title":"So why aren't all APIs RPC-oriented?","text":"
ScheduleFlight()GetFlightDetails()ShowAllFlights()CancelReservation()RescheduleFlight()UpgradeTrip()
ShowFlights(), ShowAllFlights(), ListFlights() etc
CreateFlightReservation()GetFlightReservation()ListFlightReservation()DeleteFlightReservation()UpdateFlightReservation()
"},{"location":"books/api_design_patterns/part1/chapter1/#what-makes-an-api-good","title":"What makes an API \"good\"?","text":"
"},{"location":"books/api_design_patterns/part1/chapter1/#operational","title":"Operational","text":"
"},{"location":"books/api_design_patterns/part1/chapter1/#expressive","title":"Expressive","text":"
"},{"location":"books/api_design_patterns/part1/chapter1/#simple","title":"Simple","text":"
"},{"location":"books/api_design_patterns/part1/chapter1/#predictable","title":"Predictable","text":"
An ExecuteAction() method just shifts complexity from one place to another.
GET /translate?lang=en: allowing the user to add a specific language model as a mandatory field is complex for the average user and will slow down basic scenarios.
"},{"location":"books/api_design_patterns/part1/chapter2/","title":"Introduction to API Design Patterns","text":""},{"location":"books/api_design_patterns/part1/chapter2/#what-are-api-design-patterns","title":"What are API Design Patterns?","text":"
"},{"location":"books/api_design_patterns/part1/chapter2/#why-are-api-design-patterns-important","title":"Why are API Design Patterns Important?","text":"
"},{"location":"books/api_design_patterns/part2/chapter3/","title":"Naming","text":"
"},{"location":"books/api_design_patterns/part2/chapter3/#simple","title":"Simple","text":"
topic_model topic_message
Name Note UserSpecifiedPreferences Expressive, but not simple enough UserPreferences Both simple & expressive Preferences Too simple"},{"location":"books/api_design_patterns/part2/chapter3/#predictable","title":"Predictable","text":"
"},{"location":"books/api_design_patterns/part2/chapter3/#language-grammar-syntax","title":"Language, Grammar & Syntax","text":"
"},{"location":"books/api_design_patterns/part2/chapter3/#language","title":"Language","text":"
Using image_url rather than jpeg_url prevents us from limiting ourselves to a single image format.
"},{"location":"books/api_design_patterns/part2/chapter3/#prepositions","title":"Prepositions","text":"isValid(): Should it return a simple boolean field? Should it return a list of errors? GetValidationErrors(): Clear that it will return a list of errors, an empty list if valid.
"},{"location":"books/api_design_patterns/part2/chapter3/#pluralisation","title":"Pluralisation","text":"When associating Book resources with the Author, it's tempting to use a name like BooksWithAuthor. The preposition with is indicative of a more fundamental problem.
"},{"location":"books/api_design_patterns/part2/chapter3/#context","title":"Context","text":"
book in the library API, we are referring to the resource; however, in a flight booking API we are referring to an action.
"},{"location":"books/api_design_patterns/part2/chapter3/#data-types-and-units","title":"Data types and units","text":"
"},{"location":"books/api_design_patterns/part2/chapter4/","title":"Resource Scope and Hierarchy","text":""},{"location":"books/api_design_patterns/part2/chapter4/#what-is-a-resource-layout","title":"What is a resource layout?","text":"dimensions: String; - this is ambiguousdimensions: Dimensions; (where Dimensions is an object)
"},{"location":"books/api_design_patterns/part2/chapter4/#self-reference-relationships","title":"Self-Reference Relationships","text":"An employee resource points at other employee resources as managers and assistants."},{"location":"books/api_design_patterns/part2/chapter4/#hierarchical-relationships","title":"Hierarchical Relationships","text":"
ChatRoom resources act as the owner of Message resources through a hierarchical relationship.
ChatRooms containing or owning Messages.
Users. A single change to one resource can affect millions of other related resources.
"},{"location":"books/api_design_patterns/part2/chapter4/#anti-patterns","title":"Anti-patterns","text":""},{"location":"books/api_design_patterns/part2/chapter4/#resources-for-everything","title":"Resources for Everything","text":"
"},{"location":"books/designing_data_intensive_applications/part1/chapter1/#thinking-about-data-systems","title":"Thinking about Data Systems","text":"
"},{"location":"books/designing_data_intensive_applications/part1/chapter1/#reliability","title":"Reliability","text":"
"},{"location":"books/designing_data_intensive_applications/part1/chapter1/#human-errors","title":"Human Errors","text":"
"},{"location":"books/designing_data_intensive_applications/part1/chapter1/#scalability","title":"Scalability","text":"
Approach 2: Maintain a cache for each user's home timeline - like a mailbox of tweets for each user. When a user posts a tweet, look up all the people who follow that user, and insert the new tweet into each of their home timeline caches. The request to read the home timeline is then cheap, because its result has been computed ahead of time.SELECT tweets.*, users.*\nFROM tweets\nJOIN users ON tweets.sender_id = users.id\nJOIN follows ON follows.followee_id = users.id\nWHERE follows.follower_id = current_user\n
"},{"location":"books/designing_data_intensive_applications/part1/chapter1/#simplicity-managing-complexity","title":"Simplicity: Managing Complexity","text":"
"},{"location":"books/designing_data_intensive_applications/part1/chapter2/#relational-model-vs-document-model","title":"Relational Model Vs Document Model","text":"
"},{"location":"books/designing_data_intensive_applications/part1/chapter2/#the-object-relational-mismatch","title":"The Object-Relational Mismatch","text":"user_id. Fields like first_name and last_name appear exactly once per user, so they can be modeled as columns in the table. However, most people have had more than one job, so positions form a one-to-many relationship.
Representing a LinkedIn profile using a relational schema. {\n \"user_id\": 251,\n \"first_name\": \"Bill\",\n \"last_name\": \"Gates\",\n \"summary\": \"Co-chair of the Bill & Melinda Gates... Active blogger.\",\n \"region_id\": \"us:91\",\n \"industry_id\": 131,\n \"photo_url\": \"/p/7/000/253/05b/308dd6e.jpg\",\n \"positions\": [\n {\n \"job_title\": \"Co-chair\",\n \"organization\": \"Bill & Melinda Gates Foundation\"\n },\n {\n \"job_title\": \"Co-founder, Chairman\",\n \"organization\": \"Microsoft\"\n }\n ],\n \"education\": [\n {\n \"school_name\": \"Harvard University\",\n \"start\": 1973,\n \"end\": 1975\n },\n {\n \"school_name\": \"Lakeside School, Seattle\",\n \"start\": null,\n \"end\": null\n }\n ],\n \"contact_info\": {\n \"blog\": \"http://thegatesnotes.com\",\n \"twitter\": \"http://twitter.com/BillGates\"\n }\n}\nregion_id are given as IDs, not as plain-text strings. This is because:
\"Greater Seattle Area\" region and every user who lived in that region could be linked to it. This allowed one-to-many and many-to-many relationships to be modeled.
if (user && user.name && !user.first_name) {\n // Documents written before Dec 8, 2013 don't have first_name\n user.first_name = user.name.split(\" \")[0];\n}\nALTER TABLE users\nADD COLUMN first_name text;\nUPDATE users\nSET first_name = split_part(name, ' ', 1);\n
In relational algebra, you would instead write: $$ sharks = \\sigma_{family =''Sharks''} (animals) $$ function getSharks() {\n var sharks = [];\n for (var i = 0; i < animals.length; i++) {\n if (animals[i].family === \"Sharks\") {\n sharks.push(animals[i]);\n }\n }\n return sharks;\n}\n
Where \\(\\sigma\\) is the selection operator, returning only those animals that match the condition \\(family = ''Sharks''\\). SQL follows this closely.
SELECT * FROM animals WHERE family = 'Sharks';\n An imperative language tells the computer to perform certain operations in a certain order.
In a declarative query language, you just specify the pattern of the data you want. e.g. what conditions should be met, how the data should be transformed - but not how to achieve that goal. The declarative query language hides the implementation details of the database engine. This allows the database engine to be optimised and improved without the need to change the query language itself.
Declarative languages are very easy to parallelise - they specify the pattern of results not the algorithm to be used.
"},{"location":"books/designing_data_intensive_applications/part1/chapter2/#declarative-queries-on-the-web","title":"Declarative Queries on the Web","text":"<ul>\n <li class=\"selected\"><p>Sharks</p></li>\n <li><p>Whales</p></li>\n <li><p>Fish</p></li>\n</ul>\n li.selected > p {\n background-color: blue;\n}\n Here the CSS selector li.selected > p declares the pattern of elements to colour blue: all <p> elements whose direct parent is a <li> element with a class of selected.
Doing this with an imperative approach is a nightmare.
const liElements = Array.from(document.getElementsByTagName(\"li\"));\nconst selectedLiElements = liElements.filter(liElement => liElement.className === \"selected\");\nfor (const selectedElement of selectedLiElements) {\n for (const child of selectedElement.children) {\n if (child.tagName === \"P\") {\n child.setAttribute(\"style\", \"background-color: blue\");\n }\n }\n}\n If a new and faster method such as document.getElementsByClassName() comes along, the code will have to be entirely re-written. On the other hand, browsers can improve the performance of CSS without breaking compatibility. MapReduce is a programming model for processing large amounts of data in bulk across many machines. This is supported by MongoDB as a mechanism for performing read-only queries across many documents.
MapReduce is neither declarative nor imperative but somewhere in between.
Example in PostgreSQL
SELECT date_trunc('month', observation_timestamp) as observation_month, sum(num_animals) AS total_animals\nFROM observations\nWHERE family = 'Sharks'\nGROUP BY observation_month;\n Example in MongoDB using MapReduce
db.observations.mapReduce(\n function map() {\n var year = this.observationTimestamp.getFullYear();\n var month = this.observationTimestamp.getMonth() + 1;\n\n emit(year + \"-\" + month, this.numAnimals);\n },\n function reduce(key, values) {\n return Array.sum(values);\n },\n {\n query: {\n family: \"Sharks\"\n },\n out: \"monthlySharkReport\"\n }\n);\n The map function would be called once for each matching document, emitting e.g. (\"2026-1\", 3) and (\"2026-1\", 4). Subsequently the reduce function would be called with (\"2026-1\", [3, 4]), returning 7.
Map and Reduce functions must be pure with no side effects (no additional db calls). This allows them to be run anywhere, in any order and re-run on failure.
MapReduce was replaced by the aggregation pipeline.
db.observations.aggregate([\n { \"$match\": { \"family\": \"Sharks\" } },\n {\n \"$group\": {\n \"_id\": {\n \"year\": { \"$year\": \"$observationTimestamp\" },\n \"month\": { \"$month\": \"$observationTimestamp\" }\n },\n \"totalAnimals\": { \"$sum\": \"$numAnimals\" }\n }\n }\n]);\n The aggregation pipeline language is similar in expressiveness to a subset of SQL, but it uses JSON syntax rather than SQL's English sentence style.
"},{"location":"books/designing_data_intensive_applications/preface/","title":"Preface","text":"There have been many developments in distributed systems, databases and the applications built on top of them; there are various driving forces:
An application is data-intensive if data is its primary challenge.
This is opposed to compute-intensive where the CPU is the bottleneck.
"}]} \ No newline at end of file diff --git a/sitemap.xml b/sitemap.xml new file mode 100644 index 0000000..0f8724e --- /dev/null +++ b/sitemap.xml @@ -0,0 +1,3 @@ + +