Designing and Documenting Your Data Model

The importance of your data model’s design cannot be overstated. It is often overlooked in the Salesforce world because Salesforce comes with a pre-built data model, along with great flexibility for creating and changing custom fields and objects.

However, proper data model design can be the difference between a smart, flexible, and scalable solution (one that delivers valuable real-time insight, justifying the effort required to enter the data) and a poorly designed database full of dumped data (one that makes data entry feel like overhead).

Your data model is the foundation of your solution. Several aspects of the solution depend on it, such as your data sharing and visibility strategy and your ability to identify large data volume (LDV) objects.

You can read through a solution by understanding its data model, and your solution will only be as strong as its foundation. To get your data model design right, you need to understand key concepts in data modeling, starting with normalization versus denormalization, moving through three of the standard normal forms for database design, and finally covering the common relationship types between database tables and how they are reflected in Salesforce.

Normalization Versus Denormalization

Normalization is the process of arranging data in a database efficiently based on its relationships. This approach aims to remove data redundancy, which wastes disk space, slows down queries, and costs more processing time for create, read, update, and delete (CRUD) operations. Redundant data can also lead to inconsistency: when the same data persists in multiple places and gets updated in one of them, you need to ensure the change is reflected in all other occurrences; otherwise, you risk data inconsistencies. Normalization should aim to get rid of data redundancy, but not at the cost of losing data integrity.

Normalization is based on the concept of normal forms. A dataset must meet specific criteria to be considered in a given normal form. There are three main normal forms (1NF, 2NF, and 3NF), all of which are covered shortly; others, such as BCNF, 4NF, and 5NF, are not covered in this book for brevity.

Denormalization can be considered the opposite of the normalization process. In a denormalized dataset, you intentionally use redundant information. This is done for several purposes but mostly to improve performance while executing queries and performing analytics. There is an associated overhead, of course, in keeping all the redundant data consistent and aligned.

The denormalization process reduces the number of tables (though it consumes more storage) and simplifies complicated table joins, which effectively enhances performance when querying data that resides in multiple tables. The idea behind denormalization is that by placing all the data in one place, you simplify the search process, as it only needs to be executed against one table.
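
To make the trade-off concrete, here is a minimal sketch in plain Python (the table and field names are hypothetical, not taken from this chapter’s figures) showing how the same question, which addresses belong to Acme, is answered against a normalized and a denormalized dataset:

# Normalized: accounts and addresses live in separate "tables",
# linked by an account ID; no account data is repeated.
accounts = {
    "A1": {"name": "Acme"},
    "A2": {"name": "Globex Ltd"},
}
addresses = [
    {"id": "AD1", "account_id": "A1", "city": "London"},
    {"id": "AD2", "account_id": "A1", "city": "Paris"},
    {"id": "AD3", "account_id": "A2", "city": "Berlin"},
]

# Querying normalized data needs a join-like lookup across both tables.
def addresses_for(account_name):
    ids = [k for k, v in accounts.items() if v["name"] == account_name]
    return [a for a in addresses if a["account_id"] in ids]

# Denormalized: the account name is repeated on every address row,
# so a single pass over one "table" answers the same question.
denormalized = [
    {"id": "AD1", "account_name": "Acme", "city": "London"},
    {"id": "AD2", "account_name": "Acme", "city": "Paris"},
    {"id": "AD3", "account_name": "Globex Ltd", "city": "Berlin"},
]

print(addresses_for("Acme"))
print([r for r in denormalized if r["account_name"] == "Acme"])

The normalized form stores each account name exactly once; the denormalized form answers the query in a single pass but must keep every repeated copy of the name in sync.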

While designing your data model, you will come across use cases that are better solved with a normalized set of tables; in other cases, denormalization is the answer. The following user stories illustrate the difference.

As an account manager, I want to store the addresses of all my customers so that I can generate an important report showing the number of shipments we sent to each of these addresses in a given period of time.

The focus of this user story is on analytics/reports. Considering the standard reporting capabilities in Salesforce, it makes sense to store the account and account address details in two separate tables. This also enables you to link the shipment records to the right address straight away and to build the desired report with minimal effort, even though Salesforce performs additional processing behind the scenes to query data from multiple tables.

The following figure represents the proposed data model and an example of the data that is stored:

The figure depicts the proposed data model: an Account table (ID, Account Name) related to an Address__c table (ID, Account ID, Street Name, City, Country), alongside sample data rows for two accounts, Acme and Globex Ltd, and their addresses.

Figure 2.3 – Data model in a normalized form example

Now, explore the second user story.

As an account manager, I want to store the addresses of all my customers so that I can quickly find contact addresses when looking at my customer record page and list views.

The focus here is on the user experience while entering or viewing the data. In this case, it makes sense to use a denormalized dataset. These denormalized fields can easily be added to list views and page layouts. They can also be edited using fewer clicks.

Throughout this book, you will come across more complicated scenarios where you can use a denormalized dataset to reduce data storage. Although a denormalized dataset theoretically consumes more storage, in Salesforce, the data storage for the records of most objects is roughly 2 KB (with a few exceptions, such as person accounts and articles). This is true regardless of the number of fields and whether those fields are filled in or not (some field types, such as rich text fields, are exceptions). As mentioned earlier in this book, some concepts of data modeling may look different in Salesforce.
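
Here is a minimal sketch (plain Python, hypothetical field names standing in for custom fields on the Account object) of the denormalized shape this user story calls for: the address fields sit directly on the account record, so list views and page layouts can show and edit them without any join.

# Denormalized: address fields stored directly on the account record.
accounts = [
    {"id": "A1", "name": "Acme", "street": "1 Main St", "city": "London", "country": "UK"},
    {"id": "A2", "name": "Globex Ltd", "street": "5 High Rd", "city": "Berlin", "country": "DE"},
]

# A "list view" is just a projection over the single table; no joins needed.
for a in accounts:
    print(f'{a["name"]}: {a["street"]}, {a["city"]}, {a["country"]}')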

The following figure represents the proposed data model and an example of the data that is stored:

This figure is an example of a data model in a denormalized form.

Figure 2.4 – Example of a data model in a denormalized form

The differences between normalized and denormalized datasets can be summarized as follows:

  • The normalization process relies on splitting data into multiple tables. The aim is to reduce data redundancy and increase consistency and data integrity. Denormalization, on the other hand, relies on combining data to speed up retrieval. In Salesforce, it can also be used to reduce data storage and the size of LDV objects, although this benefit is not common in other databases.
  • Normalization is usually used in online transaction processing (OLTP) systems, where the speed of insert, delete, and update operations is the key. On the other hand, denormalization is used with online analytical processing (OLAP), where the query’s speed and analytics are key.
  • Data integrity is hard to maintain in denormalized datasets, unlike normalized datasets.
  • Denormalization increases data redundancy.
  • The denormalization process reduces the number of tables and potential join statements, whereas normalization increases both.
  • Typically, denormalized datasets take more disk storage. As mentioned earlier, this is not necessarily true in Salesforce.

The standard Salesforce data model is in normalized form by default. To further understand the normalization process, you need to understand the three main normal forms.

Normal Forms

As you explored earlier, normalization is all about arranging data in a database efficiently based on its relationships. There are three common forms of data normalization. Explore each of these next.

First Normal Form

A database is considered in the first normal form if it meets the following conditions:

  • Contains atomic values only: Atomic values are values that cannot be divided. For example, in the following figure, the value of the Phone Number column can be divided into three different phone numbers. Therefore, the table is not in the first normal form (1NF):

This shows Table 1, which lists three customer IDs and their phone numbers; multiple phone numbers are clubbed into one column, so the table does not meet 1NF.

Figure 2.5 – Table 1, which does not meet the first normal form (1NF)

  • No repeating groups: This means that the table does not contain two or more fields/columns that represent multiple values for the same data entity. For example, in the following figure, you can see that the Phone Number 1, Phone Number 2, and Phone Number 3 fields represent multiple values for the same data entity, which is the phone number. Therefore, this table is not in 1NF:

This shows Table 2, which lists three customer IDs with their phone numbers separated into three columns: Phone Number 1, Phone Number 2, and Phone Number 3.

Figure 2.6 – Table 2, which does not meet the first normal form (1NF)

To bring the table shown in Figure 2.5 into 1NF, you must split it into the following two tables:

This shows Table 1 modified to meet 1NF: one table lists Customer ID and Phone ID, and the other lists Phone ID and Phone Number.

Figure 2.7 – Table 1 from Figure 2.5 modified to meet the first normal form (1NF)
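
As a rough illustration (plain Python, hypothetical values), the following sketch takes the non-1NF data from Figure 2.5 and splits it into the two tables shown in Figure 2.7:

# Not in 1NF: the phone number column holds multiple values per row.
not_1nf = [
    {"customer_id": "C1", "phone_numbers": "555-0100, 555-0101, 555-0102"},
    {"customer_id": "C2", "phone_numbers": "555-0200"},
]

# In 1NF: every column holds a single atomic value, split across two tables.
customer_phones = []   # Customer ID -> Phone ID
phones = []            # Phone ID -> Phone Number

pid = 0
for row in not_1nf:
    for number in row["phone_numbers"].split(", "):
        pid += 1
        customer_phones.append({"customer_id": row["customer_id"], "phone_id": f"P{pid}"})
        phones.append({"phone_id": f"P{pid}", "phone_number": number})

print(customer_phones)
print(phones)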

Second Normal Form

A database is considered in the second normal form if it meets the following conditions:

  • It is in 1NF.
  • Non-key attributes are fully functionally dependent on the entire primary key. This condition is particularly relevant when you have a composite key. (In Salesforce, the ID field is always the primary key, and composite keys are not used, which means this condition is always met.) For example, the following table has the composite key Customer ID + Site ID, but Address City depends on only a subset of that key (Site ID alone), as can clearly be seen in the second and fourth rows. Therefore, this table is not in 2NF:

This shows a table that does not meet 2NF, listing Customer ID, Site ID, and Address City.

Figure 2.8 – A table that does not meet the second normal form (2NF)

To bring the table into 2NF, you must split the table into the following two tables:

This shows the table modified to meet the second normal form (2NF). The table from Figure 2.8 is split into two tables: one lists Customer ID and Site ID, and the other lists Site ID and Address City.

Figure 2.9 – The table from Figure 2.8 modified to meet the second normal form (2NF)
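
A minimal sketch of the same split (plain Python, hypothetical values); after the split, each city is stored once per site rather than once per customer-site pair:

# Not in 2NF: the composite key is (customer_id, site_id), but
# address_city depends on site_id alone, so it repeats across customers.
not_2nf = [
    {"customer_id": "C1", "site_id": "S1", "address_city": "London"},
    {"customer_id": "C2", "site_id": "S1", "address_city": "London"},
    {"customer_id": "C1", "site_id": "S2", "address_city": "Paris"},
]

# In 2NF: split so every non-key attribute depends on the whole key.
customer_sites = [{"customer_id": r["customer_id"], "site_id": r["site_id"]} for r in not_2nf]
site_cities = {r["site_id"]: r["address_city"] for r in not_2nf}

print(customer_sites)
print(site_cities)   # {'S1': 'London', 'S2': 'Paris'}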

Third Normal Form

A database is considered in the third normal form if it meets the following conditions:

  • It is in 2NF.
  • Non-key attributes are not transitively dependent on the primary key. Take the following table as an example. The ID field is the primary key, and the table is in 1NF and 2NF. The Name, Partner Number, and Bank Code fields are functionally dependent on the ID field. However, the Bank Name field depends on the Bank Code field, and thus only transitively on ID. Therefore, this table is not in 3NF:

This shows a table that does not meet 3NF, listing ID, Name, Partner Number, Bank Code, and Bank Name.

Figure 2.10 – A table that does not meet the third normal form (3NF)

To bring this table into 3NF, you must split the table into the following two tables:

This shows the table modified to meet the third normal form (3NF). The table from Figure 2.10 is split into two tables: one lists ID, Name, Partner Number, and Bank Code, and the other lists Bank Code and Bank Name.

Figure 2.11 – The table from Figure 2.10 modified to meet the third normal form (3NF)
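
The same fix in sketch form (plain Python, hypothetical values); the bank name moves into its own table keyed by bank code:

# Not in 3NF: bank_name depends on bank_code, not directly on the key (id).
not_3nf = [
    {"id": 1, "name": "Alice", "partner_number": "P-10", "bank_code": "B01", "bank_name": "First Bank"},
    {"id": 2, "name": "Bob", "partner_number": "P-20", "bank_code": "B01", "bank_name": "First Bank"},
]

# In 3NF: move the transitive dependency into its own table.
partners = [{k: r[k] for k in ("id", "name", "partner_number", "bank_code")} for r in not_3nf]
banks = {r["bank_code"]: r["bank_name"] for r in not_3nf}

print(partners)
print(banks)   # {'B01': 'First Bank'}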

Now that you have covered the three main normal forms, you can explore the types of relationships that can be created between different database tables.
