Mastering Apex Programming

Common Apex Mistakes

In this chapter, we will cover common mistakes made when writing Apex code and the defensive programming techniques that avoid them. Before we move into more advanced topics within the Apex language, it is important that we first establish a common grounding by removing common errors from our code. For some of you, this material may simply be a refresher or a reiteration of familiar practices. In general, these mistakes form the basis of many of the common exceptions and errors that I have seen in my time working with the Salesforce platform and so are worth addressing upfront. By the end of this chapter, you will hopefully be more aware of when these mistakes may occur within your code and be able to develop in a defensive style to avoid them.

In this chapter, we will cover the following topics:

  • Null pointer exceptions
  • Bulkification of Apex code to avoid governor limits
  • Hardcoding references to specific object instances
  • Patterns to deal with managing data across a transaction

Null pointer exceptions

Almost every developer working with the Salesforce platform will have encountered the dreaded phrase Attempt to de-reference a null object. At its heart, this is one of the simplest errors to both generate and handle effectively, but its error message can cause great confusion for new and experienced developers alike, as it is often unclear how the exception is occurring.

Let’s start by discussing in the abstract form how the error is generated. This is a runtime error, caused by the system attempting to read data from memory where the memory is blank. Apex is built on top of the Java language and uses the Java Virtual Machine (JVM) runtime under the hood. What follows is a highly simplified discussion of how Java manages memory, which will help us to understand what is happening behind the scenes.

Whenever an object is instantiated in Java, it is created and managed on the heap, which is a block of memory used to dynamically hold data for objects and classes at runtime. A separate area of memory, called the stack, stores references to these objects and instances. So, in simplistic terms, when you instantiate an instance of a Person class called paul, that instance is stored on the heap and a reference to this heap memory is stored on the stack, with the label paul. Apex is built on Java and compiles down to Java bytecode (this started after an update from Salesforce in 2012), and, although Apex does not utilize a full version of the JVM, it uses the JVM as the basis for its operations, including garbage collection and memory management.

With this in mind, we are now better able to understand how the two most common types of NullPointerException instances within Apex occur: when working with specific object instances and when referencing values within maps.

Exceptions on object instances

Let’s imagine I have the following code within my environment:

public class Person {
    public String name;
}
Person paul;

In this code, we have a Person class defined, with a single publicly accessible member variable. We have then declared a variable, paul, using this new data type. In memory, Salesforce now has a label on the stack called paul that is not pointing to any address on the heap, as paul currently has the value of null.

If we now attempt to run System.debug(paul.name);, we will get an exception of type NullPointerException with the message Attempt to de-reference a null object. What is happening is that the system is trying to use the paul variable to retrieve the object instance and then access the name property of that instance. Because the instance is null, the reference to this memory does not exist, and so a NullPointerException is thrown; that is, we have nothing to point with.

With this understanding of how memory management is working under the hood (in an approximate fashion) and how we are generating these errors, it is therefore easy to see how we code against them—avoid calling methods and accessing variables and properties on an object that has not been instantiated. This can be done by ensuring we always call a constructor when initializing a variable, as shown in the following code snippet:

Person paul = new Person();

In general, when developing, we should pay attention to any public methods or variables that return complex types or data from a complex type. A common practice is simply to instantiate new instances of the underlying object in the constructor for any data that may be returned before being populated.
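As a minimal sketch of this practice (the nicknames list and the nested PostalAddress class are hypothetical additions to our Person example), the constructor pre-populates the complex-typed members so that callers never receive null when reading them before they have been filled in:

public class Person {
    public String name;
    public List<String> nicknames;
    public PostalAddress homeAddress;

    public Person() {
        // Initialize complex-typed members up front so that code such as
        // paul.nicknames.size() never dereferences a null value
        nicknames = new List<String>();
        homeAddress = new PostalAddress();
    }

    public class PostalAddress {
        public String street;
        public String city;
    }
}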

Exceptions when working with maps

Another common way in which this exception presents itself is when working with collections and data retrieved from collections—most notably, maps. As an example, we may have some data in a map for us to use in processing unrelated records. Let’s say we have a Map<String, Contact> contactsByBadgeId instance that allows us to use an individual’s unique badge ID string to retrieve their contact record for processing. Let’s try to run the following:

String badBadgeId = 'THIS ID DOES NOT EXIST';
String ownerName = contactsByBadgeId.get(badBadgeId).FirstName;

Assuming that the map will not have the key value that badBadgeId is holding, the get method on the map will return null, and our attempt to access the FirstName property will be met with NullPointerException being thrown.

The simplest and most effective way to manage this is to wrap our method in a simple if block, as follows:

String badBadgeId = 'THIS ID DOES NOT EXIST';
if(contactsByBadgeId.containsKey(badBadgeId)) {
    String ownerName = contactsByBadgeId.get(badBadgeId).FirstName;
}

By adding this guard clause, we proactively filter out any keys that are not present in the map and so avoid the error.
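An equivalent guard, shown here as a sketch against the same hypothetical map, is to retrieve the value once and null-check the result, which also saves the second lookup performed by calling containsKey before get:

String badBadgeId = 'THIS ID DOES NOT EXIST';
Contact badgeOwner = contactsByBadgeId.get(badBadgeId);
if(badgeOwner != null) {
    String ownerName = badgeOwner.FirstName;
}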

As an alternative, if we were looping through a list of badge IDs, like this:

for(String badgeId : badgeIdList) {
    String ownerName = contactsByBadgeId.get(badgeId).FirstName;
}

We could also use the methods available on set collections to both potentially reduce our loop size and avoid the issue, as follows:

Set<String> badgeIdSet = new Set<String>(badgeIdList);
badgeIdSet.retainAll(contactsByBadgeId.keySet());
for(String badgeId : badgeIdSet) {
    String ownerName = contactsByBadgeId.get(badgeId).FirstName;
}

In the preceding example, we have filtered down the items to be iterated through to only those in the keySet instance of the map. This may not be possible in many instances, as we may be looping through a collection of a non-primitive type or a type that does not match our keySet instance. In these cases, our if statement is one solution. We can also use a feature called safe navigation to allow Apex to assist us in avoiding these issues.

Safe navigation operator

The methods shown in the preceding sections let us avoid a NullPointerException instance through the way we structure our code, which can also assist performance. For example, the second snippet, which loops through a pre-filtered list of IDs based on the map's keys, avoids unnecessary loop iterations and operations. Since the first edition of this book was published, Salesforce has added a safe navigation operator that removes the need for many of the null checks we previously had to perform.

The safe navigation operator (?.) will return null if a value is unavailable (when previously a NullPointerException instance would occur) and return a value if one is available. We can rewrite our badge ID retrieval code as follows:

String badgeId = 'THIS ID DOES NOT EXIST';
String ownerName = contactsByBadgeId.get(badgeId)?.FirstName;

In this instance, if badgeId is a key on the map and can be retrieved, then the FirstName value for the contact is assigned to ownerName. If the value of badgeId is not a valid key, then ownerName is set to null. We can see some more examples in the following code block:

Integer employeeCount = myAccount?.NumberOfEmployees; //returns null if myAccount is null
String contactName = paul?.getContactRecord()?.FirstName; //returns null if paul is null or getContactRecord() returns null
String accName = [SELECT Name FROM Account WHERE Account_Reference__c = :externalAccountKey]?.Name; //returns null if no Account record is found by the query

The safe navigation operator is extremely useful in helping developers minimize errors; however, it should not be considered a panacea. You may unintentionally cause further NullPointerException instances within your code by not correctly verifying whether a null value has been returned. In the following example, we take our badge ID code to retrieve the name of the badge's owner:

String ownerName = contactsByBadgeId.get(badBadgeId)?.FirstName;

We may then use this value within a component or page to display a greeting to the individual. If a null value has been returned, this can lead to some unexpected messages:

String ownerName = contactsByBadgeId.get(badBadgeId)?.FirstName;
String welcomeMessage = 'Hello there ' + ownerName;
//"Hello there null" would be displayed as a greeting.

It is important, therefore, that your development team agrees where in the code base the responsibility for these null checks lies, so that unintended consequences like this do not occur.
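One possible convention, sketched below with an illustrative fallback string, is for the code that builds the message to own the null check and substitute a sensible default:

String ownerName = contactsByBadgeId.get(badBadgeId)?.FirstName;
String welcomeMessage = String.isBlank(ownerName)
    ? 'Hello there'
    : 'Hello there ' + ownerName;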

In general, most NullPointerException instances occur when a premature assumption about the availability of data has been made—for example, that the object has been instantiated or that our map contains the key we are looking for. Trying to recognize these assumptions will assist in avoiding these exceptions, going forward. We can also use the safe navigation operator in instances where we want to ensure safety to allow the code execution to continue, but must then be aware of checking for null values within our code. With this in mind, let us now look at how we can effectively bulkify our Apex code.

Retrieving configuration data in a bulkified way

No book on Apex is complete without some discussion around bulkification. Bulkification is a requirement in Salesforce due to the governor limits that are imposed on developers because of the multi-tenant nature of the platform. Many developers who are new to the platform see the governor limits as a hindrance rather than a help. We will cover this in more detail in Chapter 18, Performance and the Salesforce Governor Limits. However, a common mistake that developers make on the platform is to not bulkify their code appropriately—particularly, triggers. It is also common for intermediate developers to not bulkify their non-trigger code appropriately either. We will discuss the bulkification of triggers more explicitly in Chapter 3, Triggers and Managing Trigger Execution, and will cover querying and Data Manipulation Language (DML) within loops later in this chapter. Firstly, however, I want to discuss bulkifying the retrieval of data that is not typically stored in a custom or standard object—configuration data.

Hot and cold data

I want to begin this section with a discussion on hot and cold data within the system, as well as the implications this has for bulkification. For all of the data within our system, let us assume that the data starts off with a temperature of 0 (our scale should not matter, but let us assume we are using Celsius, where 0 is freezing and 100 is boiling). Every time our data is written to, its temperature increases by one degree, and if an entire day goes by without it being updated, it drops by one degree. If we were to run this thought experiment across our data, we would then obtain a scale for each data type we are retrieving, where the data would range from very cold (that is, hardly ever written to) through to extremely hot (edited multiple times a day).

For most objects in Salesforce, the temperature graph of the data would follow a long-tailed distribution: an initial peak of activity as a record is created and then updated, until it reaches a stage in its life cycle where it is rarely viewed and is kept mainly for auditing and reporting purposes. Think of an opportunity record, for example: a lot of initial activity until it is closed as won or lost, after which it is used mainly for reporting. When working with these records in Apex, we will need to ensure we query for them to get the latest version for accurate and up-to-date data. As we are in most cases not writing Apex to work on "cold" instances of this data, we need to be aware of the fact that these records may change during the scope of a transaction due to an update we are making. Our normal bulkification practices, discussed next, will help us manage this.

What about data that is truly cold—that is, created once and then, for all intents and purposes, never updated? Think of a custom metadata record that holds some configuration used in an Apex process. Such information is created and then only updated when the process itself changes. For such data, we actively want to avoid querying multiple times during a transaction, yet ensure that it can be made available across the entire transaction as needed. As Salesforce applications have grown and more custom configuration has been added to make the applications more dynamic and easier to update, more organizations (orgs) have deployed custom metadata, custom settings, and custom configuration objects (although the latter are now largely superseded by custom metadata and custom settings). How do we as developers manage retrieving this data in a manner that is bulkified and that allows us to reuse this data across a transaction?

Retrieving and sharing data throughout a transaction

For this use case, a developer can either use the singleton pattern or make appropriate use of static variables to manage the retrieval of this data. We could implement a singleton utility class, as follows:

public class ExampleSingleton {
    private static ExampleSingleton instance;
    private Example__mdt metadata;
    public static ExampleSingleton getInstance() {
        if(instance == null) {
            instance = new ExampleSingleton();
        }
        return instance;
    }
    private ExampleSingleton() {
    }
    public Example__mdt getMetadata() {
        if(metadata == null) {
            metadata = [SELECT Id FROM Example__mdt LIMIT 1];
        }
        return metadata;
    }
}

Our private static member variable, instance, holds the unique instance of the ExampleSingleton class for the transaction. The getInstance method is the only way of either instantiating or retrieving this instance for use, which we have ensured by making the default constructor private. We can use the returned instance to retrieve our Example__mdt record via the getMetadata method. The actual Example__mdt record is stored as a private member of the class, so this state is abstracted away from the calling code.

The benefit of such an approach is that we can encapsulate both our data and its workings and ensure that we are only ever retrieving the information from the database once. This singleton could be used to hold many different types of data as needed so that the entire transaction can scale in its usage of such data in a common way.
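As a brief illustration of how calling code might use this class, every caller goes through getInstance, and only the first call to getMetadata in the transaction touches the database:

Example__mdt config = ExampleSingleton.getInstance().getMetadata(); // performs the query
Example__mdt sameConfig = ExampleSingleton.getInstance().getMetadata(); // served from memory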

Alternatively, we could implement a static class such as the following one:

public class ExampleStatic {
    private static Example__mdt metadata;
    public static Example__mdt getExampleMetadata() {
        if(metadata == null) {
            metadata = [SELECT Id FROM Example__mdt LIMIT 1];
        }
        return metadata;
    }
    private ExampleStatic() {
    }
}

Again, we have ensured that our metadata will only be loaded once across the transaction, and we have a smaller code footprint to manage. Note that this is not a true static class, as these are not available in Apex; however, it is a close enough analogy that we can treat it as one (it cannot be externally instantiated due to its private constructor).

For cold data such as custom metadata, custom settings, or any custom configuration in objects, the use of a singleton or a static class can greatly improve bulkification. I have seen instances in production where the same set of metadata records was retrieved multiple times during a transaction as the code began to interact by recursively firing triggers through a combination of updates.

Singleton versus static class

It is a fair question to ask right now whether a singleton or a static class should be used for such utility classes. The answer (as with all good questions) is it depends. While both behave the same from our perspective in terms of retrieving data only once, singletons can be passed around as object instances for use across the application. They can also extend other classes and implement interfaces, with the full range of Apex's object-oriented (OO) features. Static classes cannot do this, but they are useful in lightweight implementations where the OO features of the language are not required.
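To illustrate that last point, here is a sketch (the ConfigurationSource interface and ConfigConsumer class are hypothetical additions, not part of the earlier example) of how the singleton could implement an interface so that consumers depend on an abstraction, which a test double could also satisfy:

// Hypothetical interface; ExampleSingleton could implement it by adding
// "implements ConfigurationSource" to its class declaration
public interface ConfigurationSource {
    Example__mdt getMetadata();
}

// Consumers depend on the abstraction, so either the real singleton or a
// test double can be passed in
public class ConfigConsumer {
    private ConfigurationSource source;
    public ConfigConsumer(ConfigurationSource source) {
        this.source = source;
    }
    public Id getConfigId() {
        return source.getMetadata().Id;
    }
}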

As we will discuss in the final section of the book when speaking about performance, we must always be aware of the trade-offs we are making when we are implementing a particular performance improvement. In this instance, we have statically cached our data to reduce the number of queries we are making within our code. In doing so, we have increased the amount of data stored on the heap. The heap is also a finite resource with a fixed governor limit, and as such, statically caching too much data can cause us to overuse the heap and fall foul of that governor limit. Thus, we must always be careful of the choices we are making and the implications of any performance improvement. We will cover this in much more detail in the final section of the book, but I wanted to ensure this was called out now to highlight the fact there is always a trade-off.
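If you want to see this trade-off in a debug log, the standard Limits methods report heap consumption against the governor limit; a rough sketch using the earlier singleton might look like this:

Example__mdt config = ExampleSingleton.getInstance().getMetadata();
// Log how much of the heap governor limit is currently in use
System.debug('Heap used: ' + Limits.getHeapSize()
    + ' of ' + Limits.getLimitHeapSize() + ' bytes');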

Bulkification – querying within loops

Another common mistake seen in Apex is querying for data within loops. This is different from repeatedly querying for the same data, as discussed previously; instead, it concerns performing a query with a (potentially) unique outcome for each iteration of a loop. This is particularly common within triggers.

When working with a trigger, you should always prepare your code to handle a batch of 200 records at once as a minimum. Although the limit for the number of items passed into a single trigger context is 200 for standard and custom objects, processes invoked via the Bulk API or in a batch Apex context may call a trigger multiple times in a transaction. This is true regardless of whether or not you believe the tool will only pass records to the trigger individually; all that is required is for an enterprising administrator to create a flow that updates multiple records that fire your trigger, or for a large volume of data to be loaded, and you will have issues.

Consider the following code block, in which we loop through each contact provided in a Contact trigger and retrieve the related Account record to read some information from it:

trigger ContactTrigger on Contact (before insert, after insert) {
    switch on Trigger.operationType {
        when BEFORE_INSERT {
            for(Contact con : Trigger.new) {
                Account acc = [SELECT UpsellOpportunity__c FROM Account
                               WHERE Id = :con.AccountId];
                con.Contact_for_Upsell__c = acc.UpsellOpportunity__c != 'No';
            }
        }
        when AFTER_INSERT {
            //after insert code
        }
    }
}

This simple trigger will set the Contact_for_Upsell__c field to true if the account is marked as having any upsell opportunity.

There are a couple of fairly obvious problems with the way we are querying here. Firstly, this is not bulkified—if we have 200 records passed into the trigger (over 100 records, in fact), we will break the governor limit for Salesforce Object Query Language (SOQL) queries and receive an exception that we cannot handle. Secondly, this setup is also inefficient as it may retrieve the same account record from the database twice.

A better way to manage this would be to gather all of the account IDs in a set and then query once. Not only will this avoid the governor limit, but it will also avoid us querying for duplicate results. An updated version of the code to do this is shown here:

trigger ContactTrigger on Contact (before insert, after insert) {
    switch on Trigger.operationType {
        when BEFORE_INSERT {
            Set<Id> accountIds = new Set<Id>();
            for(Contact con : Trigger.new) {
                accountIds.add(con.AccountId);
            }
            Map<Id, Account> accountMap = new Map<Id, Account>(
                [SELECT UpsellOpportunity__c FROM Account WHERE Id IN :accountIds]);
            for(Contact con : Trigger.new) {
                con.Contact_for_Upsell__c =
                    accountMap.get(con.AccountId).UpsellOpportunity__c != 'No';
            }
        }
        when AFTER_INSERT {
            //after insert code
        }
    }
}

In this code, we declare a Set<Id> called accountIds to hold the account ID for each contact without duplicates. We then query our Account records into a Map<Id, Account> so that when looping through each contact for a second time, we can set the value correctly.

Some of you may now be wondering whether we have merely moved our performance issue from having too many queries to having multiple loops through all the data. In Chapter 19, Performance Profiling, we will cover the use of big-O notation in detail when discussing scaling. However, to touch on the subject here, it is worth doing some rudimentary analysis. Looping through these records (a maximum of 200) is extremely quick on the central processing unit (CPU) and is an inexpensive operation. It is also an operation that scales linearly as the number of records within the trigger grows. In our original trigger, for each new record, we had the following:

  • One loop iteration
  • One query

This scaled linearly at a rate of 1x for both resources—that is, doubling the items doubled the resources being utilized, until a point of failure with a governor limit (in this instance, queries). In our new trigger structure, we have the following for each record:

  • Two loop iterations (one for each for loop)
  • Zero additional queries

Our new resource usage scales linearly for loop iterations but is constant for queries, which are a more limited resource. As we will see later, this is the type of optimization we want within our code. It is, therefore, imperative that whenever we are looping through records and wish to query related data, we do so in a bulkified manner that, wherever possible, performs a single query for the entire loop.

Bulkification – DML within loops

Similar to the issue of querying in loops is that of performing DML within loops. The limit for DML statements is higher than that for SOQL queries at the time of writing, so the problem is unlikely to present itself as early; however, it has the same root cause and the same solution.

Take the following code example, in which we are now in the after insert context for our trigger:

trigger ContactTrigger on Contact (before insert, after insert) {
    switch on Trigger.operationType {
        when BEFORE_INSERT {
            //previous trigger code
        }
        when AFTER_INSERT {
            for(Contact con : Trigger.new) {
                if(con.Contact_for_Upsell__c) {
                    Task t = new Task();
                    t.Subject = 'Discuss opportunities with new contact';
                    t.OwnerId = con.OwnerId;
                    t.WhoId = con.Id;
                    insert t;
                }
            }
        }
    }
}

Here, we are creating a Task record for the owner of any new contact that is marked for upsell to reach out to them and discuss potential opportunities. In our worst-case bulk scenario here, we have 200 contacts that all have the Contact_for_Upsell__c checkbox checked. This will lead to each iteration firing a DML statement that will cause a governor limit exception on record 151. Again, using our rudimentary analysis, we can see that for each additional record on the trigger, we have an additional DML statement that scales linearly until we breach our limit.

Instead, whenever making DML statements (particularly in triggers), we should ensure that we are using the bulk format and passing lists of records to be manipulated into the statement. For example, the trigger code should be written as follows:

trigger ContactTrigger on Contact (before insert, after insert) {
    switch on Trigger.operationType {
        when BEFORE_INSERT {
            //previous trigger code
        }
        when AFTER_INSERT {
            List<Task> tasks = new List<Task>();
            for(Contact con : Trigger.new) {
                if(con.Contact_for_Upsell__c) {
                    Task t = new Task();
                    t.Subject = 'Discuss opportunities with new contact';
                    t.OwnerId = con.OwnerId;
                    t.WhoId = con.Id;
                    tasks.add(t);
                }
            }
            insert tasks;
        }
    }
}

This new code has a constant usage of DML statements, one for the entire operation, and can happily scale up to 200 records.
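If you want to confirm this behavior while debugging, the standard Limits methods report DML consumption against the governor limits; an illustrative check placed after the insert might look like this:

// One DML statement is consumed regardless of how many Task records were in
// the list, while the row count grows with the size of the list
System.debug('DML statements used: ' + Limits.getDmlStatements()
    + ' of ' + Limits.getLimitDmlStatements());
System.debug('DML rows used: ' + Limits.getDmlRows()
    + ' of ' + Limits.getLimitDmlRows());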

Hardcoding

The final common mistake I want to discuss here is that of hardcoding within Apex—particularly, hardcoding any type of unique identifier such as an ID or a name. For IDs, this mistake is probably quite obvious for most developers as it is well established that between different environments, including sandbox and production environments, IDs can—and should—differ. If you are creating a sandbox from a production environment, then at the time of creation, the IDs are synchronized for any data that is copied down to the sandbox environment. Following this, IDs do not remain synchronized between the two and are generated when a record is created within that environment.

Despite this, many developers, particularly those working within a single environment such as consultants or in-house developers, will hardcode certain IDs if needed. For example, consider the following code:

for(Account acc : Trigger.new) {
    if(acc.OwnerId == 'SOME_USER_ID') {
        break;
    }
    //do something otherwise
}

This code is designed to skip updates on Account records within our trigger context owned by a particular user, most commonly an application programming interface (API) user or an integration user. This pattern enables a developer to filter these records out so that if an update is coming via an integration, actions are skipped and the integration can update records unimpeded.

Should this user ID change, then we will get an error or issue here, so it is wise to remove this hardcoded value. Given that the ID for the user should be copied from production to any sandboxes, you may ask why this is needed. Firstly, there is no guarantee that the user will not be changed for the integration, going forward. Secondly, for development purposes, when initially writing this code, the user will not likely exist in the production org, and so, in your first deployment, you will have to tweak the code to be environment specific. Thirdly, this also limits your ability to test the code effectively, going forward. We will see how shortly.

As an update to this code, some may recommend making the following change (note that this is precisely the instance where we would extract this query to a utility class; however, we are inlining the query here for ease of display and reading):

for(Account acc : Trigger.new) {
    User apiUser = [SELECT Id FROM User WHERE Username = 'api.user@example.com'];
    if(acc.OwnerId == apiUser.Id) {
        break;
    }
    //do something otherwise
}

This code improves upon our previous code in that we are no longer hardcoding the user record ID, although we are still hardcoding the username. Again, should this change over time, we would still have an issue, such as when we are working in a sandbox environment and the sandbox name is appended to the username. In that case, the code would not be executable, including in a test context, without updating the record to be correct. This could be done manually every time a new sandbox is created, or through code in the test; however, that binds our test to this specific user, which is not good practice should the user change again.

In this instance, we should move the username string to a custom setting for flexibility and improved testability. If we define a custom setting that holds the value (for example, Integration_Settings__c), then we can easily retrieve the setting at runtime and use the stored username in our query, as follows:

for(Account acc : Trigger.new) {
    Integration_Settings__c setting = Integration_Settings__c.getInstance();
    List<User> apiUsers = [SELECT Id FROM User
                           WHERE Username = :setting.Api_Username__c LIMIT 1];
    if(acc.OwnerId == apiUsers[0]?.Id) {
        break;
    }
    //do something otherwise
}

Such a pattern allows us many benefits, as follows:

  • Firstly, we can now apply different settings across the org and profiles, should we so desire. This can be increasingly useful in orgs where multiple business units (BUs) operate.
  • Secondly, for testing, we can create a user within our test data and assign their username to the setting for the test within Apex (a sketch of such a test follows this list). This allows our tests to run independently of the actual org configuration and makes processes such as continuous integration (CI) simpler.
  • We have utilized our safe navigation operator and retrieved a list of users (limiting to a size of one) to ensure that if no such user is found, we avoid any other exceptions such as our NullPointerException instance, as discussed previously.

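The following is a minimal sketch of such a test. It assumes the Integration_Settings__c hierarchy custom setting and Api_Username__c field from the example above, a profile named Standard User, and an Account trigger containing the skip logic; the usernames are purely illustrative:

@IsTest
private class IntegrationUserSkipTest {

    @IsTest
    static void recordsOwnedByApiUserAreSkipped() {
        // Build a dedicated user to act as the integration user for this test
        Profile p = [SELECT Id FROM Profile WHERE Name = 'Standard User' LIMIT 1];
        User apiUser = new User(
            Alias = 'apiusr',
            Email = 'api.user@example.com',
            EmailEncodingKey = 'UTF-8',
            LastName = 'Integration',
            LanguageLocaleKey = 'en_US',
            LocaleSidKey = 'en_US',
            ProfileId = p.Id,
            TimeZoneSidKey = 'Europe/London',
            Username = 'api.user.' + System.currentTimeMillis() + '@example.com'
        );
        // Isolate setup-object DML in its own runAs block to avoid mixed DML errors
        System.runAs(new User(Id = UserInfo.getUserId())) {
            insert apiUser;
        }

        // Point the setting at the test user so the code under test resolves it
        // without depending on real org configuration
        insert new Integration_Settings__c(
            SetupOwnerId = UserInfo.getOrganizationId(),
            Api_Username__c = apiUser.Username
        );

        Test.startTest();
        Account acc = new Account(Name = 'Integration-owned account', OwnerId = apiUser.Id);
        insert acc;
        Test.stopTest();

        System.assertNotEquals(null, acc.Id, 'Account should have been inserted');
        // Further assertions would verify that the trigger logic was skipped
        // for this record
    }
}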
There is, however, another fundamental fix we should make on our code—have you spotted it?

We should not be doing a query in a loop! Congratulations to the eagle-eyed readers who have been paying attention throughout and spotted the issue already. It is important when looking at some code that we do not overly focus on a single issue and fail to spot another one that may be far more troublesome. Our final improved code would, then, look like this:

Integration_Settings__c setting = Integration_Settings__c.getInstance();
List<User> apiUsers = [SELECT Id FROM User
                       WHERE Username = :setting.Api_Username__c LIMIT 1];
for(Account acc : Trigger.new) {
    if(acc.OwnerId == apiUsers[0]?.Id) {
        break;
    }
    //do something otherwise
}

Finally, we could extract both of these steps (the retrieval of the custom setting and the user query) into a utility class, abstracting them away so that the entire transaction can use them as needed.
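A sketch of what that utility class might look like (the class and method names are illustrative) is shown here; it statically caches the resolved user ID so the setting lookup and the User query are only evaluated once per transaction:

public class IntegrationUserUtil {
    private static Id apiUserId;
    private static Boolean resolved = false;

    // Resolve the integration user once per transaction and cache the result,
    // combining the custom setting lookup and the User query
    public static Id getApiUserId() {
        if(!resolved) {
            Integration_Settings__c setting = Integration_Settings__c.getInstance();
            List<User> apiUsers = [SELECT Id FROM User
                                   WHERE Username = :setting.Api_Username__c LIMIT 1];
            apiUserId = apiUsers.isEmpty() ? null : apiUsers[0].Id;
            resolved = true;
        }
        return apiUserId;
    }
}

The trigger condition then collapses to a single call, such as if(acc.OwnerId == IntegrationUserUtil.getApiUserId()).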

Summary

In this chapter, we reviewed some of the common Apex mistakes made by developers and how to resolve them. For many of you, the topics presented within this chapter will be familiar, although they are hopefully still a worthwhile refresher with some additional and helpful context.

We began this book with this chapter, as it is imperative we consider how to remove these common mistakes before we look at how to extend our knowledge around the rest of the platform’s features. We also tried to cover in greater detail than is typical the reasoning behind some of these errors, either from the perspective of the underlying machine, as with the NullPointerException discussion we started with, or via the impact upon developer and deployment productivity, such as our final discussion on hardcoding.

To start mastering any language means beginning by removing the minor, common niggles that cause issues and easily resolvable bugs. Hopefully, with a deeper or broader understanding of these issues and how they arise, you can more readily spot and rectify them in advance. That is not to say you will stop making them; I still find these bugs and issues in my own code on occasion, but I am able to recognize them in advance as I develop and stop them as a matter of routine and habit.

Now that we have discussed these common problems and how to resolve them, we will move on to more detailed and prescriptive debugging in the next chapter.


Key benefits

  • Understand the various asynchronous processing options in Apex and how to use them to scale your application
  • Learn how to integrate external systems with Apex through both inbound and outbound integrations
  • Profile and improve the performance of your Apex code

Description

Applications built on the Salesforce platform are now a key part of many organizations' IT systems, with more complex and integrated solutions being delivered every day. As a Salesforce developer working with Apex, it is important to understand the range and variety of tools at your disposal, how and when to use them, and what the best practices are. This revised second edition includes a complete restructuring and five new chapters filled with detailed content on the latest Salesforce innovations, including integrating with DataWeave in Apex and utilizing Flow and Apex together to build scalable applications with administrators.

This Salesforce book starts with a discussion around common mistakes, debugging, exception handling, and testing. The second section focuses on the different asynchronous Apex programming options to help you build more scalable applications, before the third section moves on to integrations, including working with platform events and developing custom Apex REST web services. Finally, the book closes with a section dedicated to profiling and improving the performance of your Apex code, including architecture considerations.

With code examples used to facilitate discussion throughout, by the end of the book you will be able to develop robust and scalable applications in Apex with confidence.

Who is this book for?

Developers who have basic to intermediate Apex programming knowledge and are interested in mastering Apex programming while exploring the Salesforce platform. This book is also ideal for experienced Java or C# developers who are moving to Apex programming for developing apps on the Salesforce platform. Basic Apex programming knowledge is assumed.

What you will learn

  • Understand common Apex mistakes and how to avoid them through best practices
  • Learn how to debug Apex code effectively
  • Discover the different asynchronous Apex options, common use cases, and best practices
  • Extend the capabilities of the Salesforce platform with the power of integrations
  • Parse and manipulate data easily with the use of DataWeave functions
  • Develop custom Apex REST services to allow inbound integrations
  • Profile and improve the performance of your Apex code

Product Details

Publication date: Nov 29, 2023
Length: 394 pages
Edition: 2nd
Language: English
ISBN-13: 9781837630424
Vendor: Salesforce


Table of Contents

Section 1: Triggers, Testing, and Security
Chapter 1: Common Apex Mistakes
Chapter 2: Debugging Apex
Chapter 3: Triggers and Managing Trigger Execution
Chapter 4: Exceptions and Exception Handling
Chapter 5: Testing Apex Code
Chapter 6: Secure Apex Programming
Section 2: Asynchronous Apex
Chapter 7: Utilizing Future Methods
Chapter 8: Working with Batch Apex
Chapter 9: Working with Queueable Apex
Chapter 10: Scheduling Apex Jobs
Section 3: Integrations
Chapter 11: Integrating with Salesforce
Chapter 12: Using Platform Events
Chapter 13: Apex and Flow
Chapter 14: Apex REST and Custom Web Services
Chapter 15: Outbound Integrations – REST
Chapter 16: Outbound Integrations – SOAP
Chapter 17: DataWeave in Apex
Section 4: Apex Performance
Chapter 18: Performance and the Salesforce Governor Limits
Chapter 19: Performance Profiling
Chapter 20: Improving Apex Performance
Chapter 21: Performance and Application Architectures
Index
Other Books You May Enjoy

