Andy in the Cloud

From BBC Basic to Force.com and beyond…



New Book – Salesforce Platform Enterprise Architecture 4th Edition

It has been nearly 10 years since the first edition of my book, Force.com Enterprise Architecture, was published back in 2014, and clearly a lot has moved on since then, and not just in platform capabilities. Keeping pace with branding changes resulted in the second edition being called Lightning Platform Enterprise Architecture. This latest 4th edition, Salesforce Platform Enterprise Architecture, has probably the most accurate and, I hope, most enduring title! This 4th edition is also nearly twice the size of the first, coming in at 681 pages, so much so that its 16 chapters are now split over 4 parts to make digesting the book easier. As is tradition by now, this blog will cover some of the latest updates and additions and what to expect in general from the book.

Links: Amazon | Packt Publishing

Motivation and what to expect…

I first came to the platform with a lot of experience and understanding of architecture principles from other platforms, including platform-agnostic patterns such as those of Martin Fowler. Applying such patterns to the platform requires some nuance at times, so this book is about enterprise architecture considerations as applied to the Salesforce Platform, and much more besides in respect to its capabilities that accelerate traditional application development concerns and tasks for you – and can hinder if not considered upfront in your architecture design.

The book dedicates 3 of its 16 chapters to Apex code separation of concerns and how to lay out your code, while the rest covers the full spectrum of application concerns such as designing for code and database performance, profiling, testing, data storage design, building APIs, extensibility and more, and of course how to write much less code by harnessing the declarative aspects of the platform. All of this is driven throughout the book via a fictitious reference application known as FormulaForce, based on Formula1 motor racing, with full source code included. Also included are numerous tips, tricks and info blocks dotted throughout the pages that highlight learnings from my own experiences, and latterly those of the many amazing reviewers.

As good as the platform is at abstracting away a huge amount of the complexity of managing and securing modern-day applications for you, there is no escaping having to fully understand good architecture principles that would, and do, in most cases apply elsewhere, such as query optimization and indexing.

If you are developing a lot on the Salesforce Platform, have a few years' experience and are passionate about creating applications that are enduring and empathetic to the platform itself – this book is for you. Read on for further highlights!

Is it for ISVs or anyone?

This edition is the first to split out the motivations of packaging for internal distribution within your own company vs distributing a commercial solution on AppExchange. Regardless of which motivation fits you, the chapters call out which bits are relevant and which are not. Even if you are not into packaging at all right now, much of the content on Apex, Lightning, Flow, APIs and so on is of course agnostic to how you choose to distribute your application. Regardless of who you are sharing it with, the book gives you an appreciation of the advantages and limits of package-based distribution, which in my view is a key aspect of your overall architecture considerations checklist.

Have the Apex coding patterns evolved given recent features?

The advent of user mode capabilities has had a big impact on the approach taken within the sample application of the book when it comes to using the Apex Enterprise Patterns Library (fflib), specifically in how the Unit of Work and Selector patterns are used when updating and querying for data respectively.
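
To give a flavour of what that means at the platform level, below is a minimal sketch of the user mode features themselves (not the fflib APIs that now wrap them): queries can run WITH USER_MODE and DML can be performed with an explicit access level.

// Minimal sketch of the platform's user mode features (not the fflib API itself)
// Query runs with the running user's object, field and sharing access
List<Account> teams = [SELECT Id, Name FROM Account WITH USER_MODE];

// DML honouring the running user's object and field-level security
Account team = new Account(Name = 'FormulaForce Demo Team');
Database.insert(team, AccessLevel.USER_MODE);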

Additionally, the Domain pattern chapter now reflects on how Domain logic can, if desired, be split from Trigger logic into separate classes, and thus introduces a new pattern (SDCL) to capture this approach. That's not to say the original model has gone, it is still very much present, but it felt appropriate to include something in the book that reflects how the library usage has also evolved over the years – thanks to my good friend John M. Daniel for pushing for this!
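
As a rough, hypothetical illustration of the general idea (these are not the book's actual SDCL classes, just invented names), the split keeps trigger wiring in one class and pure business logic in another:

// Hypothetical names for illustration only, not the book's actual classes
public class RaceTriggerHandler {
    // Trigger lifecycle concerns only, delegating to the domain class
    public void onAfterUpdate(List<Race__c> newRecords, Map<Id, Race__c> oldMap) {
        new RaceDomain(newRecords).awardChampionshipPoints();
    }
}

public class RaceDomain {
    // Pure business logic, reusable outside of any trigger context
    private List<Race__c> records;
    public RaceDomain(List<Race__c> records) { this.records = records; }
    public void awardChampionshipPoints() { /* ... */ }
}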

Apply skills in other languages and additional scaling options

A brand new chapter in the book is dedicated to how Heroku and Functions can be used to leverage existing skills, or indeed investments in open source or internal code, written in other languages such as Java or Node.js. These technologies also open up further scaling options. The chapter breaks down the approaches using the book's sample application as a basis, continuing the Formula1 motor racing theme by introducing Web and API interfaces for race fans and vendors (an authentication scenario) to interact with the application backend APIs. Later in the book, the integration chapter doubles down further on authoring APIs via the OpenAPI standard and how that simplifies integration with the platform.

Lightning Web Components and Lightning Experience expansion

Since the third edition, all but one of the Aura components have been removed, as many only existed at the time as workarounds to the lack of LWC-enabled integration points. Two large chapters are dedicated to the Lightning framework itself: how it handles separation of concerns, eventing, security and its alignment with the industry Web Components standard. They complete with a section covering the ever expanding possibilities to extend vs build when thinking about your application's user experience, regardless of whether that's desktop, mobile or driven through the native UIs and tools such as Lightning Pages or Flow. And yes, I still love the Utility Bar integration the most – as you can see from the number of historic posts and tools on the topic here.

APIs, APIs and APIs…

Many years ago a product manager once thanked me for pushing an API strategy internally, since they appreciated first hand, as a past buyer, that APIs are effectively an insurance policy for buyers when it comes to unexpected feature gaps during or after implementation. Thus several of the chapters highlight the many Salesforce APIs in contextual ways.

Though it's not until the integration and extensibility chapter that the book really doubles down on building your own application APIs: versioning, naming and design, and the value they bring to your fellow developers, customers and/or partner ecosystems. External Services is, for me, one of the most impressive aspects of integrating with the Salesforce Platform, and it was my pleasure to update the book with a refreshed Heroku-based API leveraging OpenAPI to get the full power of the feature – the two are a perfect combo!

And …

This blog is already in danger of becoming chapter-sized itself, so I will leave it here with the above highlights, and before I close, share my deepest thanks to this edition's reviewers John M. Daniel, Sebastiano Costanzo, Arvind Narasimhan and foreword author Daniel J. Peter.

I will be popping up in the not too distant future on ApexHours (https://www.apexhours.com/) and perhaps other locations to talk in more depth about the book and likely other side topics of interest. I will update this blog as those resources arrive. Thank you all in the meantime, the Salesforce community is the best and remains very close to my heart, enjoy!

AndyInTheCloud

P.S. Suffice to say this 4th edition is an evolution of the prior three books! I would also like to thank the following fine folks for their support as past reviewers and foreword authors!

Past Reviewers and Forewords:
Matt Bingham, Steven Herod, Matt Lacey, Andrew Smith, Rohit Arora, Karanraj Samkaranarayanan, Jitendra Zaa, Joshua Burk, Peter Knolle, John Leen, Aaron Slettenhaugh, Avrom Roy-Faderman, Michael Salem, Mohith Shrivastava and Wade Wegner



The Future of DLRS

The Salesforce platform has long had a place in my heart for the empowerment it brings to people who like to create. As a developer I love to create and feel the satisfaction of helping others, and with Salesforce I can create without code as well. Now, as much as I love the declarative side of building, I do love to code!

In 2013 I found a way to magnify this feeling by closing the gap between coding and helping other creators, also known as Admins. This blog talks about a big next step in the future of the Declarative Lookup Rollup Summaries tool, or “Dolores” (short for DLRS) as the community refers to it – and how you can help.

SUPPORT: Declarative Lookup Rollup Summary (DLRS) Tool: This tool is open source and is supported by the community in a volunteer capacity. Please post your questions to the Trailblazer Community Group. Thank you!

Memory Lane…

DLRS was somewhat born out of a solution looking for a problem. Around the same time in 2013 I was playing around with the Metadata API, automating Setup aspects via Apex and building on the platform. In doing so I was discovering some of its declarative gaps. One such gap was the inability to define rollups over relationships that are not master-detail – resolving this within the platform itself required Apex and still does today. However, every now and again I tend to somehow see connections between things that are not always obvious – and maybe a little crazy – and so I wondered: what if an Admin tool generated and managed the code for you?

I am also a frugal developer who likes to focus on the job at hand and thus looks to open source when I can. After a quick internet search I found Abhinav Gupta's open source Apex library LREngine, also known as Salesforce Lookup Rollup Summaries. This still required a developer to write an Apex Trigger, and it also lacked a way to configure it without a developer. And so the neurons fired and the “Declarative” Lookup Rollup Summaries tool was born! Over time LREngine received a number of enhancements via DLRS requirements and optimizations, and the ways in which rollups could be invoked and configured were expanded around the library. The rest, as they say, is history!

DLRS Stats and a Realization…

There have been 39 releases to date, and I am proud to have had 18 GitHub contributors supporting the tool with amazing features, fixes and improvements, along with an amazing Chatter Group on the Trailblazer Community with over 1140 members. DLRS has been installed in tens of thousands of orgs at this point. When I started the Chatter group I tried my best to answer each and every question. What I started to find is that the community began to help the community – superstars like Jon LaRosa, Dan Donin, Jim Bartek and many others are so engaged and supportive it's wonderful to see.

The reality though is that over the past few years my time and focus on the tool has not kept pace with the increasing and ongoing demand for it. Squeezing coding into holiday time does not feel like enough, and the community and users of this tool deserve more. And so with mixed feelings I sent the following tweet – well actually my wife pressed the send button for me – she knows how much I love this tool and generally knows what's best – so here we are!

DLRS needs a team, and you can help

TL;DR: If you'd like to join a team of community members helping to ensure the future of DLRS, email sfdo-opensource@salesforce.com for more information on how you can help.

It's very important to me for the tool to remain open source, free to the community, and to get the love and attention it deserves. There are a number of duties in managing it that cause me to become a bit of a bottleneck, such as packaging, and also some things that are only in my head around testing and the code base. So I originally reached out for a lead developer to help out.

I had some great responses to the Twitter post, but it was the suggestion to have the project managed as a community team, as part of the Salesforce Open Source Commons Program, which showed me another path – one that could ensure DLRS continues to thrive and expand well into the future.

The OSC program provides a framework that wraps a great process and set of tools around open source projects, in partnership between community leaders and Salesforce. Learning about the OSC program made me realize that to ensure DLRS would truly be a sustainable and trusted package for its tens of thousands of users, more than a new lead developer would be needed. Maintaining a package like DLRS should be done by a team of volunteers, including those who can help others with support questions, those that want to help with docs, those that like to manage feature backlogs and more automated release processes – the list goes on. There are already eight open source projects involved with the program, ranging from managed packages that help Naval volunteers (Ombudsmen) with case management, to product support help videos, to DEI ecosystem improvements. The program hosts multiple training opportunities and very popular Community Sprint events (like hackathons) throughout the year to bring exposure and new volunteers to projects.

Photo caption: Open Source Community Sprint participants, Philadelphia, October 2019

I have met with Cori O'Brien, Senior Manager of the Open Source Commons, who is very keen to share the benefits of the program further with the DLRS community, and especially those that are keen to help in whatever way they can. If you would like to help or just learn more about the program, please drop an email to sfdo-opensource@salesforce.com and we will add you to a kick-off meeting to be held mid-August.

Meanwhile, thank you all for the support you have given this tool – it's been truly something I will be proud of for the rest of my life. I am very excited about this next step and rest assured I will be around to continue to contribute to the project – but most importantly so will many others!

SUPPORT: Declarative Lookup Rollup Summary (DLRS) Tool: This tool is open source and is supported by the community in a volunteer capacity. Please post your questions to the Trailblazer Community Group. Thank you!



My Top 5 Apex APIs and Bonus

As readers of this blog will already have determined, I love APIs! So when I was approached by the team at 100 Days of Trailhead to share my Top 5 “things”, my mind naturally fell to APIs. This is a short blog post because effectively the brief was a 10 minute video! Well, ok, I ran a bit over in the end, so it's actually 15 minutes.

In the video I cover 5 Apex APIs and a Bonus API at the end. Thank you to 100 Days of Trailhead for asking me to contribute, and please do check out all the other amazing content they have on their website! You can watch the video below (or link here) and find full source code in this GitHub repository.

https://100daysoftrailhead.com/
/**
 .----------------. 
| .--------------. |
| |   _______    | |
| |  |  _____|   | |
| |  | |____     | |
| |  '_.____''.  | |
| |  | \____) |  | |
| |   \______.'  | |
| |              | |
| '--------------' |
 '----------------' 
 */

//
// Firing Platform Events from Batch Apex
//
public with sharing class Five
    implements Database.Batchable<SObject>, Database.RaisesPlatformEvents {
    // Batchable methods omitted; implementing Database.RaisesPlatformEvents
    // means unhandled exceptions in this batch fire BatchApexErrorEvent
}

 .----------------. 
| .--------------. |
| |   _    _     | |
| |  | |  | |    | |
| |  | |__| |_   | |
| |  |____   _|  | |
| |      _| |_   | |
| |     |_____|  | |
| |              | |
| '--------------' |
 '----------------' 

//
// Request Class - Request Id
//
Request reqInfo = Request.getCurrent();
String currentRequestId = reqInfo.getRequestId();


 .----------------. 
| .--------------. |
| |    ______    | |
| |   / ____ `.  | |
| |   `'  __) |  | |
| |   _  |__ '.  | |
| |  | \____) |  | |
| |   \______.'  | |
| |              | |
| '--------------' |
 '----------------' 
 
//
// Request Class - Quiddity
//
// Quiddity tells you how the current request was invoked
Quiddity context = Request.getCurrent().getQuiddity();
switch on context {
    when SYNCHRONOUS {
        RequestOrderProcessor = new TriggerOrderProcessing();
    } when BULK_API {
        RequestOrderProcessor = new BulkAPIOrderProcessing();    
    } when else {
        RequestOrderProcessor = new BaseOrderProcessing();
    }
}        

 .----------------. 
| .--------------. |
| |    _____     | |
| |   / ___ `.   | |
| |  |_/___) |   | |
| |   .'____.'   | |
| |  / /____     | |
| |  |_______|   | |
| |              | |
| '--------------' |
 '----------------' 

// 
// Send Notifications
//

Messaging.CustomNotification notification = new Messaging.CustomNotification();
notification.setTitle('Amazing new order ' + newOrder.OrderNumber);
notification.setBody('Check out this new order that has just come in!');
notification.setNotificationTypeId(notificationType.Id);
notification.setTargetId(newOrder.Id);
notification.send(new Set<String> { UserInfo.getUserId()} );


 .----------------. 
| .--------------. |
| |     __       | |
| |    /  |      | |
| |    `| |      | |
| |     | |      | |
| |    _| |_     | |
| |   |_____|    | |
| |              | |
| '--------------' |
 '----------------' 

Opportunity opp = new Opportunity();
opp.addError('Description', 'Error Message!'); 
List<Database.Error> errors = opp.getErrors();
System.assertEquals(1, errors.size());
System.assertEquals('Error Message!', errors[0].getMessage());
System.assertEquals('Description', errors[0].getFields()[0]);

/**
 * 
  ____                        
 |  _ \                       
 | |_) | ___  _ __  _   _ ___ 
 |  _ < / _ \| '_ \| | | / __|
 | |_) | (_) | | | | |_| \__ \
 |____/ \___/|_| |_|\__,_|___/                                             
 */

public class BonusFinalizer implements Finalizer {

    public void execute(FinalizerContext ctx) {
        // Runs after the Queueable job this was attached to completes,
        // even if that job threw an unhandled exception
    }
}


Reducing Field Validation Boilerplate Code

Boilerplate code is code that we repeat often with little or no variation. When it comes to writing field validations in Apex, especially within Apex Triggers, there are a number of examples of this – especially when checking whether a field value has changed and/or querying for related records, which also requires observing good bulkification best practices. This blog presents a small proof of concept framework aimed at reducing such logic, made possible in Winter '21 by a small but critical enhancement to the Apex runtime that allows developers to dynamically add field errors. Additionally, for those practicing unit testing, the enhancement also allows tests to assert such errors without DML!

This is a very basic demonstration of the new addError and getErrors methods.

Opportunity opp = new Opportunity();
opp.addError('Description', 'Error Message!'); 
List<Database.Error> errors = opp.getErrors();
System.assertEquals(1, errors.size());
System.assertEquals('Error Message!', errors[0].getMessage());
System.assertEquals('Description', errors[0].getFields()[0]);

However, in order to really appreciate the value these two features bring to frameworks, let us first review a use case and the traditional approach to coding such validation logic. Our requirements are:

  1. When updating an Opportunity validate the Description and AccountId fields
  2. If the StageName field changes to “Closed Won” and the Description field has changed ensure it is not null.
  3. If the AccountId field changes ensure that the NumberOfEmployees field on the Account is not null
  4. Ensure code is bulkified and queries are only performed when needed.

The following code implements the above requirements, but does contain some boilerplate code.

// Classic style validation
switch on Trigger.operationType {
    when AFTER_UPDATE {
        // Prescan to bulkify querying for related Accounts
        Set<Id> accountIds = new Set<Id>();
        for (Opportunity opp : newMap.values()) {
            Opportunity oldOpp = oldMap.get(opp.Id);
            if(opp.AccountId != oldOpp.AccountId) { // AccountId changed?
                accountIds.add(opp.AccountId);
            }
        }                
        // Query related Account records?
        Map<Id, Account> associatedAccountsById = accountIds.size()==0 ? 
            new Map<Id, Account>() : 
            new Map<Id, Account>([select Id, NumberOfEmployees from Account where Id = :accountIds]);
        // Validate
        for (Opportunity opp : newMap.values()) {
            Opportunity oldOpp = oldMap.get(opp.Id);
            if(opp.StageName != oldOpp.StageName) { // Stage changed?
                if(opp.StageName == 'Closed Won') { // Stage closed won?
                    if(opp.Description != oldOpp.Description) { // Description changed?               
                        if(opp.Description == null) { // Description null?
                            opp.Description.addError('Description must be specified when Opportunity is closed');
                        }
                    }
                }                                
            }
            if(opp.AccountId != oldOpp.AccountId) { // AccountId changed?
                Account acct = associatedAccountsById.get(opp.AccountId);
                if(acct!=null) { // Account queried?
                    if(acct.NumberOfEmployees==null) { // NumberOfEmployees null?
                        opp.AccountId.addError('Account does not have any employees');
                    }    
                }
            }
        }
    }
}               

Below is the same validation implemented using a framework built to reduce boilerplate code.

SObjectFieldValidator.build()            
  .when(TriggerOperation.AFTER_UPDATE)
    .field(Opportunity.Description).hasChanged().isNull().addError('Description must be specified when Opportunity is closed')
      .when(Opportunity.StageName).hasChanged().equals('Closed Won')
    .field(Opportunity.AccountId).whenChanged().addError('Account does not have any employees')
      .when(Account.NumberOfEmployees).isNull()
  .validate(operation, oldMap, newMap);

The SObjectFieldValidator framework uses a Fluent style design for its API and as such allows the validator to be dynamically constructed with ease. Additionally, configured instances of it can be passed around and extended by other code paths and modules, with the validation itself performed in one pass. The framework also attempts some smarts to bulkify queries (in this case for related Accounts) and only performs them if the target field or related fields have been modified – thus ensuring optimal processing time. The test code for either approach can of course be written in the usual way, as shown below.

// Given
Account relatedAccount = new Account(Name = 'Test', NumberOfEmployees = null);        
insert relatedAccount;
Opportunity opp = new Opportunity(Name = 'Test', CloseDate = Date.today(), StageName = 'Prospecting', Description = 'X', AccountId = null);
insert opp;
opp.StageName = 'Closed Won';
opp.Description = null;
opp.AccountId = relatedAccount.Id;
// When
Database.SaveResult saveResult = Database.update(opp, false);
// Then
List<Database.Error> errors = saveResult.getErrors();
System.assertEquals(2, errors.size());
System.assertEquals('Description', errors[0].getFields()[0]);
System.assertEquals('Description must be specified when Opportunity is closed', errors[0].getMessage());
System.assertEquals('AccountId', errors[1].getFields()[0]);
System.assertEquals('Account does not have any employees', errors[1].getMessage());

While you do still need code coverage for your Apex Trigger logic, those practicing unit testing may prefer to leverage the ability to avoid DML in order to assert more varied validation scenarios. The following code is entirely free of any SOQL and DML statements and thus better for test performance. It leverages the ability to inject related records rather than allowing the framework to query them on demand. The SObjectFieldValidator instance is constructed and configured in a separate class for reuse.

// Given
Account relatedAccount = 
    new Account(Id = TEST_ACCOUNT_ID, Name = 'Test', NumberOfEmployees = null);
Map<Id, SObject> oldMap = 
    new Map<Id, SObject> { TEST_OPPORTUNIT_ID => 
        new Opportunity(Id = TEST_OPPORTUNIT_ID, StageName = 'Prospecting', Description = 'X', AccountId = null)};
Map<Id, SObject> newMap = 
    new Map<Id, SObject> { TEST_OPPORTUNIT_ID => 
        new Opportunity(Id = TEST_OPPORTUNIT_ID, StageName = 'Closed Won', Description = null, AccountId = TEST_ACCOUNT_ID)};
Map<SObjectField, Map<Id, SObject>> relatedRecords = 
    new Map<SObjectField, Map<Id, SObject>> {
        Opportunity.AccountId => 
            new Map<Id, SObject>(new List<Account> { relatedAccount })};
// When
OpportunityTriggerHandler.getValidator()
  .validate(TriggerOperation.AFTER_UPDATE, oldMap, newMap, relatedRecords); 
// Then
List<Database.Error> errors = newMap.get(TEST_OPPORTUNIT_ID).getErrors();
System.assertEquals(2, errors.size());
System.assertEquals('AccountId', errors[0].getFields()[0]);
System.assertEquals('Account does not have any employees', errors[0].getMessage());
System.assertEquals('Description', errors[1].getFields()[0]);
System.assertEquals('Description must be specified when Opportunity is closed', errors[1].getMessage());

Finally, it is worth noting that such a framework can of course only get you so far, and there will be scenarios where you need richer criteria. This is something that could be explored further through the SObjectFieldValidator.FieldValidationCondition base type, which allows coded field validations to be added via the condition method. The framework is pretty basic as I really do not have a great deal of time these days to build it out more fully, so I totally invite anyone interested to take it further.

Enjoy!



Apex Process Orchestration and Monitoring with Platform Events

When it comes to implementing asynchronous workloads in Apex, developers have a number of options, such as Batch Apex and Queueable, each of which can be driven by user or system actions. This blog focuses on some of the more advanced aspects of implementing async workloads using Platform Events and Apex.

In comparison to other approaches, implementing asynchronous workloads using Platform Events offers two unique features. The first helps you dynamically calibrate and manage resources based on data volumes to stay within limits, while the second provides automatic retry capabilities when errors occur. Lastly, I want to highlight an approach I used to add some custom realtime telemetry to the workload, also using Platform Events.

Side Note: Before getting into the details, the goal of this blog is not to say one async option is better than another, rather to highlight the above features further so you can better consider your options. I also include a short comparison at the end of this blog with Batch Apex.

Business Scenario

Let's imagine the following scenario to help illustrate the use of the features described below:

  • Business Process
    Imagine that your business is processing Invoice generation on the platform and that the Orders that drive this arrive and are updated constantly.
  • Continuous Processing
    In order to avoid backlogs or spikes of invoices being processed, you want to maintain a continuous flow of the overall process. For this you create a Platform Event called Generate Invoice. This event can easily be sent by admins / declarative builders who have perhaps set up some rules on the Orders object using Process Builder.
  • Resource Management
    Orders arrive in all shapes and sizes, meaning the processing required to generate Invoices can also vary when you consider variables such as the number of order lines, product regional discounts, currencies, tax rules and so on. Processing one Order at a time per execution context is an obvious way to maximize use of available resources and is certainly an option, but if resources allow, processing multiple invoices in one execution context is more efficient.

Below is what the Generate Invoice Platform Event looks like: it simply has a reference to the Order Id (though it could equally reference an External Id field on the Order object).

For the purposes of this blog we are not focusing on how the events are sent/published. You can publish events using programmatic APIs on or off platform, or using one of the platform's declarative tools – there are in fact many ways to send events. For this blog we will just use a basic Apex snippet to generate the events, as shown below.

List<GenerateInvoice__e> events = new List<GenerateInvoice__e>();
for(Order order : 
       [select Id from Order 
          where Invoiced__c != true 
          order by OrderNumber asc]) {
   events.add(new GenerateInvoice__e(OrderId__c = order.Id));
}
EventBus.publish(events);        

Here is a basic Apex handler for the above Platform Event that delegates the processing to another Apex class:

trigger GenerateInvoiceSubscriber on GenerateInvoice__e (after insert) {
    Set<Id> orderIds = new Set<Id>();
    for (GenerateInvoice__e event : Trigger.New) {
        orderIds.add(event.OrderId__c);
    }
    OrderService.generateInvoices(orderIds);
}

Processing Chunks of Events and Handling Retries

The following diagram highlights how a more advanced version of the above Apex handler can be used to optimally work within the limits to process chunks of Orders based on their size/complexity and also retry those that result in some errors along the way.

In order to orchestrate things this way you need to use some Apex APIs in your handler logic to let the platform know a few things. At the end of this blog I also share how I added telemetry to better visualize this, along with a video. So don't worry at this juncture if it's not 100% clear how what you are seeing below is possible, just keep reading and watching!

Controlling how many Events are passed to your handler

Imagine the above code snippet published 1000 events. The platform docs state that it can pass up to a maximum of 2000 events to an Apex event handler at once, meaning the above will be invoked just once. If you have been on the platform a while you will know that 200 (not 2000) is the number commonly used to express the minimum number of records you should use when testing Apex Triggers, and general bulkification best practice. So why 2000 in the case of platform event handlers? Well, the main aim of the platform is to drain the Platform Event message queue quickly, and so it attempts to give the handler as much as possible, just in case it can process it.

As we have set out in our scenario above, Orders can be quite variable in nature, and thus while a batch of 1000 orders with only a few order lines each might be processable within the execution limits, include a few orders in that batch with hundreds or a few thousand line items and it's more likely you will hit CPU or heap limits. Fortunately, unlike Batch Apex, you get to control the size of each individual chunk. This is done by effectively giving some of the 1000 events passed to your handler back to the platform, to be passed back in a separate handler invocation where the limits are reset.

Below is some basic code that illustrates how you might go about pre-scanning the Orders to determine complexity (by number of lines) and thus dynamically calibrate how many of the events your code can really process within the limits. The orderIds set that is passed to the service class is reset with only the orders that can be processed. The key part here is the use of the setResumeCheckpoint method, which tells the platform where to resume from after this handler has completed its processing.

trigger GenerateInvoiceSubscriber on GenerateInvoice__e (after insert) { 

    // Determine the overall number of order lines to process
    //   vs the maximum within limits (could be config)
    Integer maxLines = 40000;
    Set<Id> orderIds = new Set<Id>();
    for (GenerateInvoice__e event : Trigger.New) {
        orderIds.add(event.OrderId__c);
    }
    Map<Id, Integer> lineCountByOrderId = 
        new OrdersSelector().selectLineCountById(orderIds);

    // Bulkify events passed to the OrderService
    orderIds = new Set<Id>();
    Integer lineCount = 0;
    for (GenerateInvoice__e event : Trigger.New) {
        orderIds.add(event.OrderId__c);
        EventBus.TriggerContext.currentContext().setResumeCheckpoint(event.ReplayId);
        lineCount = lineCount + lineCountByOrderId.get(event.OrderId__c);
        if(lineCount>maxLines) { 
            break;
        }
    }

    OrderService.generateInvoices(orderIds);
}

You can read more about this approach in the formal documentation here.

Implementing Retry Logic

There are a number of reasons errors can occur when your handler code is running. For errors that represent transient situations, such as timeouts and row locks for example, you would normally have to ask the user to retry (via email or notification) or utilize a dynamically scheduled job to retry. With Platform Event handlers in Apex, when the system EventBus.RetryableException is thrown, the platform will automatically retry the batch of events after a period of time, up to 9 times (the batch sizes may vary between attempts). It is generally recommended that you do not let your code retry more than 6 times, since when the maximum is reached the platform deactivates the handler/trigger.

The following code is a basic illustration of how to use this facility and track the number of retries before reaching the max. In this example, if the soft maximum is reached the events are effectively just lost. If needed, you could instead write them to a staging custom object for resubmission, or simply have some code such as the snippet above scan for unprocessed Orders and resubmit events.

    // Invoke OrderService, support retries
    try {
        OrderService.generateInvoices(orderIds);
    } catch (Exception e) {
        // Only retry so many times, before giving up (thus avoid disabling the trigger)
        if (EventBus.TriggerContext.currentContext().retries < 6) {
            throw new EventBus.RetryableException(e.getMessage());
        }
        // In this case its ok to let the events drain away... 
        //   since new events for unprocessed Orders can always be re-generated
    }    

Using Platform Events to monitor activity

I used Platform Events to publish telemetry about the execution of the above handlers, by creating another Platform Event called Subscriber Telemetry and using a Lightning Web Component to monitor the events in realtime. Because Platform Events can be declared as being outside the standard Apex transaction (using the “Publish Immediately” setting), they are sent immediately, even if an error occurs.

To publish to this event I simply added the following snippet of code to the top of my handler.

// Emit telemetry
EventBus.publish(
    new SubscriberTelemetry__e(
        Topic__c = 'GenerateInvoice__e', 
        ApexTrigger__c = 'GenerateInvoiceSubscriber',
        Position__c = 
           [select Position from EventBusSubscriber
              where Topic = 'GenerateInvoice__e'][0].Position,
        BatchSize__c = Trigger.new.size(),
        Retries__c = EventBus.TriggerContext.currentContext().retries,
        LastError__c = EventBus.TriggerContext.currentContext().lastError));

The following video shows me clicking a button to publish a batch of 1000 events, then monitoring the effects on my chunking and retry logic. The video actually includes me fixing some data errors in order to highlight the retry capabilities. The errors shown are contrived by some deliberately bad code to illustrate the retry logic, hence the fix to the Order records looks a bit odd, so please ignore that. Finally, note that the platform chose to send my handler 83 events first and then larger chunks thereafter, but in other tests I got 1000 events in the first chunk.

Batch Apex vs Platform Events

Batch Apex also provides a means to orchestrate the sequential processing of records in chunks, so I thought I would end here with a summary of some of the other differences. As you can see, one of the key ones to consider is the user identity the code runs as. This is not impossible to work around in the Platform Event handler case, but it requires some coding to explicitly set the OwnerId field on records if that information is important to you. Overall though, I do feel that Platform Events offer some useful options for switching to a more continuous mode of operation vs batch; so long as you're aware of the differences this might be a good fit for you.
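
For example, since the event handler runs as the Automated Process user by default, records it creates will be owned by that user unless you assign OwnerId yourself. A minimal sketch, assuming a hypothetical Invoice__c object with an Order__c lookup, using the event's standard CreatedById field (the user that published the event):

// Sketch only – Invoice__c and Order__c are hypothetical
List<Invoice__c> invoices = new List<Invoice__c>();
for (GenerateInvoice__e event : Trigger.New) {
    invoices.add(new Invoice__c(
        Order__c = event.OrderId__c,
        OwnerId  = event.CreatedById)); // otherwise defaults to the Automated Process user
}
insert invoices;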

Side Note: For Apex Queueable handlers you will soon have the option to implement so-called Transaction Finalizers that allow you to implement retry or logging logic.
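
As a taster, here is a minimal sketch of how that pairing looks, based on the Finalizer API as it later shipped, so check the current documentation for specifics:

// Minimal sketch of a Queueable with an attached Finalizer
public class GenerateInvoicesJob implements Queueable {
    public void execute(QueueableContext ctx) {
        System.attachFinalizer(new GenerateInvoicesFinalizer());
        // ... invoice generation work ...
    }
}

public class GenerateInvoicesFinalizer implements Finalizer {
    public void execute(FinalizerContext ctx) {
        if (ctx.getResult() == ParentJobResult.UNHANDLED_EXCEPTION) {
            // Log ctx.getException() and/or re-enqueue the work here
        }
    }
}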



The Third Edition

I'm proud to announce that the third edition of my book has now been released. Back in March this year I took the plunge to start updating many key areas and add two brand new chapters. In the 2 years and 8 months since the last edition there have been several platform releases and an increasing number of new features and innovations, which made this the biggest update ever! This edition also embraces the platform's rebranding to Lightning, hence the book is now entitled Salesforce Lightning Platform Enterprise Architecture.

You can purchase this book directly from Packt or of course from Amazon, among other sellers. As is the case every year, at Salesforce events such as Dreamforce and TrailheaDX this book and many other awesome publications will be on sale. Here are some of the key update highlights:

  • Automation and Tooling Updates
    Throughout the book the SFDX CLI, Visual Studio Code and 2nd Generation Packaging are leveraged. While the whole book is certainly larger, certain chapters actually reduced in size as steps previously describing clicks were replaced with CLI commands! At one point in time I was quite a master of Ant scripts and macros; they have also given way to built-in SFDX commands.
  • User Interface Updates
    Lightning Web Components is a relatively new kid on the block, but benefits greatly from its standards compliance, meaning there is plenty of fun to go around exploring industry tools like Jest in the Unit Testing chapter. All of the book's components have been re-written to the Web Components standard.
  • Big Data and Async Programming
    Big data was once a future concern for new products; these days it is very much a concern from the very start. The book covers Big Objects and Platform Events more extensively, with worked examples, including ingestion and calculations driven by Platform Events and Async Apex Triggers. Event Driven Architecture is something every Lightning developer should be embracing as the platform continues to evolve around more and more standard features that leverage it.
  • Integration and Extensibility
    I particularly enjoyed exploring the use of Platform Events as another means by which you can expose APIs from your packages, to support more scalable invocation of your logic and asynchronous plugins.
  • External Integrations and AI
    External integrations with other cloud services are a key part of application development and also the implementation of your solution, thus one of the two brand new chapters focuses on Connected Apps, Named Credentials, External Services and External Objects, with worked examples using existing services or sample Heroku-based services. Einstein has an ever growing surface area across Salesforce products and the platform. While this topic alone is worth an entire book, I took the time in the second new chapter to enumerate Einstein from the perspective of developer and customer configurations. The Formula1 motor racing theme continues with the ingestion of historic race data that you can run AI over.
  • Other Updates
    Among other updates is a fairly extensive update to the CI/CD chapter, which still covers Jenkins but leverages the new Jenkins Pipeline feature to integrate the SFDX CLI. The Unit Testing chapter has also been extended with further thoughts on unit vs integration testing and a focus on Lightning Web Component testing.

The above is just the highlights for this third edition; you can see the full table of contents here. A massive thanks to everyone involved in providing the inspiration and support for making this third edition happen! Enjoy!



Getting your users attention with Custom Notifications

Getting your users' attention is not always easy; choosing how, when and where to notify them is critical. Ever since Lightning Experience and Salesforce Mobile came out, the notification bell has been a one stop shop for Chatter and Approval notifications, regardless of whether you are on your desktop or your mobile device.

In beta release at the time of writing is a new platform feature known as Notification Manager that allows you to send your own custom notifications to your users, for anything your heart desires, from the very same locations – even on a user's mobile device! This blog dives into this feature and how you can integrate it into your creations, regardless of whether you are an admin click coder, Apex developer or REST API junkie.

Getting Started

The first thing you need to do is define a new Notification Type under the Setup menu. This is a simple process that involves giving it a name and deciding what channels you want the notification to go out on, currently user desktop and mobile devices.

[Screenshot: defining a Notification Type in Setup]

Once this has been done you can use the new Send Custom Notification action in Process Builder or Flow. This allows you to define the title and body of your notification, along with the target recipients (users, groups, queues and more) and the target record that determines the record the user sees when they click/tap the notification. The following screenshot shows an example of such an action in Process Builder:

[Screenshot: Send Custom Notification action in Process Builder]

[Screenshot: the resulting notification for an Opportunity]

Basically that is all there is to it! In a few clicks you will have empowered yourself with the ability to reach not only your users' desktops but the actual mobile notification experience on each of their mobile devices! You didn't have to learn how to write a mobile app, figure out how to do mobile notifications, or register things with Google or Apple. I am honestly blown away at how easy and powerful this is!

So it is pretty easy to send notifications this way from Process Builder processes driven by record updates from the user, and also to reference field values to customize the notification text. However, in the ever expanding world of Platform Events, how do we send custom notifications based on Platform Events?

Sending Custom Notifications for Batch Apex Job Failures

One of my oldest and most popular blog posts discussed design best practices around Batch Apex jobs. One of the considerations it calls out is how important it is to route errors that occur in the background back to the user. Fast forward a bit to this blog, where I covered the new BatchApexError Platform Event as a means to capture and route batch errors (even uncatchable exceptions) in near realtime. It also describes a strategy to enable users to retry failed jobs. What it didn't really solve is letting them know something had gone wrong without them checking a custom tab. Let's change that!

Process Builder is now able to subscribe to the standard BatchApexErrorEvent and thus enables you as an admin to apply filter and routing logic to failed batch jobs. When combined with custom notifications, those errors can now be routed to users' devices and/or desktops in realtime. While Process Builder can subscribe to events, it does have some restrictions on what it can do with the event data itself. Thus we are going to call an autolaunched Flow from Process Builder to actually handle the event and send the custom notification from within Flow. If you are reading this wondering if your Apex code can get in on the action, the answer is yes (ish) – more on this later though. The declarative solution utilizes one Process Builder process and two Flows. The separation of concerns between them is shown in the diagram below:

[Diagram: Process Builder process calling the BatchApexErrorPlatformEventHandler Flow, which calls the SendCustomNotification Sub Flow]

Let’s work from the bottom to the top to understand why I decided to split it up this way. Firstly, SendCustomNotification is a Sub Flow (callable by other Flows) and is a pretty simple wrapper around the new Send Custom Notification action shown above. You can take a closer look at this later through the sample code repository here.

[Screenshot: SendCustomNotification Sub Flow]

Next, the BatchApexErrorPlatformEventHandler Flow defines a set of input variables that are populated from the Process Builder process. These variables match the fields and types per the definition of the Batch Apex Error Event here. The only other thing it does is add the Id of the user that generated the event (aka the user who submitted the failed job) to the list of recipients passed to the SendCustomNotification Sub Flow above. This could also be a Group Id if you wanted to send the notification further.

[Screenshot: BatchApexErrorPlatformEventHandler Flow]

Lastly, in the screenshot below you see the Process Builder process that subscribes to the Batch Apex Error Event and maps the event field values to the input variables exposed by the BatchApexErrorPlatformEventHandler Flow via the EventReference. The example here is very simple, but you can now imagine how you can add other filter criteria to this process that allow you to inspect which Batch Apex job failed and route and/or adjust messaging in the notifications accordingly – all done declaratively of course!

[Screenshot: Process Builder subscribing to the Batch Apex Error Event]

NOTE: It is not immediately apparent in all cases that you can access the event fields from Process Builder, since the documentation states they are not supported within formulas. I want to give a shout out to Alex Edelstein, PM for Flow, for clarifying that it is possible! Check out his amazing blog around all things Flow here. Finally, note that Process Builder requires an Object to map the incoming event to. In this case I mapped to a User record using the CreatedById field on the event.

Sending Custom Notifications from Code

The Send Custom Notification action is also exposed via the Salesforce Actions REST API defined here (hint hint for Doug Ayers' Mass Action tool to support it). You can of course attempt to call this REST API via Apex as well. While there is currently no native Apex Action API, it turns out that calling the above SendCustomNotification Flow from Apex works pretty well in the meantime. I have written a small wrapper around this technique to make it a little more elegant to perform from Apex; it also serves to abstract away this hopefully temporary workaround for Apex developers.

new CustomNotification()
    .type('MyNotificationType')
    .title('Fun Custom Notification')
    .body('Custom Notifications are Awesome!')
    .sendToCurrentUser();

The above Apex code results in the notification below appearing on your device!

[Screenshot: the custom notification sent from Apex]

This CustomNotification helper class is included in the sample code for this blog and leverages another class I wrote that wraps the native Apex Flow API. I used this wrapper because it allowed me to mock the actual Flow invocation, since there is no way, as far as I can see, to assert that the notification was actually sent.
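
If you are curious what the wrapper boils down to, the native Flow API usage looks roughly like the snippet below. The input variable names here are assumptions for illustration; check the sample repository for the actual names.

// Sketch only – the Flow's input variable names are assumptions
Map<String, Object> inputs = new Map<String, Object> {
    'notificationTypeApiName' => 'MyNotificationType',
    'title' => 'Fun Custom Notification',
    'body' => 'Custom Notifications are Awesome!',
    'recipientIds' => new List<String> { UserInfo.getUserId() }
};
Flow.Interview.SendCustomNotification sendNotification =
    new Flow.Interview.SendCustomNotification(inputs);
sendNotification.start();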

NOTE: When sending custom notifications via declarative tools and/or via code, I did confirm in my testing that they are included in the current transaction. Also, I recommend you always avoid calling Flow in loops in your Apex code; instead make your Flows take list variables (aka try to bulkify Flows called from Apex). Though not shown in the Apex above, the wrapped Flow takes a list of recipients.

Summary

You can find all the code for this blog in the sample code repository here. So there you have it: custom mobile and desktop notifications sent from Process Builder, Flow, Apex and the REST API. Keep in mind of course that at the time of writing this is a Beta feature, and thus read the clause in the documentation carefully. Now go forth and start thinking of all the areas you can enable with this feature!

P.S. Check out another new cool feature called Lightning In-App Guidance.

 

 



Salesforce DX Integration Strategies

This blog will cover three ways by which you can interact programmatically with Salesforce DX. DX provides a number of time-saving utilities and commands; sometimes though you want to either combine those together or write your own that fit better with your way of working. Fortunately, DX is very open and, in fact, goes beyond just interacting with the CLI.

If you are familiar with DX you will likely already be writing, or have used, shell scripts around the CLI. Those scripts are code, and the CLI commands and their outputs (especially in JSON mode) are the API in this case. The goal of this blog is to highlight this approach further, along with other programming options via the REST API or Node.js.

Broadly speaking DX is composed of layers, from client side services to those at the backend. Each of these layers is actually supported and available to you as a developer to consume as well. The diagram shown here shows these layers and the following sections highlight some examples and further use cases for each.

DX CLI

Programming via shell scripts is very common and there is a huge wealth of content and help on the internet regardless of your platform. You can perform conditional operations, use variables and even perform loops. The one downside is that they are platform specific. So if supporting users on multiple platforms is important to you, and you have skills in other more platform-neutral languages, you may want to consider automating the CLI that way.

Regardless of how you invoke the CLI, parsing human-readable text from CLI commands is not a great experience and leads to fragility (as it can, and should be allowed to, change between releases). Thus all Salesforce DX commands support the --json parameter. First, let's consider the default output of the following command.

sfdx force:org:display
=== Org Description
KEY              VALUE
───────────────  ──────────────────────────────────────────────────────────────────────
Access Token     00DR00.....O1012
Alias            demo
Client Id        SalesforceDevelopmentExperience
Created By       admin@sf-fx.org
Created Date     2019-02-09T23:38:10.000+0000
Dev Hub Id       admin@sf-fx.org
Edition          Developer
Expiration Date  2019-02-16
Id               00DR000000093TsMAI
Instance Url     https://customization-java-9422-dev-ed....salesforce.com/
Org Name         afawcett Company
Status           Active
Username         test....a@example.com

Now let’s contrast the output of this command with the –json parameter.

sfdx force:org:display --json
{"status":0,"result":{"username":"test...a@example.com","devHubId":"admin@sf-fx.org","id":"00DR000000093TsMAI","createdBy":"admin@sf-fx.org","createdDate":"2019-02-09T23:38:10.000+0000","expirationDate":"2019-02-16","status":"Active","edition":"Developer","orgName":"afawcett Company","accessToken":"00DR000...yijdqPlO1012","instanceUrl":"https://customization-java-9422-dev-ed.mobile02.blitz.salesforce.com/","clientId":"SalesforceDevelopmentExperience","alias":"demo"}}}

If you are using a programming language with support for interpreting JSON you can now start to parse the response to obtain the information you need. However, if you are using shell scripts you need a little extra assistance. Thankfully there is an awesome open source utility called jq to the rescue. Simply piping the JSON output through the jq command allows you to get a better look at things…

sfdx force:org:display --json | jq
{
  "status": 0,
  "result": {
    "username": "test-hm83yjxhunoa@example.com",
    "devHubId": "admin@sf-fx.org",
    "id": "00DR000000093TsMAI",
    "createdBy": "admin@sf-fx.org",
    "createdDate": "2019-02-09T23:38:10.000+0000",
    "expirationDate": "2019-02-16",
    "status": "Active",
    "edition": "Developer",
    "orgName": "afawcett Company",
    "accessToken": "00DR000....O1012",
    "instanceUrl": "https://customization-java-9422-dev-ed.....salesforce.com/",
    "clientId": "SalesforceDevelopmentExperience",
    "alias": "demo"
  }
}

You can then get a bit more specific in terms of the information you want.

sfdx force:org:display --json | jq .result.id -r
00DR000000093TsMAI

You can combine this into a shell script to set variables as follows.

ORG_INFO=$(sfdx force:org:display --json)
ORG_ID=$(echo $ORG_INFO | jq .result.id -r)
ORG_DOMAIN=$(echo $ORG_INFO | jq .result.instanceUrl -r)
ORG_SESSION=$(echo $ORG_INFO | jq .result.accessToken -r)

All the DX commands support JSON output, including the query commands…

sfdx force:data:soql:query -q "select Name from Account" --json | jq .result.records[0].Name -r
GenePoint

The Sample Script for Installing Packages with Dependencies has a great example of using the JSON output from the query commands to auto-discover package dependencies. This approach can be adapted to any object; it also shows another useful technique of combining Python within a shell script.

DX Core Library and DX Plugins

This is a Node.js library that contains core DX functionality such as authentication, org management, project management and the ability to invoke REST APIs against scratch orgs via JSForce. This library is most commonly used when you are authoring a DX plugin; however, it can be used standalone if you have an existing Node.js based tool or CLI library you want to embed DX in.

The samples folder here contains some great examples. This example shows how to use the library to access the alias information and provide a means for the user to edit the alias names.

  // Enter a new alias
  const { newAlias } = await inquirer.prompt([
    { name: 'newAlias', message: 'Enter a new alias (empty to remove):' }
  ]);

  if (alias !== 'N/A') {
    // Remove the old one
    aliases.unset(alias);
    console.log(`Unset alias ${chalk.red(alias)}`);
  }

  if (newAlias) {
    aliases.set(newAlias, username);
    console.log(
      `Set alias ${chalk.green(newAlias)} to username ${chalk.green(username)}`
    );
  }

Tooling API Objects

Finally, there is a host of Tooling API objects that support the above features and some added extra features. These are fully documented and accessible via the Salesforce Tooling API for use in your own plugins or applications capable of making REST API calls. Keep in mind you can do more than just query these objects; some also represent processes, meaning when you insert into them they do stuff! Here is a brief summary of the most interesting objects.

  • PackageUploadRequest, MetadataPackage, MetadataPackageVersion represent objects you can use as a developer to automate the uploading of first generation packages.
  • Package2, Package2Version, Package2VersionCreateRequest and Package2VersionCreateRequestError represent objects you can use as a developer to automate the uploading of second generation packages.
  • PackageInstallRequest, SubscriberPackage, SubscriberPackageVersion and Package2Member (second generation only) represent objects that allow you to automate the installation of a package and also allow you to discover the contents of packages installed within an org.
  • SandboxProcess and SandboxInfo represent objects that allow you to automate the creation and refresh of Sandboxes, as well as query for existing ones. For more information see the summary at the bottom of this help topic.
  • SourceMember represents changes you make when using the Setup menu within a Scratch org. It is used by the push and pull commands to track changes. The documentation claims you can create and update records in this object, however, I would recommend that you only use it for informational purposes. For example, you could write your own poller tool to drive code generation based on custom object changes.
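
As a quick illustration of working with these objects directly, the sketch below queries first generation packages via the Tooling API query endpoint from Apex. It assumes API version v45.0 and that your org's My Domain URL is permitted as a Remote Site Setting (or swap in a Named Credential).

// Sketch: list first generation packages owned by the org via the Tooling API
HttpRequest req = new HttpRequest();
req.setEndpoint(URL.getOrgDomainUrl().toExternalForm() +
    '/services/data/v45.0/tooling/query/?q=' +
    EncodingUtil.urlEncode('SELECT Id, Name, NamespacePrefix FROM MetadataPackage', 'UTF-8'));
req.setMethod('GET');
req.setHeader('Authorization', 'Bearer ' + UserInfo.getSessionId());
HttpResponse res = new Http().send(req);
System.debug(res.getBody());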

IMPORTANT NOTE: Be sure to consider what CLI commands exist to accomplish your need. As you've read above, it's easy to automate those commands and they manage a lot of the complexity in interacting with these objects directly. This is especially true for the packaging objects.

Summary

The above options represent a rich set of abilities to integrate and extend DX. Keep in mind that the deeper you go the more flexibility you get, but you are also taking on more complexity, so choose wisely and/or use a mix of approaches. Finally, worthy of mention is the future of the SFDX CLI and Oclif. Salesforce is busy updating the internals of the DX CLI to use this library, and once complete this will open up new cool possibilities such as CLI hooks, which will allow you to extend the existing commands.