Andy in the Cloud

From BBC Basic to Force.com and beyond…



Reducing Field Validation Boilerplate Code

Boilerplate code is code that we repeat often with little or no variation. When it comes to writing field validations in Apex, especially within Apex Triggers, there are a number of examples of this, particularly when checking whether a field value has changed and/or querying for related records, which also requires observing good bulkification best practices. This blog presents a small proof of concept framework aimed at reducing such logic, made possible in Winter ’21 by a small but critical enhancement to the Apex runtime that allows developers to dynamically add field errors. Additionally, for those practicing unit testing, the enhancement also allows tests to assert such errors without DML!

This is a very basic demonstration of the new addError and getErrors methods.

Opportunity opp = new Opportunity();
opp.addError('Description', 'Error Message!'); 
List<Database.Error> errors = opp.getErrors();
System.assertEquals(1, errors.size());
System.assertEquals('Error Message!', errors[0].getMessage());
System.assertEquals('Description', errors[0].getFields()[0]);

However, in order to really appreciate the value these two features bring to frameworks, let us first review a use case and the traditional approach to coding such validation logic. Our requirements are:

  1. When updating an Opportunity, validate the Description and AccountId fields.
  2. If the StageName field changes to “Closed Won” and the Description field has changed, ensure it is not null.
  3. If the AccountId field changes, ensure that the NumberOfEmployees field on the Account is not null.
  4. Ensure code is bulkified and queries are only performed when needed.

The following code implements the above requirements, but does contain some boilerplate code.

// Classic style validation
switch on Trigger.operationType {
    when AFTER_UPDATE {
        // Prescan to bulkify querying for related Accounts
        Set<Id> accountIds = new Set<Id>();
        for (Opportunity opp : newMap.values()) {
            Opportunity oldOpp = oldMap.get(opp.Id);
            if(opp.AccountId != oldOpp.AccountId) { // AccountId changed?
                accountIds.add(opp.AccountId);
            }
        }                
        // Query related Account records?
        Map<Id, Account> associatedAccountsById = accountIds.size()==0 ? 
            new Map<Id, Account>() : 
            new Map<Id, Account>([select Id, NumberOfEmployees from Account where Id in :accountIds]);
        // Validate
        for (Opportunity opp : newMap.values()) {
            Opportunity oldOpp = oldMap.get(opp.Id);
            if(opp.StageName != oldOpp.StageName) { // Stage changed?
                if(opp.StageName == 'Closed Won') { // Stage closed won?
                    if(opp.Description != oldOpp.Description) { // Description changed?               
                        if(opp.Description == null) { // Description null?
                            opp.Description.addError('Description must be specified when Opportunity is closed');
                        }
                    }
                }                                
            }
            if(opp.AccountId != oldOpp.AccountId) { // AccountId changed?
                Account acct = associatedAccountsById.get(opp.AccountId);
                if(acct!=null) { // Account queried?
                    if(acct.NumberOfEmployees==null) { // NumberOfEmployees null?
                        opp.AccountId.addError('Account does not have any employees');
                    }    
                }
            }
        }
    }
}               

Below is the same validation implemented using a framework built to reduce boilerplate code.

SObjectFieldValidator.build()            
  .when(TriggerOperation.AFTER_UPDATE)
    .field(Opportunity.Description).hasChanged().isNull().addError('Description must be specified when Opportunity is closed')
      .when(Opportunity.StageName).hasChanged().equals('Closed Won')
    .field(Opportunity.AccountId).whenChanged().addError('Account does not have any employees')
      .when(Account.NumberOfEmployees).isNull()
  .validate(operation, oldMap, newMap);

The SObjectFieldValidator framework uses a Fluent style API design and as such allows the validator to be dynamically constructed with ease. Additionally, configured instances of it can be passed around and extended by other code paths and modules, with the validation itself performed in one pass. The framework also attempts some smarts to bulkify queries (in this case for related Accounts) and only performs them if the target field or related fields have been modified, thus ensuring optimal processing time. The test code for either approach can of course be written in the usual way, as shown below.

// Given
Account relatedAccount = new Account(Name = 'Test', NumberOfEmployees = null);        
insert relatedAccount;
Opportunity opp = new Opportunity(Name = 'Test', CloseDate = Date.today(), StageName = 'Prospecting', Description = 'X', AccountId = null);
insert opp;
opp.StageName = 'Closed Won';
opp.Description = null;
opp.AccountId = relatedAccount.Id;
// When
Database.SaveResult saveResult = Database.update(opp, false);
// Then
List<Database.Error> errors = saveResult.getErrors();
System.assertEquals(2, errors.size());
System.assertEquals('Description', errors[0].getFields()[0]);
System.assertEquals('Description must be specified when Opportunity is closed', errors[0].getMessage());
System.assertEquals('AccountId', errors[1].getFields()[0]);
System.assertEquals('Account does not have any employees', errors[1].getMessage());

While you do still need code coverage for your Apex Trigger logic, those practicing unit testing may prefer to leverage the ability to avoid DML in order to assert more varied validation scenarios. The following code is entirely free of any SOQL and DML statements and is thus better for test performance. It leverages the ability to inject related records rather than allowing the framework to query them on demand. The SObjectFieldValidator instance is constructed and configured in a separate class for reuse (a sketch of such a class follows the test code below).

// Given
Account relatedAccount = 
    new Account(Id = TEST_ACCOUNT_ID, Name = 'Test', NumberOfEmployees = null);
Map<Id, SObject> oldMap = 
    new Map<Id, SObject> { TEST_OPPORTUNITY_ID => 
        new Opportunity(Id = TEST_OPPORTUNITY_ID, StageName = 'Prospecting', Description = 'X', AccountId = null)};
Map<Id, SObject> newMap = 
    new Map<Id, SObject> { TEST_OPPORTUNITY_ID => 
        new Opportunity(Id = TEST_OPPORTUNITY_ID, StageName = 'Closed Won', Description = null, AccountId = TEST_ACCOUNT_ID)};
Map<SObjectField, Map<Id, SObject>> relatedRecords = 
    new Map<SObjectField, Map<Id, SObject>> {
        Opportunity.AccountId => 
            new Map<Id, SObject>(new List<Account> { relatedAccount })};
// When
OpportunityTriggerHandler.getValidator()
  .validate(TriggerOperation.AFTER_UPDATE, oldMap, newMap, relatedRecords); 
// Then
List<Database.Error> errors = newMap.get(TEST_OPPORTUNITY_ID).getErrors();
System.assertEquals(2, errors.size());
System.assertEquals('AccountId', errors[0].getFields()[0]);
System.assertEquals('Account does not have any employees', errors[0].getMessage());
System.assertEquals('Description', errors[1].getFields()[0]);
System.assertEquals('Description must be specified when Opportunity is closed', errors[1].getMessage());
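
For reference, here is a sketch of what such a reusable construction class might look like. It is hypothetical; it simply wraps the fluent example shown earlier in a static method and assumes build() returns the validator instance.

// Hypothetical sketch of a handler class exposing a reusable validator instance
public class OpportunityTriggerHandler {
    public static SObjectFieldValidator getValidator() {
        return SObjectFieldValidator.build()
            .when(TriggerOperation.AFTER_UPDATE)
                .field(Opportunity.Description).hasChanged().isNull().addError('Description must be specified when Opportunity is closed')
                    .when(Opportunity.StageName).hasChanged().equals('Closed Won')
                .field(Opportunity.AccountId).whenChanged().addError('Account does not have any employees')
                    .when(Account.NumberOfEmployees).isNull();
    }
}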

Finally, it is worth noting that such a framework can of course only get you so far and there will be scenarios where you need richer criteria. This is something that could be explored further through the SObjectFieldValidator.FieldValidationCondition base type, which allows coded field validations to be added via the condition method. The framework is pretty basic as I really do not have a great deal of time these days to build it out more fully, so I totally invite anyone interested to take it further.
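
As a purely illustrative sketch (the base class contract and method names shown here are assumptions, so check the SObjectFieldValidator source for the actual shape), a coded condition added via the condition method might look something like this:

// Illustrative only: a coded condition capturing richer criteria than the fluent methods,
// added to the validator via something like .condition(new HighValueOpportunityCondition())
public class HighValueOpportunityCondition extends SObjectFieldValidator.FieldValidationCondition {
    public override Boolean isValid(SObject record) {
        Opportunity opp = (Opportunity) record;
        return opp.Amount != null && opp.Amount >= 10000;
    }
}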

Enjoy!



Apex Process Orchestration and Monitoring with Platform Events

When it comes to implementing asynchronous workloads in Apex, developers have a number of options, such as Batch Apex and Queueable, each of which can be driven by user or system actions. This blog focuses on some of the more advanced aspects of implementing async workloads using Platform Events and Apex.

In comparison to other approaches, implementing asynchronous workloads using Platform Events offers two unique features. The first helps you dynamically calibrate and manage resources based on data volumes to stay within limits, while the second provides automatic retry capabilities when errors occur. Lastly, I want to highlight an approach I used to add some custom realtime telemetry to the workload, also using Platform Events.

Side Note: Before getting into the details, the goal of this blog is not to say one async option is better than another, but rather to highlight the above features further so you can better consider your options. I also include a short comparison with Batch Apex at the end of this blog.

Business Scenario

Let’s imagine the following scenario to help illustrate the use of the features described below:

  • Business Process
    Imagine that your business is processing Invoice generation on the platform and that the Orders that drive this arrive and are updated constantly.
  • Continuous Processing
    In order to avoid backlogs or spikes of invoices being processed, you want to maintain a continuous flow of the overall process. For this you create a Platform Event called Generate Invoice. This event can easily be sent by admins / declarative builders who have perhaps set up some rules on the Orders object using Process Builder.
  • Resource Management
    Orders arrive in all shapes and sizes, meaning the processing required to generate Invoices can also vary when you consider variables such as the number of order lines, product regional discounts, currencies, tax rules and so on. Processing each one at a time per execution context is an obvious way to maximize use of available resources and is certainly an option, but if resources allow, processing multiple invoices in one execution context is more efficient.

The Generate Invoice Platform Event itself is very simple; it just has a reference to the Order Id (though it could equally reference an External Id on the Order object).

For the purposes of this blog we are not focusing on how the events are sent / published. You can publish events using programmatic APIs on or off platform, or using one of the platform’s declarative tools; there are in fact many ways to send events. For this blog we will just use a basic Apex snippet to generate the events, as shown below.

List<GenerateInvoice__e> events = new List<GenerateInvoice__e>();
for(Order order : 
       [select Id from Order 
          where Invoiced__c != true 
          order by OrderNumber asc]) {
   events.add(new GenerateInvoice__e(OrderId__c = order.Id));
}
EventBus.publish(events);        

Here is a basic Apex handler for the above Platform Event that delegates the processing to another Apex class:

trigger GenerateInvoiceSubscriber on GenerateInvoice__e (after insert) {
    Set<Id> orderIds = new Set<Id>();
    for (GenerateInvoice__e event : Trigger.New) {
        orderIds.add(event.OrderId__c);
    }
    OrderService.generateInvoices(orderIds);
}

Processing Chunks of Events and Handling Retries

The following diagram highlights how a more advanced version of the above Apex handler can work optimally within the limits, processing chunks of Orders based on their size/complexity and also retrying those that result in errors along the way.

In order to orchestrate things this way you need to use some Apex APIs in your handler logic to let the platform know a few things. At the end of this blog I also share how I added telemetry to better visualize this, along with a video. So don’t worry at this juncture if it’s not 100% clear how what you are seeing below is possible, just keep reading and watching!

Controlling how many Events are passed to your handler

Imagine the above code snippet published 1000 events. The platform docs state that it can pass up to a maximum of 2000 events to an Apex event handler at once, meaning the above will be invoked just once. If you have been on the platform a while you will know that 200 (not 2000) is the number commonly used to express the minimum number of records you should test Apex Triggers with, per general bulkification best practice. So why 2000 in the case of platform event handlers? Well, the main aim of the platform is to drain the Platform Event message queue quickly, so it attempts to give the handler as much as possible, just in case it can process it.

As we have set out in our scenario above, Orders can be quite variable in nature, and thus while a batch of 1000 orders with only a few order lines each might be processable within the execution limits, include a few orders with hundreds or a few thousand line items in that batch and it is more likely you will hit CPU or heap limits. Fortunately, unlike Batch Apex, you get to control the size of each individual chunk. This is done by effectively giving some of the 1000 events passed to your handler back to the platform, to be passed back in a separate handler invocation where the limits are reset.

Below is some basic code that illustrates how you might go about pre-scanning the Orders to determine complexity (by number of lines) and thus dynamically calibrating how many of the events your code can really process within the limits. The orderIds set that is passed to the service class is reset to contain only the orders that can be processed. The key part here is the use of the setResumeCheckpoint method, which tells the platform where to resume from after this handler has completed its processing.

trigger GenerateInvoiceSubscriber on GenerateInvoice__e (after insert) { 

    // Determine the overall number of order lines to process 
    //   vs the maximum within limits (could be config)
    Integer maxLines = 40000;
    Set<Id> orderIds = new Set<Id>();
    for (GenerateInvoice__e event : Trigger.New) {
        orderIds.add(event.OrderId__c);
    }
    Map<Id, Integer> lineCountByOrderId = 
        new OrdersSelector().selectLineCountById(orderIds);

    // Bulkify events passed to the OrderService
    orderIds = new Set<Id>();
    Integer lineCount = 0;
    for (GenerateInvoice__e event : Trigger.New) {
        orderIds.add(event.OrderId__c);
        EventBus.TriggerContext.currentContext().setResumeCheckpoint(event.ReplayId);
        lineCount = lineCount + lineCountByOrderId.get(event.OrderId__c);
        if(lineCount>maxLines) { 
            break;
        }
    }

    OrderService.generateInvoices(orderIds);
}

You can read more about this approach in the formal documentation here.

Implementing Retry Logic

There are a number of reasons errors can occur when your handler code is running. For errors that represent transient situations, such as timeouts and row locks for example, you would normally have to ask the user to retry (via email or notification) or utilize a dynamically scheduled job to retry. With Platform Event handlers in Apex, when the system EventBus.RetryableException is thrown, the platform will automatically retry the batch of events after a period of time, up to 9 times (the batch sizes may vary between attempts). It is generally recommended that you do not let your code retry more than 6 times, since when the maximum is reached the platform deactivates the handler/trigger.

The following code is a basic illustration of how to use this facility and track the number of retries before reaching the max. In this example, if the soft maximum is reached the events are effectively just lost. If needed, you could instead write them to a staging custom object for resubmission, or simply have some code such as the snippet above scan for unprocessed Orders and resubmit events.

    // Invoke OrderService, support retries
    try {
        OrderService.generateInvoices(orderIds);
    } catch (Exception e) {
        // Only retry so many times, before giving up (thus avoid disabling the trigger)
        if (EventBus.TriggerContext.currentContext().retries < 6) {
            throw new EventBus.RetryableException(e.getMessage());
        }
        // In this case its ok to let the events drain away... 
        //   since new events for unprocessed Orders can always be re-generated
    }    

Using Platform Events to monitor activity

I used Platform Events to publish telemetry about the execution of the above handlers, by creating another Platform Event called Subscriber Telemetry and using a Lightning Web Component to monitor the events in realtime. Because Platform Events can be declared as publishing outside the standard Apex transaction (the “Publish Immediately” setting), they are sent even if an error occurs.

To publish to this event I simply added the following snippet of code to the top of my handler.

// Emit telemetry
EventBus.publish(
    new SubscriberTelemetry__e(
        Topic__c = 'GenerateInvoice__e', 
        ApexTrigger__c = 'GenerateInvoiceSubscriber',
        Position__c = 
           [select Position from EventBusSubscriber
              where Topic = 'GenerateInvoice__e'][0].Position,
        BatchSize__c = Trigger.new.size(),
        Retries__c = EventBus.TriggerContext.currentContext().retries,
        LastError__c = EventBus.TriggerContext.currentContext().lastError));

The following video shows me clicking a button to publish a batch of 1000 events, then monitoring the effects on my chunking and retry logic. The video actually includes me fixing some data errors in order to highlight the retry capabilities. The errors shown are contrived by some deliberately bad code to illustrate the retry logic, hence the fix to the Order records looks a bit odd, so please ignore that. Finally, note that the platform chose to send my handler 83 events first and then larger chunks thereafter, but in other tests I got 1000 events in the first chunk.

Batch Apex vs Platform Events

Batch Apex also provides a means to sequentially orchestrate the processing of records in chunks, so I thought I would end here with a summary of some of the other differences. As you can see, one of the key ones to consider is the user identity the code runs as. This is not impossible to work around in the platform event handler case, but it requires some coding to explicitly set the OwnerId field on records if that information is important to you, as sketched below. Overall though, I do feel that Platform Events offer some useful options for switching to a more continuous mode of operation vs batch; so long as you are aware of the differences, this might be a good fit for you.
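
For example, a minimal sketch of that workaround might look like the following (Invoice__c and its Order__c field are illustrative only, and orderIds is assumed to come from the event handler shown earlier).

// Illustrative only: stamp a meaningful owner, otherwise records created in a
// platform event subscriber are owned by the Automated Process user
List<Invoice__c> invoices = new List<Invoice__c>();
for (Order order : [select Id, OwnerId from Order where Id in :orderIds]) {
    invoices.add(new Invoice__c(Order__c = order.Id, OwnerId = order.OwnerId));
}
insert invoices;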

Side Note: For Apex Queueable handlers you will soon have the option to implement so called Transaction Finalizers that allow you to implement retry or logging logic.
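
As a purely illustrative sketch of where that is heading (the feature was still rolling out at the time of writing, so the exact API shape may differ), a Queueable with an attached finalizer might look something like this:

// Illustrative sketch only: a Queueable that attaches a Transaction Finalizer
public class GenerateInvoicesJob implements Queueable, Finalizer {
    public void execute(QueueableContext context) {
        System.attachFinalizer(this);
        // ... do the invoice generation work here
    }
    public void execute(FinalizerContext context) {
        if (context.getResult() == ParentJobResult.UNHANDLED_EXCEPTION) {
            // Log the failure and/or re-enqueue the work to retry
            System.debug(LoggingLevel.ERROR, context.getException().getMessage());
        }
    }
}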



The Third Edition

I’m proud to announce the third edition of my book has now been released. Back in March this year I took the plunge to start updating many key areas and adding two brand new chapters. In the 2 years and 8 months since the last edition there have been several platform releases and an increasing number of new features and innovations, which made this the biggest update ever! This edition also embraces the platform’s rebranding to Lightning, hence the book is now entitled Salesforce Lightning Platform Enterprise Architecture.

You can purchase this book direct from Packt or of course from Amazon, among other sellers. As is the case every year, at Salesforce events such as Dreamforce and TrailheaDX this book and many other awesome publications will be on sale. Here are some of the key update highlights:

  • Automation and Tooling Updates
    Throughout the book the SFDX CLI, Visual Studio Code and 2nd Generation Packaging are leveraged. While the whole book is certainly larger, certain chapters actually reduced in size as steps previously reflecting clicks were replaced with CLI commands! At one point in time I was quite a master of Ant scripts and macros; they too have given way to built-in SFDX commands.
  • User Interface Updates
    Lightning Web Components is a relatively new kid on the block, but benefits greatly from its standards compliance, meaning there is plenty of fun to go around exploring industry tools like Jest in the Unit Testing chapter. All of the book’s components have been re-written to the Web Component standard.
  • Big Data and Async Programming
    Big data was once a future concern for new products; these days it is very much a concern from the very start. The book covers Big Objects and Platform Events more extensively with worked examples, including ingest and calculations driven by Platform Events and Async Apex Triggers. Event Driven Architecture is something every Lightning developer should be embracing as the platform continues to evolve around more and more standard features that leverage it.
  • Integration and Extensibility
    I particularly enjoyed exploring the use of Platform Events as another means by which you can expose APIs from your packages, to support more scalable invocation of your logic and asynchronous plugins.
  • External Integrations and AI
    External integrations with other cloud services are a key part of application development and also the implementation of your solution, thus one of the two brand new chapters focuses on Connected Apps, Named Credentials, External Services and External Objects, with worked examples against existing services or sample Heroku-based services. Einstein has an ever-growing surface area across Salesforce products and the platform. While this topic alone is worth an entire book, I took the time in the second new chapter to enumerate Einstein from the perspective of developer and customer configurations. The Formula 1 motor racing theme continues with the ingest of historic race data that you can run AI over.
  • Other Updates
    Among other updates is a fairly extensive update to the CI/CD chapter, which still covers Jenkins but now leverages the new Jenkins Pipeline feature to integrate the SFDX CLI. The Unit Testing chapter has also been extended with further thoughts on unit vs integration testing and a focus on Lightning Web Component testing.

The above is just the highlights for this third edition; you can see the full table of contents here. A massive thanks to everyone involved for providing the inspiration and support for making this third edition happen! Enjoy!



Salesforce DX Integration Strategies

This blog will cover three ways by which you can interact programmatically with Salesforce DX. DX provides a number of time-saving utilities and commands; sometimes though you want to either combine those together or write your own that fit better with your way of working. Fortunately, DX is very open and in fact goes beyond just interacting with the CLI.

If you are familiar with DX you will likely already be writing, or have used, shell scripts around the CLI; those scripts are code, and the CLI commands and their outputs (especially in JSON mode) are the API in this case. The goal of this blog is to highlight this approach further, along with other programming options via the REST API or Node.js.

Broadly speaking, DX is composed of layers, from client-side services to those at the backend. Each of these layers is actually supported and available for you as a developer to consume as well. The diagram shown here illustrates these layers, and the following sections highlight some examples and further use cases for each.

DX CLI

Programming via shell scripts is very common and there is a huge wealth of content and help on the internet regardless of your platform. You can perform conditional operations, use variables and even perform loops. The one downside is that they are platform specific. So if supporting users on multiple platforms is important to you, and you have skills in other more platform-neutral languages, you may want to consider automating the CLI that way.

Regardless of how you invoke the CLI, parsing human-readable text from CLI commands is not a great experience and leads to fragility (as it can, and should be allowed to, change between releases). Thus all Salesforce DX commands support the --json parameter. First, let’s consider the default output of the following command.

sfdx force:org:display
=== Org Description
KEY              VALUE
───────────────  ──────────────────────────────────────────────────────────────────────
Access Token     00DR00.....O1012
Alias            demo
Client Id        SalesforceDevelopmentExperience
Created By       admin@sf-fx.org
Created Date     2019-02-09T23:38:10.000+0000
Dev Hub Id       admin@sf-fx.org
Edition          Developer
Expiration Date  2019-02-16
Id               00DR000000093TsMAI
Instance Url     https://customization-java-9422-dev-ed....salesforce.com/
Org Name         afawcett Company
Status           Active
Username         test....a@example.com

Now let’s contrast the output of this command with the --json parameter.

sfdx force:org:display --json
{"status":0,"result":{"username":"test...a@example.com","devHubId":"admin@sf-fx.org","id":"00DR000000093TsMAI","createdBy":"admin@sf-fx.org","createdDate":"2019-02-09T23:38:10.000+0000","expirationDate":"2019-02-16","status":"Active","edition":"Developer","orgName":"afawcett Company","accessToken":"00DR000...yijdqPlO1012","instanceUrl":"https://customization-java-9422-dev-ed.mobile02.blitz.salesforce.com/","clientId":"SalesforceDevelopmentExperience","alias":"demo"}}}

If you are using a programming language with support for interpreting JSON you can now start to parse the response to obtain the information you need. However, if you are using shell scripts you need a little extra assistance. Thankfully there is an awesome open source utility called jq to the rescue. Simply piping the JSON output through the jq command allows you to get a better look at things…

sfdx force:org:display --json | jq
{
  "status": 0,
  "result": {
    "username": "test-hm83yjxhunoa@example.com",
    "devHubId": "admin@sf-fx.org",
    "id": "00DR000000093TsMAI",
    "createdBy": "admin@sf-fx.org",
    "createdDate": "2019-02-09T23:38:10.000+0000",
    "expirationDate": "2019-02-16",
    "status": "Active",
    "edition": "Developer",
    "orgName": "afawcett Company",
    "accessToken": "00DR000....O1012",
    "instanceUrl": "https://customization-java-9422-dev-ed.....salesforce.com/",
    "clientId": "SalesforceDevelopmentExperience",
    "alias": "demo"
  }
}

You can then get a bit more specific in terms of the information you want.

sfdx force:org:display --json | jq .result.id -r
00DR000000093TsMAI

You can combine this into a shell script to set variables as follows.

ORG_INFO=$(sfdx force:org:display --json)
ORG_ID=$(echo $ORG_INFO | jq .result.id -r)
ORG_DOMAIN=$(echo $ORG_INFO | jq .result.instanceUrl -r)
ORG_SESSION=$(echo $ORG_INFO | jq .result.accessToken -r)

All the DX commands support JSON output, including the query commands…

sfdx force:data:soql:query -q "select Name from Account" --json | jq .result.records[0].Name -r
GenePoint

The Sample Script for Installing Packages with Dependencies has a great example of using JSON output from the query commands to auto-discover package dependencies. This approach can, however, be adapted to any object. It also shows another useful technique of combining Python within a shell script.

DX Core Library and DX Plugins

This is a Node.js library that contains core DX functionality such as authentication, org management, project management and the ability to invoke REST APIs against scratch orgs via JSForce. This library is most commonly used when you are authoring a DX plugin; however, it can also be used standalone if you have an existing Node.js based tool or CLI library you want to embed DX in.

The samples folder here contains some great examples. This example shows how to use the library to access the alias information and provide a means for the user to edit the alias names.

  // Enter a new alias
  const { newAlias } = await inquirer.prompt([
    { name: 'newAlias', message: 'Enter a new alias (empty to remove):' }
  ]);

  if (alias !== 'N/A') {
    // Remove the old one
    aliases.unset(alias);
    console.log(`Unset alias ${chalk.red(alias)}`);
  }

  if (newAlias) {
    aliases.set(newAlias, username);
    console.log(
      `Set alias ${chalk.green(newAlias)} to username ${chalk.green(username)}`
    );
  }

Tooling API Objects

Finally, there is a host of Tooling API objects that support the above features and some additional ones. These are fully documented and accessible via the Salesforce Tooling API for use in your own plugins or applications capable of making REST API calls. Keep in mind you can do more than just query these objects; some also represent processes, meaning that when you insert into them they do stuff! Here is a brief summary of the most interesting objects, followed by a small query example.

  • PackageUploadRequest, MetadataPackage, MetadataPackageVersion represent objects you can use as a developer to automate the uploading of first generation packages.
  • Package2, Package2Version, Package2VersionCreateRequest and Package2VersionCreateRequestError represent objects you can use as a developer to automate the uploading of second generation packages.
  • PackageInstallRequest, SubscriberPackage, SubscriberPackageVersion and Package2Member (second generation only) represent objects that allow you to automate the installation of a package and also allow you to discover the contents of packages installed within an org.
  • SandboxProcess and SandboxInfo represent objects that allow you to automate the creation and refresh of Sandboxes, as well as query for existing ones. For more information see the summary at the bottom of this help topic.
  • SourceMember represents changes you make when using the Setup menu within a Scratch org. It is used by the push and pull commands to track changes. The documentation claims you can create and update records in this object; however, I would recommend that you only use it for informational purposes. For example, you could write your own poller tool to drive code generation based on custom object changes.
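
Purely as an illustrative sketch (the API version, the SourceMember query and the callout setup are assumptions, and the org domain must be permitted as a Remote Site or reached via a Named Credential), the following Apex queries one of these objects through the Tooling API REST endpoint:

// Illustrative only: query the Tooling API SourceMember object from Apex via REST
HttpRequest req = new HttpRequest();
req.setEndpoint(URL.getOrgDomainUrl().toExternalForm()
    + '/services/data/v50.0/tooling/query/?q='
    + EncodingUtil.urlEncode('SELECT MemberName, MemberType FROM SourceMember', 'UTF-8'));
req.setMethod('GET');
req.setHeader('Authorization', 'Bearer ' + UserInfo.getSessionId());
HttpResponse res = new Http().send(req);
System.debug(res.getBody()); // JSON describing components changed in the scratch org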

IMPORTANT NOTE: Be sure to consider what CLI commands exist to accomplish your need. As you’ve read above, it’s easy to automate those commands and they manage a lot of the complexity in interacting with these objects directly. This is especially true for the packaging objects.

Summary

The above options represent a rich set of abilities to integrate and extend DX. Keep in mind that the deeper you go the more flexibility you get, but you are also taking on more complexity. So choose wisely and/or use a mix of approaches. Finally, worthy of mention is the future of the SFDX CLI and Oclif. Salesforce is busy updating the internals of the DX CLI to use this library and, once complete, this will open up cool new possibilities such as CLI hooks, which will allow you to extend the existing commands.



FinancialForce Apex Common Community Updates

This short blog highlights a batch of new features recently merged into the FinancialForce Apex Common library, aka fflib. In addition to the various Dreamforce and blog resources linked from the repo, fans of Trailhead can also find modules relating to the library here and here. But please read this blog first before heading out to the trails to hunt down badges! It’s really pleasing to see the library continue to get great contributions, so here goes…

Added methods for detecting changed records with given fields in the Domain layer (fflib_SObjectDomain)

First up is a great new optimization feature for your Domain class methods from Nathan Pepper aka MayTheSForceBeWithYou, based on a suggestion by Daniel Hoechst. Where applicable, it’s a good optimization practice to consider comparing the old and new values of fields relating to the processing you are doing in your Domain methods, to avoid unnecessary overheads. The new fflib_SObjectDomain.getChangedRecords method can be used as an alternative to the Records property to access just the records that have changed, based on the field list passed to the method.

// Returns a list of Account where the Name or AnnualRevenue has changed
List<Account> accounts =
  (List<Account>) getChangedRecords(
     new List<SObjectField> { Account.Name, Account.AnnualRevenue });

Supporting EventBus.publish(list<SObject>) in Unit of Work (fflib_SObjectUnitOfWork)

Platform Events are becoming ever more popular in many situations. If you regard them as logically part of the unit of work your code is performing, this enhancement from Chris Mail is for you! You can now register platform events to be sent based on various scenarios. Chris has also provided bulkified versions of the following methods, nice!

    /**
     * Register a newly created SObject (Platform Event) instance to be published when commitWork is called
     *
     * @param record A newly created SObject (Platform Event) instance to be inserted during commitWork
     **/
    void registerPublishBeforeTransaction(SObject record);
    /**
     * Register a newly created SObject (Platform Event) instance to be published when commitWork has successfully
     * completed
     *
     * @param record A newly created SObject (Platform Event) instance to be inserted during commitWork
     **/
    void registerPublishAfterSuccessTransaction(SObject record);
    /**
     * Register a newly created SObject (Platform Event) instance to be published when commitWork has caused an error
     *
     * @param record A newly created SObject (Platform Event) instance to be inserted during commitWork
     **/
    void registerPublishAfterFailureTransaction(SObject record);
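
A quick illustrative usage sketch follows (OrderShipped__e is a hypothetical platform event, and the Application factory is assumed to be set up per the usual fflib pattern).

// Illustrative only: the event is published only if commitWork completes successfully
fflib_ISObjectUnitOfWork uow = Application.UnitOfWork.newInstance();
uow.registerNew(new Account(Name = 'Example Account'));
uow.registerPublishAfterSuccessTransaction(new OrderShipped__e());
uow.commitWork();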

Add custom DML for Application.UnitOfWork.newInstance call (fflib_Application)

It’s been possible for a while now to override the default means by which the fflib_SObjectUnitOfWork.commitWork method performs DML operations (for example, if you wanted to do some additional pre/post processing or logging). However, if you have been using the Application class pattern to access your UOW (shorthand, and helps with mocking) then this has not been possible. Thanks to William Velzeboer you can now get the best of both worlds!

fflib_SObjectUnitOfWork.IDML myDML = new MyCustomDMLImpl();
fflib_ISObjectUnitOfWork uow = Application.UnitOfWork.newInstance(myDML);

Added methods to Unit of Work to be able to register record for upsert (fflib_SObjectUnitOfWork)

Unit Of Work is a very popular class and receives yet another enhancement in this batch, from Yury Bondarau. These two methods allow you to register records that will either be inserted or updated, as automatically determined by whether the records have an Id populated or not, aka a UOW upsert.

    /**
     * Register a new or existing record to be inserted or updated during the commitWork method
     *
     * @param record A new or existing record
     **/
    void registerUpsert(SObject record);
    /**
     * Register a list of mix of new and existing records to be upserted during the commitWork method
     *
     * @param records A list of mix of existing and new records
     **/
    void registerUpsert(List<SObject> records);
    /**
     * Register an existing record to be deleted during the commitWork method
     *
     * @param record An existing record
     **/
    void registerDeleted(SObject record);
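
A quick illustrative usage sketch follows (existingAccountId is assumed to be the Id of a record already in the database).

// Illustrative only: mix new and existing records in a single unit of work
fflib_ISObjectUnitOfWork uow = Application.UnitOfWork.newInstance();
uow.registerUpsert(new Account(Name = 'Brand New Account')); // no Id, will be inserted
uow.registerUpsert(new Account(Id = existingAccountId, Name = 'Renamed Account')); // has Id, will be updated
uow.commitWork();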

Alleviates unit-test exception when Org’s email service is limited

Finally, long-term mega fan of the library John Storey comes in with an ingenious fix to an Apex test failure which occurs when the org’s email deliverability ‘Access Level’ setting is not ‘All Email’. John leveraged an extensibility feature in the Unit Of Work to avoid the test being dependent on this org config, all while not losing any code coverage, sweet!

Last but not least, thank you Christian Coleman for fixing those annoying typos in the docs! 🙂



Custom Keyboard Shortcuts with Lightning Background Utilities


As readers of my blog will know, I am a big fan of the rich features the Lightning Experience UI provides to developers. Having blogged several times about the amazing Utility Bar, I have been keen to explore new possibilities with the new Background Utility feature. These are utilities that have no UI, so they do not use up space in the Utility Bar. Instead, they sit in the background monitoring things like events generated by the user. One such documented use case is the possibility to monitor keyboard events! And so the Custom Keyboard Shortcut Component has been born! This component effectively runs Flows based on keyboard shortcuts defined by the admin! More on this later…


You may or may not know that Lightning Experience actually already provides some standard keyboard shortcuts. Just press Cmd+/ (Mac) or Ctrl+/ (Windows) to get a nice summary of them!

However, per the standard shortcut documentation, it’s not possible to add custom ones. By using the new lightning:backgroundUtilityItem interface we can rectify this. This blog explains a basic hardcoded example component and also introduces an open source component (installable package provided) that links admin defined keyboard shortcuts to Flows and certain navigation events.

In just a few lines of markup and JavaScript code you can get a basic example up and running.

<aura:component implements="lightning:backgroundUtilityItem" >
	<aura:handler name="init" value="{!this}" action="{!c.init}" />
</aura:component>

The component controller simply uses the standard addEventListener method. You can also inspect the keydown event properties to determine which keys are pressed, such as Shift or Control plus another key. This example simply checks whether H is pressed and navigates to Home.

({
    init: function(component, event, helper) {
        window.addEventListener('keydown', function(e) {
            if (e.key === 'H') {
                $A.get('e.force:navigateToURL').fire({ url: '/lightning/page/home' });
            }
        });
    }
})

Once deployed, go to the App Manager under Setup and add the component to the Utility Items list, and that’s it! Note that the component has a different icon, indicating it’s a non-visual component. Neat!

Of course, I could not simply leave things like this, so I set about making a more dynamic version. The configuration of the Custom Keyboard Shortcut component is shown at the top of this blog. It leverages the fact that when you configure a Utility Bar component, the App Manager inspects the component’s .design file to understand what attributes the component needs the user to configure. At runtime, the controller logic then parses the 9 attributes containing the keyboard shortcuts entered by the user into an internal map that is used by the keyboard event handler to match actions against keyboard activity.

Once you have installed the component, either via a package install (admin friendly) or via sfdx force:source:deploy (devs), add the component within the App Manager to configure keyboard shortcuts.

Through configuration you can connect keyboard shortcuts to the following:

  • Open a UI Flow in a modal popup
  • Run an Autolaunch Flow
  • Display popup messages communicating the actions taken by the flow
  • Navigate the user to the Home tab
  • Navigate the user to records created by the flow

Further details on configuring the component can be found in the README here. Finally, you may recall that I used a Background Utility in this year’s Dreamforce presentation. In that case, it was using the new Streaming Component to listen to Platform Events. You can find the source code here.

Have fun!

 



Adding Clicks not Code Extensibility to your Apex with Lightning Flow

Building solutions on the Lightning Platform is a highly collaborative process, due to its unique ability to allow Trailblazers in a team to operate in no code, low code and/or code environments. Lightning Flow is a Salesforce native tool for no code automation and Apex is the native programming language of the platform — the code!

A flow author is able to create no-code solutions using the Cloud Flow Designer tool that can query and manipulate records, post Chatter posts, manage approvals, and even make external callouts. Conversely, using Salesforce DX, the Apex developer can of course do all these things and more! This blog post presents a way in which two Trailblazers (meaning a flow author and an Apex developer) can consider options that allow them to share the work of both building and maintaining a solution.

Often a flow is considered the start of a process, typically and traditionally a UI wizard or, more latterly, something that is triggered when a record is updated (via Process Builder). We also know that via invocable methods, flows and processes can call Apex. What you might not know is that the reverse is also true! Just because you have decided to build a process via Apex, you can still leverage flows within that Apex code. Such flows are known as autolaunched flows, as they have no UI.
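
As a minimal illustrative sketch (the flow API name, its variables and the someOrderId value are all assumptions), calling an autolaunched flow from Apex can look like this:

// Illustrative only: invoke an autolaunched flow and read back an output variable
Map<String, Object> inputs = new Map<String, Object> { 'OrderId' => someOrderId };
Flow.Interview discountFlow = Flow.Interview.createInterview('Calculate_Discount', inputs);
discountFlow.start();
Decimal discount = (Decimal) discountFlow.getVariableValue('Discount');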


I am honored to have this blog hosted on the Salesforce Blog site. To continue reading the rest of this blog, head on over to the Salesforce.com blog post here.