Andy in the Cloud

From BBC Basic to Force.com and beyond…



Reducing Field Validation Boilerplate Code

Boilerplate code is code that we repeat often with little or no variation. When it comes to writing field validations in Apex, especially within Apex Triggers, there are a number of examples of this, particularly when checking if a field value has changed and/or querying for related records, which also requires observing good bulkification best practices. This blog presents a small proof of concept framework aimed at reducing such logic, made possible in Winter ’21 by a small but critical enhancement to the Apex runtime that allows developers to dynamically add field errors. Additionally, for those practicing unit testing, the enhancement also allows tests to assert such errors without DML!

This is a very basic demonstration of the new addError and getErrors methods.

Opportunity opp = new Opportunity();
opp.addError('Description', 'Error Message!'); 
List<Database.Error> errors = opp.getErrors();
System.assertEquals(1, errors.size());
System.assertEquals('Error Message!', errors[0].getMessage());
System.assertEquals('Description', errors[0].getFields()[0]);

However, in order to really appreciate the value these two features bring to frameworks, let us first review a use case and the traditional approach to coding such validation logic. Our requirements are:

  1. When updating an Opportunity, validate the Description and AccountId fields.
  2. If the StageName field changes to “Closed Won” and the Description field has changed, ensure it is not null.
  3. If the AccountId field changes, ensure that the NumberOfEmployees field on the related Account is not null.
  4. Ensure code is bulkified and queries are only performed when needed.

The following code implements the above requirements, but does contain some boilerplate code.

// Classic style validation
switch on Trigger.operationType {
    when AFTER_UPDATE {
        // Prescan to bulkify querying for related Accounts
        Set<Id> accountIds = new Set<Id>();
        for (Opportunity opp : newMap.values()) {
            Opportunity oldOpp = oldMap.get(opp.Id);
            if(opp.AccountId != oldOpp.AccountId) { // AccountId changed?
                accountIds.add(opp.AccountId);
            }
        }                
        // Query related Account records?
        Map<Id, Account> associatedAccountsById = accountIds.size()==0 ? 
            new Map<Id, Account>() : 
            new Map<Id, Account>([select Id, NumberOfEmployees from Account where Id in :accountIds]);
        // Validate
        for (Opportunity opp : newMap.values()) {
            Opportunity oldOpp = oldMap.get(opp.Id);
            if(opp.StageName != oldOpp.StageName) { // Stage changed?
                if(opp.StageName == 'Closed Won') { // Stage closed won?
                    if(opp.Description != oldOpp.Description) { // Description changed?               
                        if(opp.Description == null) { // Description null?
                            opp.Description.addError('Description must be specified when Opportunity is closed');
                        }
                    }
                }                                
            }
            if(opp.AccountId != oldOpp.AccountId) { // AccountId changed?
                Account acct = associatedAccountsById.get(opp.AccountId);
                if(acct!=null) { // Account queried?
                    if(acct.NumberOfEmployees==null) { // NumberOfEmployees null?
                        opp.AccountId.addError('Account does not have any employees');
                    }    
                }
            }
        }
    }
}               

Below is the same validation implemented using a framework built to reduce boilerplate code.

SObjectFieldValidator.build()            
  .when(TriggerOperation.AFTER_UPDATE)
    .field(Opportunity.Description).hasChanged().isNull().addError('Description must be specified when Opportunity is closed')
      .when(Opportunity.StageName).hasChanged().equals('Closed Won')
    .field(Opportunity.AccountId).whenChanged().addError('Account does not have any employees')
      .when(Account.NumberOfEmployees).isNull()
  .validate(operation, oldMap, newMap);

The SObjectFieldValidator framework uses a fluent API style, which allows the validator to be constructed dynamically with ease. Additionally, configured instances of it can be passed around and extended by other code paths and modules, with the validation itself performed in one pass. The framework also attempts some smarts to bulkify queries (in this case for related Accounts) and only performs them if the target field or related fields have been modified, thus ensuring optimal processing time. The test code for either approach can of course be written in the usual way, as shown below.

// Given
Account relatedAccount = new Account(Name = 'Test', NumberOfEmployees = null);        
insert relatedAccount;
Opportunity opp = new Opportunity(Name = 'Test', CloseDate = Date.today(), StageName = 'Prospecting', Description = 'X', AccountId = null);
insert opp;
opp.StageName = 'Closed Won';
opp.Description = null;
opp.AccountId = relatedAccount.Id;
// When
Database.SaveResult saveResult = Database.update(opp, false);
// Then
List<Database.Error> errors = saveResult.getErrors();
System.assertEquals(2, errors.size());
System.assertEquals('Description', errors[0].getFields()[0]);
System.assertEquals('Description must be specified when Opportunity is closed', errors[0].getMessage());
System.assertEquals('AccountId', errors[1].getFields()[0]);
System.assertEquals('Account does not have any employees', errors[1].getMessage());

While you do still need code coverage for your Apex Trigger logic, those practicing unit testing may prefer to leverage the ability to avoid DML in order to assert more varied validation scenarios. The following code is entirely free of SOQL and DML statements and thus better for test performance. It leverages the ability to inject related records rather than letting the framework query them on demand. The SObjectFieldValidator instance is constructed and configured in a separate class for reuse.

// Given
Account relatedAccount = 
    new Account(Id = TEST_ACCOUNT_ID, Name = 'Test', NumberOfEmployees = null);
Map<Id, SObject> oldMap = 
    new Map<Id, SObject> { TEST_OPPORTUNITY_ID => 
        new Opportunity(Id = TEST_OPPORTUNITY_ID, StageName = 'Prospecting', Description = 'X', AccountId = null)};
Map<Id, SObject> newMap = 
    new Map<Id, SObject> { TEST_OPPORTUNITY_ID => 
        new Opportunity(Id = TEST_OPPORTUNITY_ID, StageName = 'Closed Won', Description = null, AccountId = TEST_ACCOUNT_ID)};
Map<SObjectField, Map<Id, SObject>> relatedRecords = 
    new Map<SObjectField, Map<Id, SObject>> {
        Opportunity.AccountId => 
            new Map<Id, SObject>(new List<Account> { relatedAccount })};
// When
OpportunityTriggerHandler.getValidator()
  .validate(TriggerOperation.AFTER_UPDATE, oldMap, newMap, relatedRecords); 
// Then
List<Database.Error> errors = newMap.get(TEST_OPPORTUNITY_ID).getErrors();
System.assertEquals(2, errors.size());
System.assertEquals('AccountId', errors[0].getFields()[0]);
System.assertEquals('Account does not have any employees', errors[0].getMessage());
System.assertEquals('Description', errors[1].getFields()[0]);
System.assertEquals('Description must be specified when Opportunity is closed', errors[1].getMessage());
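
The separate class mentioned above is not shown here in full. A minimal sketch of the hypothetical OpportunityTriggerHandler.getValidator factory method, simply wrapping the builder expression shown earlier (and assuming build() returns the SObjectFieldValidator instance for chaining), might look like this:

public class OpportunityTriggerHandler {
    // Returns a reusable, pre-configured validator for this object
    public static SObjectFieldValidator getValidator() {
        return SObjectFieldValidator.build()
          .when(TriggerOperation.AFTER_UPDATE)
            .field(Opportunity.Description).hasChanged().isNull().addError('Description must be specified when Opportunity is closed')
              .when(Opportunity.StageName).hasChanged().equals('Closed Won')
            .field(Opportunity.AccountId).whenChanged().addError('Account does not have any employees')
              .when(Account.NumberOfEmployees).isNull();
    }
}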

Finally, it is worth noting that such a framework can of course only get you so far, and there will be scenarios where you need richer criteria. This is something that could be explored further through the SObjectFieldValidator.FieldValidationCondition base type, which allows coded field validations to be added via the condition method. The framework is pretty basic, as I really do not have a great deal of time these days to build it out more fully, so I totally invite anyone interested to take it further.
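
As a very rough sketch only (the method name and signature on the FieldValidationCondition base type are assumptions here; check the framework source for the actual contract), a coded condition might look something like this:

// Hypothetical sketch: assumes the base type exposes a single evaluation
// method that returns true when the field value is acceptable
public class PositiveAmountCondition extends SObjectFieldValidator.FieldValidationCondition {
    public override Boolean validate(SObject record) {
        Opportunity opp = (Opportunity) record;
        return opp.Amount == null || opp.Amount > 0;
    }
}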

Enjoy!



Apex Process Orchestration and Monitoring with Platform Events

When it comes to implementing asynchronous workloads in Apex, developers have a number of options, such as Batch Apex and Queueable, each of which can be driven by user or system actions. This blog focuses on some of the more advanced aspects of implementing async workloads using Platform Events and Apex.

In comparison to other approaches, implementing asynchronous workloads using Platform Events offers two unique features. The first lets you dynamically calibrate and manage resources based on data volumes to stay within limits, while the second provides automatic retry capabilities when errors occur. Lastly, I want to highlight an approach I used to add some custom realtime telemetry to the workload, also using Platform Events.

Side Note: Before getting into the details, the goal of this blog is not to say one async option is better than another, but rather to highlight the above features further so you can better consider your options. I also include a short comparison with Batch Apex at the end of this blog.

Business Scenario

Let’s imagine the following scenario to help illustrate the use of the features described below:

  • Business Process
    Imagine that your business processes Invoice generation on the platform, and that the Orders that drive this arrive and are updated constantly.
  • Continuous Processing
    In order to avoid backlogs or spikes of invoices being processed, you want to maintain a continuous flow of the overall process. For this you create a Platform Event called Generate Invoice. This event can easily be sent by admins / declarative builders who have perhaps set up some rules on the Orders object using Process Builder.
  • Resource Management
    Orders arrive in all shapes and sizes, meaning the processing required to generate Invoices can also vary when you consider variables such as the number of order lines, product regional discounts, currencies, tax rules etc. Processing one Order per execution context is an obvious way to maximize use of available resources and is certainly an option, but if resources allow, processing multiple invoices in one execution context is more efficient.

Below is what the Generate Invoice Platform Event looks like; it simply has a reference to the Order Id (though it could equally reference an External Id on the Order object).

For the purposes of this blog we are not focusing on how the events are sent / published. You can publish events using programmatic APIs on or off platform, or using one of the platform’s declarative tools; there are in fact many ways to send events. For this blog we will just use a basic Apex snippet to generate the events, as shown below.

List<GenerateInvoice__e> events = new List<GenerateInvoice__e>();
for(Order order : 
       [select Id from Order 
          where Invoiced__c != true 
          order by OrderNumber asc]) {
   events.add(new GenerateInvoice__e(OrderId__c = order.Id));
}
EventBus.publish(events);        

Here is a basic Apex handler for the above Platform Event that delegates the processing to another Apex class:

trigger GenerateInvoiceSubscriber on GenerateInvoice__e (after insert) {
    Set<Id> orderIds = new Set<Id>();
    for (GenerateInvoice__e event : Trigger.New) {
        orderIds.add(event.OrderId__c);
    }
    OrderService.generateInvoices(orderIds);
}

Processing Chunks of Events and Handling Retries

The following diagram highlights how a more advanced version of the above Apex handler can be used to optimally work within the limits to process chunks of Orders based on their size/complexity and also retry those that result in some errors along the way.

In order to orchestrate things this way you need to use some Apex APIs in your handler logic to let the platform know a few things. At the end of this blog I also share how I added telemetry to better visualize this, along with a video. So don’t worry at this juncture if it’s not 100% clear how this is possible, just keep reading and watching!

Controlling how many Events are passed to your handler

Imagine the above code snippet published 1000 events. The platform docs state that it can pass up to a maximum of 2000 events to an Apex event handler at once, meaning the above will be invoked once. If you have been on the platform a while you will know that 200 (not 2000) is the common number used to express the minimum number of records you should use when testing Apex Triggers, and general bulkification best practice. So why 2000 in the case of platform event handlers? Well, the main aim of the platform is to drain the Platform Event message queue quickly, and so it attempts to give the handler as much as possible, just in case it can process it.

As we have set out in our scenario above, Orders can be quite variable in nature, and thus while a batch of 1000 orders with only a few order lines each might be processable within the execution limits, include a few orders in that batch with hundreds or a few thousand line items and it is more likely you will hit CPU or heap limits. Fortunately, unlike Batch Apex, you get to control the size of each individual chunk. This is done by effectively giving some of the block of 1000 events passed to your handler back to the platform, to be passed back in a separate handler invocation where the limits are reset.

Below is some basic code that illustrates how you might go about pre-scanning the Orders to determine complexity (by number of lines) and thus dynamically calibrate how many of the events your code can really process within the limits. The orderIds set that is passed to the service class is reset with just the orders that can be processed. The key part here is the use of the setResumeCheckpoint method, which tells the platform where to resume from after this handler has completed its processing.

trigger GenerateInvoiceSubscriber on GenerateInvoice__e (after insert) { 

    // Determine overall number of order lines to process 
    //   vs maximum within limits (could be config)
    Integer maxLines = 40000;
    Set<Id> orderIds = new Set<Id>();
    for (GenerateInvoice__e event : Trigger.New) {
        orderIds.add(event.OrderId__c);
    }
    Map<Id, Integer> lineCountByOrderId = 
        new OrdersSelector().selectLineCountById(orderIds);

    // Bulkify events passed to the OrderService
    orderIds = new Set<Id>();
    Integer lineCount = 0;
    for (GenerateInvoice__e event : Trigger.New) {
        orderIds.add(event.OrderId__c);
        EventBus.TriggerContext.currentContext().setResumeCheckpoint(event.ReplayId);
        lineCount = lineCount + lineCountByOrderId.get(event.OrderId__c);
        if(lineCount>maxLines) { 
            break;
        }
    }

    OrderService.generateInvoices(orderIds);
}
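
The OrdersSelector class used above is not shown in the original post. A minimal sketch, assuming a simple aggregate query over the standard OrderItem object (the class and method names here just mirror the handler code), might look like this:

public class OrdersSelector {
    // Returns the number of order lines per Order Id via a single aggregate query
    public Map<Id, Integer> selectLineCountById(Set<Id> orderIds) {
        Map<Id, Integer> lineCountByOrderId = new Map<Id, Integer>();
        for (AggregateResult result : [
                select OrderId, count(Id) lineCount
                from OrderItem
                where OrderId in :orderIds
                group by OrderId]) {
            lineCountByOrderId.put(
                (Id) result.get('OrderId'),
                (Integer) result.get('lineCount'));
        }
        return lineCountByOrderId;
    }
}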

You can read more about this approach in the formal documentation here.

Implementing Retry Logic

There are a number of reasons errors can occur when your handler code is running. For errors that represent transient situations, such as timeouts and row locks for example, you would normally have to ask the user to retry (via email or notification) or utilize a dynamically scheduled job to retry. With Platform Event handlers in Apex, when the system EventBus.RetryableException is thrown, the platform will automatically retry a batch of events after a period of time, up to 9 times (the batch sizes may vary between attempts). It is generally recommended that you do not let your code retry more than 6 times, since when the maximum is reached the platform deactivates the handler/trigger.

The following code is a basic illustration of how to use this facility and track the number of retries before reaching the max. In this example, if the soft maximum is reached the events are effectively just lost; if needed, you could instead write them to a staging custom object for resubmission, or simply have some code such as the above scan for unprocessed Orders and resubmit events.

    // Invoke OrderService, support retries
    try {
        OrderService.generateInvoices(orderIds);
    } catch (Exception e) {
        // Only retry so many times, before giving up (thus avoid disabling the trigger)
        if (EventBus.TriggerContext.currentContext().retries < 6) {
            throw new EventBus.RetryableException(e.getMessage());
        }
        // In this case its ok to let the events drain away... 
        //   since new events for unprocessed Orders can always be re-generated
    }    

Using Platform Events to monitor activity

I used Platform Events to publish telemetry about the execution of the above handlers, by creating another Platform Event called Subscriber Telemetry, and used a Lightning Web Component to monitor the events in realtime. Because Platform Events can be declared as publishing outside the standard Apex transaction (via the “Publish Immediately” setting), they are sent even if an error occurs.

To publish to this event I simply added the following snippet of code to the top of my handler.

// Emit telemetry
EventBus.publish(
    new SubscriberTelemetry__e(
        Topic__c = 'GenerateInvoice__e', 
        ApexTrigger__c = 'GenerateInvoiceSubscriber',
        Position__c = 
           [select Position from EventBusSubscriber
              where Topic = 'GenerateInvoice__e'][0].Position,
        BatchSize__c = Trigger.new.size(),
        Retries__c = EventBus.TriggerContext.currentContext().retries,
        LastError__c = EventBus.TriggerContext.currentContext().lastError));

The following video shows me clicking a button to publish a batch of 1000 events, then monitoring the effects on my chunking logic and retry logic. The video actually includes me fixing some data errors in order to highlight the retry capabilities. The errors shown are contrived by some deliberately bad code to illustrate the retry logic, hence the fix to the Order records looks a bit odd, so please ignore that. Finally, note that the platform chose to send my handler 83 events first and then larger chunks thereafter, but in other tests I got 1000 events in the first chunk.

Batch Apex vs Platform Events

Batch Apex also provides a means to sequentially orchestrate the processing of records in chunks, so I thought I would end here with a summary of some of the other differences. As you can see, one of the key ones to consider is the user identity the code runs as. This is not impossible to work around in the platform event handler case, but it requires some coding to explicitly set the OwnerId field on records, if that information is important to you. Overall though, I do feel that Platform Events offer some useful options for switching to a more continuous mode of operation vs batch; so long as you are aware of the differences, this might be a good fit for you.

Side Note: For Apex Queueable handlers you will soon have the option to implement so-called Transaction Finalizers that allow you to implement retry or logging logic.



FinancialForce Apex Common Community Updates

This short blog highlights a batch of new features recently merged into the FinancialForce Apex Common library, aka fflib. In addition to the various Dreamforce and blog resources linked from the repo, fans of Trailhead can also find modules relating to the library here and here. But please read this blog first before heading out to the trails to hunt down badges! It’s really pleasing to see the library continue to get great contributions, so here goes…

Added methods for detecting changed records with given fields in the Domain layer (fflib_SObjectDomain)

First up is a great new optimization feature for your Domain class methods, from Nathan Pepper aka MayTheSForceBeWithYou, based on a suggestion by Daniel Hoechst. Where applicable, it is a good optimization practice to consider comparing the old and new values of fields relating to the processing you are doing in your Domain methods, to avoid unnecessary overheads. The new fflib_SObjectDomain.getChangedRecords method can be used as an alternative to the Records property to access just the records that have changed, based on the field list passed to the method.

// Returns a list of Account where the Name or AnnualRevenue has changed
List<Account> accounts =
  (List<Account>) getChangedRecords(
     new List<SObjectField> { Account.Name, Account.AnnualRevenue });

Supporting EventBus.publish(List<SObject>) in Unit of Work (fflib_SObjectUnitOfWork)

Platform Events are becoming ever more popular in many situations. If you regard them as logically part of the unit of work your code is performing, this enhancement from Chris Mail is for you! You can now register platform events to be sent based on various scenarios. Chris has also provided bulkified versions of the following methods, nice!

    /**
     * Register a newly created SObject (Platform Event) instance to be published when commitWork is called
     *
     * @param record A newly created SObject (Platform Event) instance to be inserted during commitWork
     **/
    void registerPublishBeforeTransaction(SObject record);
    /**
     * Register a newly created SObject (Platform Event) instance to be published when commitWork has successfully
     * completed
     *
     * @param record A newly created SObject (Platform Event) instance to be inserted during commitWork
     **/
    void registerPublishAfterSuccessTransaction(SObject record);
    /**
     * Register a newly created SObject (Platform Event) instance to be published when commitWork has caused an error
     *
     * @param record A newly created SObject (Platform Event) instance to be inserted during commitWork
     **/
    void registerPublishAfterFailureTransaction(SObject record);
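
As a small usage sketch only (reusing the GenerateInvoice__e event and OrderId__c field from the Platform Events post above purely as an example; the rest is illustrative):

Id orderId; // placeholder for a real Order Id
fflib_SObjectUnitOfWork uow = new fflib_SObjectUnitOfWork(
    new List<Schema.SObjectType> { Account.SObjectType });
uow.registerNew(new Account(Name = 'Test Account'));
uow.registerPublishAfterSuccessTransaction(new GenerateInvoice__e(OrderId__c = orderId));
uow.commitWork(); // inserts the Account, then publishes the event only on success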

Add custom DML for Application.UnitOfWork.newInstance call (fflib_Application)

It’s been possible for a while now to override the default means by which the fflib_SObjectUnitOfWork.commitWork method performs DML operations (for example, if you wanted to do some additional pre/post processing or logging). However, if you have been using the Application class pattern to access your UOW (shorthand, and helps with mocking), then this has not been possible. Thanks to William Velzeboer, you can now get the best of both worlds!

fflib_SObjectUnitOfWork.IDML myDML = new MyCustomDMLImpl();
fflib_ISObjectUnitOfWork uow = Application.UnitOfWork.newInstance(myDML);
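
For completeness, here is a minimal sketch of the custom IDML implementation referenced above. The method names assume the fflib_SObjectUnitOfWork.IDML interface as of this batch of changes; check the version of the library you are using:

public class MyCustomDMLImpl implements fflib_SObjectUnitOfWork.IDML {
    // Example: add logging around each DML operation performed by commitWork
    public void dmlInsert(List<SObject> objList) {
        System.debug('Inserting ' + objList.size() + ' records');
        insert objList;
    }
    public void dmlUpdate(List<SObject> objList) {
        System.debug('Updating ' + objList.size() + ' records');
        update objList;
    }
    public void dmlDelete(List<SObject> objList) {
        System.debug('Deleting ' + objList.size() + ' records');
        delete objList;
    }
    public void eventPublish(List<SObject> objList) {
        System.debug('Publishing ' + objList.size() + ' events');
        EventBus.publish(objList);
    }
}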

Added methods to Unit of Work to be able to register record for upsert (fflib_SObjectUnitOfWork)

Unit Of Work is a very popular class and receives yet another enhancement in this batch, from Yury Bondarau. These two methods allow you to register records that will either be inserted or updated, as automatically determined by whether the records have an Id populated, aka a UOW upsert.

    /**
     * Register a new or existing record to be inserted or updated during the commitWork method
     *
     * @param record A new or existing record
     **/
    void registerUpsert(SObject record);
    /**
     * Register a list containing a mix of new and existing records to be upserted during the commitWork method
     *
     * @param records A list containing a mix of existing and new records
     **/
    void registerUpsert(List<SObject> records);
    /**
     * Register an existing record to be deleted during the commitWork method
     *
     * @param record An existing record
     **/
    void registerDeleted(SObject record);
Alleviates unit-test exception when Org’s email service is limited

Finally, long-term mega fan of the library John Storey comes in with an ingenious fix to an Apex test failure which occurs when the org’s email deliverability ‘Access Level’ setting is not ‘All Email’. John leveraged an extensibility feature in the Unit Of Work to avoid the test being dependent on this org config, all while not losing any code coverage, sweet!

Last but not least, thank you Christian Coleman for fixing those annoying typos in the docs! 🙂



Managing Dependency Injection within Salesforce

When developing within Salesforce, dependencies are formed in many ways, not just those made explicitly when writing code, but also those formed by using declarative tools, such as defining Actions and Layouts for example. This blog introduces a new open source library I have been working on called Force DI. The goal is to simplify, and more importantly consolidate, where and how you configure at runtime certain dependencies between Apex, Visualforce or Lightning component code.

Forming dependencies at runtime instead of explicitly during development can be very advantageous. So whether you are attempting to decompose a large org into multiple DX packages or building a highly configurable solution, hopefully, you will find this library useful!

So what does the DI bit stand for?

The DI bit in Force DI stands for Dependency Injection, which is a form of IoC (Inversion of Control). Both are well-established patterns for providing the runtime glue between two points, basically the bit in the middle. Let’s start with an Apex example. In order to use DI, you need to forgo the use of the “new” operator at the point where you want to do the injection. For example, consider the following code:-

PaymentEngine engine = new PayPal();

In the above example, you are explicitly expressing a dependency, which not only means you have to deploy or package all your payment engines together, but also that you have hardcoded a finite set you support and thus forgone extensibility. With Force DI you can instead write:-

PaymentEngine engine = (PaymentEngine) di_Injector.Org.getInstance(PaymentEngine.class);

How does it know which class to instantiate then?

What’s happening here is that the Injector class is using binding configuration (also dynamically discovered) to find out which class to actually instantiate. This binding configuration can be admin controlled, packaged (e.g. “PayPal Package”) and/or defined dynamically via code. Setting up binding config via code enables dynamic binding by reading other configuration (e.g. the user’s payment preference) and binding accordingly.

The key goal of DI is that calling code is not concerning itself with how an instance is obtained, only what it does with it. The following shows how a declarative binding is expressed via the library’s Binding Custom Metadata Type:-

If this all seems a bit indirect, that’s the point! Because of this indirection, you can now choose to deploy/package other payment gateway implementations independently from each other as well as be sure that everywhere your other code needs a PaymentEngine the implementation is resolved consistently. For a more advanced OOP walkthrough see the code sample here.

Can this help me with other kinds of dependencies?

Yes! Let’s take the example of a Lightning Component used as an Action Override. Typically you would create a Lightning Component and associate it directly with an action override. However, this means that the object metadata, action override and the Lightning code (as well as whatever is dependent on that) must travel around together, rather than, for example, in separate DX packages. It also means that if you want to offer different variations of this action you would need to code all of that into the single component as well.

As before let’s review what the Lightning Component Action Override looks like without DI:-

<aura:component implements="lightning:actionOverride,force:hasSObjectName">
   <lightning:card title="Widget">
     <p class="slds-p-horizontal_small">Custom UI to Create a Widget ({!v.sObjectName})</p>
   </lightning:card>
</aura:component>

This component (and all its dependencies) would be directly referenced in the Action Override below:-

Now let us take a look at this again, but using the Lightning c:di_injector component in its place:-

<aura:component implements="lightning:actionOverride,force:hasSObjectName">
   <c:di_injector bindingName="lc_actionWidgetNew">
      <c:di_injectorAttribute name="sObjectName" value="{!v.sObjectName}"/>
   </c:di_injector>
</aura:component>

To make things clearer when reviewing Lightning Components in the org, the above proxy component follows a generic naming convention, such as actionWidgetNew. It is this component that is bound to the Action Override, not the real one. The Action Override now looks like this:-

The binding configuration looks like this:-

Finally, the injected Lightning Component widgetWizard looks like this:-

<aura:component>
   <aura:attribute name="sObjectName" type="String"/>
   <lightning:card title="Widget">
     <p class="slds-p-horizontal_small">Custom UI to Create a Widget ({!v.sObjectName})</p>
   </lightning:card>
</aura:component>

Note: You have the ability to pass context through to the bound Lightning Component, just as the sObjectName attribute value was passed above. The c:di_injector component can be used in many other places, such as Quick Actions, Lightning App Builder Pages, and the Utility Bar. Check out this example page in the repo for another example.

What about my Visualforce page content, can I inject that?

Visualforce used by Actions and in Layouts can be injected in much the same way as above, with a VF page acting as the injector proxy, using the Visualforce c:di_injector component. We will skip showing what things looked like before DI, as things follow much the same general pattern as the Lightning Component approach.

The following example shows the layoutWidgetInfo page, which is again somewhat generically named to indicate it is an injector proxy and not a real page. It is this page that is referenced in the Widget object’s Layout:-

<apex:page standardController="Widget__c" extensions="di_InjectorController">
   <c:di_injector bindingName="vf_layoutWidgetInfo" parameters="{!standardController}"/>
</apex:page>

The following shows an alternative means to express binding configuration via code. The ForceApp3Module class defines the bindings for a module/package of code where the Visualforce Component that actually implements the UI is stored. Note that the binding for vf_layoutWidgetInfo points to an Apex controller class, not the actual VF component to inject. The Provider inner class actually creates the specific component (via Dynamic Visualforce).

public class ForceApp3Module extends di_Module {

    public override void configure() {

        // Example named binding to a Visualforce component (via Provider)
        bind('vf_layoutWidgetInfo').visualforceComponent().to(WidgetInfoController.Provider.class);

        // Example SObject binding (can be used by trigger frameworks, see force-di-demo-trigger)
        bind(Account.getSObjectType()).apex().sequence(20).to(CheckBalanceAccountTrigger.class);

        // Example named binding to a Lightning component
        bind('lc_actionWidgetManage').lightningComponent().to('c:widgetManager');
    }
}

NOTE: The above binding configuration module class is itself injected into the org-wide Injector by a corresponding custom metadata Binding record here. You can also see other bindings being configured in the above example; see below for more on these.

The actual implementation of the injected Visualforce Component widgetInfo looks like this:-

<apex:component controller="WidgetInfoController">
  <apex:attribute name="standardController"
     type="ApexPages.StandardController"
     assignTo="{!StandardControllerValue}" description=""/>
  <h1>Success I have been injected! {!standardController.Id}</h1>
</apex:component>

Decomposition Examples

The examples shown above, and others, are contained in the sample repo. Each of the root package directories, force-app-1, force-app-2, and force-app-3, helps illustrate how the point of injection vs the runtime binding can be split across the boundaries of a DX package, thus aiding decomposition. The force-di-trigger-demo (not shown below) also contains a sample trigger handler framework using the library’s ability to resolve multiple bindings (to trigger handlers) in a given sequence, thus supporting the best practice of a single trigger per object.

Further Background and Features

I must confess, when I started to research Java Dependency Injection (mainly via Java Guice) I was skeptical as to how much I could get done without custom annotation and reflection support in Apex. However, I am pretty pleased with the result, how it has woven in with features like Custom Metadata Types, and how the Visualforce and Lightning Component injectors have turned out. I plan to write future Wiki pages on the associated GitHub repo to share more details on the Force DI API. Meanwhile, here is a rundown of some of the more advanced features.

  • Provider Support
    Injectors by default only return one instance of the bound object, hence getInstance. Bindings that point to a class implementing the Provider interface (see inner interface) can override this, which also allows for the construction of classes that do not have default constructors or types not supported by Type.forName. This feature also works in conjunction with the ability to pass a parameter via the Apex Injector, e.g. Injector.Org.getInstance(PaymentEngine.class, someData); (see the sketch after this list).
  • Parameters
    Each of the three Injectors permits the passing of parameter/context information into the bound class or component. The examples above illustrate this.
  • Modules, Programmatic Binding Configuration and Injector Scopes
    Binding Modules group programmatic bindings and allow you to hook programmatically into the initialization of the Injector. Modules use the Fluent style interface to express bindings very clearly. The force-app-3 package in the repo uses this approach to define the bindings shown in the VF example above. You can also take a look at a worked example here of how local (one-off) Injectors can be used, and here for a more complex OO example of how conditional bindings work.
  • StandardController Passthrough
    For Visualforce Component injections, the framework’s parameter passing capabilities support passing through the instance of the StandardController from the hosting page into the injected component, as can be seen in the example above.
  • Binding Discovery by SObject vs Name
    The examples above utilize single bindings by a unique name. However, it is becoming quite common to adapt trigger frameworks to support DI and thus allow a single trigger to dynamically reach out to one or more handlers (perhaps installed in separate DX packages). This example shows how Force DI could be used in such a scenario.
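
Here is a rough sketch of a Provider-based binding, per the Provider Support bullet above. The names are hypothetical, and the exact inner interface name and signature should be checked against the Force DI source:

// Hypothetical sketch: assumes the Provider inner interface lives on di_Binding
// and exposes a single factory method receiving the caller-supplied parameter
public class PaymentEngineProvider implements di_Binding.Provider {
    public Object newInstance(Object params) {
        // Construct a type without a default constructor, using the parameter
        // passed via Injector.Org.getInstance(PaymentEngine.class, someData)
        return new PayPal((String) params); // hypothetical constructor argument
    }
}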

Conclusion

This blog has hopefully whetted your appetite to learn more! If so, head over to the repo and have a look through the samples in this blog and others. My next step is to wrap this up in a DX package to make it easier to get your hands on it; for now, download the repo and deploy via DX. I am also keen to explore what other aspects of Java Guice might make sense, such as the Linked Bindings feature.

Meanwhile, I would love feedback on the sample code and library thus far. Last but not least, I would like to give a shout out to John Daniel and Doug Ayers for their great feedback during the initial development of the library and this blog. Enjoy!

Disabling Trigger Events in Apex Enterprise Patterns

I’m proud to host my first guest blogger, Chris Mail, or Autobat as he is known on GitHub. Take it away, Chris…

How to put the safety on…

Being an architect in a professional services organisation is a funny game. Each project is either a shiny new Salesforce instance without a fingerprint on it or an unknown vault of code and configuration that we must navigate through.

I have been using the fflib pattern now for some time, and more of our teams are adopting it for our programs of work. My latest addition is something that an architect might wonder why we need: the ability to turn off triggers via a simple interface on all domains.

In an ever more complex environment, perhaps with multiple projects over time delivering iterative enhancements, I was noticing a common piece of code being developed within the Domain layer. It looked something along the lines of this:

public override void onAfterInsert()
{
    // if this is set we are already in a loop and want to exit!
    if(bProhibitAfterInsertTrigger)
    {
        return;
    }
    // down here we do something, maybe insert an Account!
}

While small and inconspicuous, it allowed our code base to become inconsistent, as there was no control over the exposure of these controlling flags and, worse, we were repeating ourselves in every domain!

The solution was simple: a fluent style API within fflib_SObjectDomain. Any code can now simply set the control flags for any domain class:

fflib_SObjectDomain.getTriggerEvent(YourDomain.class).disableAll(); // don't fire anything
fflib_SObjectDomain.getTriggerEvent(YourDomain.class).disableAllBefore();
fflib_SObjectDomain.getTriggerEvent(YourDomain.class).disableAllAfter();

fflib_SObjectDomain.getTriggerEvent(YourDomain.class).disableBeforeInsert();
fflib_SObjectDomain.getTriggerEvent(YourDomain.class).disableBeforeUpdate();
fflib_SObjectDomain.getTriggerEvent(YourDomain.class).disableBeforeDelete();

fflib_SObjectDomain.getTriggerEvent(YourDomain.class).disableAfterInsert();
fflib_SObjectDomain.getTriggerEvent(YourDomain.class).disableAfterUpdate();
fflib_SObjectDomain.getTriggerEvent(YourDomain.class).disableAfterDelete();
fflib_SObjectDomain.getTriggerEvent(YourDomain.class).disableAfterUndelete();

To enable, just call the inverse e.g. .enableAfterInsert(); etc.

While not every code base will need to use these flags, they allow you to quickly and easily control your trigger execution with a single line of code that all your development team can reuse and follow, as the sketch below illustrates.
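
As a quick illustration (Accounts being a hypothetical domain class here, and enableAll assumed per the inverse naming convention above), test setup code could temporarily suppress domain logic like this:

// Suppress all trigger events for the Accounts domain while creating test data
fflib_SObjectDomain.getTriggerEvent(Accounts.class).disableAll();
insert new Account(Name = 'Test Account'); // Accounts domain handlers will not fire
// Restore normal trigger behaviour for the code under test
fflib_SObjectDomain.getTriggerEvent(Accounts.class).enableAll();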



Apex Sharing and applying to Apex Enterprise Patterns

Apex Sharing can be a bit of a mystery to new developers, as well as seasoned ones from other platforms. This blog is not for those wanting to understand sharing as such; there are plenty of excellent articles and Salesforce docs on that. Here I wanted to talk about how I first came to understand it and how it fits into Apex Enterprise Patterns.

I recall one really basic thing that took me by surprise: the name, Sharing? Of course, this is an end-user oriented way of describing what, as an engineer, I effectively understood as row-level security. I was also blown away to learn that this applies in and outside of code, for example when reporting is used, very cool! Row-level security is certainly for me a more accurate way to describe it, and it certainly helps when I have been talking to others who are new to the platform but have experience elsewhere.

The second thing I learned is that in order to control it, it needs to be considered in the way one annotates code at design time, rather than being a default runtime or configured-at-runtime context. Since sharing is not enabled by default in Apex (except for Anonymous Apex contexts), it needs to be enabled via opt-in by the developer. Salesforce helps remind us of this through tools like the Salesforce Security Scanner and the best practices here, well worth a read.

You may have noticed that the Apex Enterprise Patterns classes providing implementations of your Service layer always have with sharing specified. This sets the default context for all code, in the Domain, Selector or other classes, that is executed from then on. Such classes generally do not need to, and should not, qualify either the with sharing or without sharing keywords.

global with sharing class OpportunitiesService 
{		
	global static void applyDiscounts(Set<Id> opportunityIds, Decimal discountPercentage)
	{
		// This code and any it calls runs as 'with sharing'
	}
}

So what happens if you really want to run without sharing (great article here on the reasons for this)? Do you apply it to your Domain or Selector class definition? Well, actually neither, since not all the code in these classes may warrant sharing being disabled. What I prefer to do is keep the execution of code running in this mode as short and contained as possible, to avoid any inadvertent execution of other code in this mode.

The basic approach is to leverage an inner class that contains just the code that needs to run without sharing. Typically this code would run in the Selector layer, though it can be used elsewhere, inside a service method implementation or a domain class method. The point is it’s scoped to a method or specific execution path.

public class OpportunitiesSelector extends fflib_SObjectSelector
{
    public List<Opportunity> selectById(Set<Id> idSet) {
        // This method simply runs in the sharing context of the caller
        // ...
        return opportunities;
    }

    public List<OpportunityInfo> selectOpportunityInfo(Set<Id> idSet) {	
        // Explicitly run the query in a 'without sharing' context
        return new SelectOpportunityInfo().selectOpportunityInfo(this, idSet);
    }

    private without sharing class SelectOpportunityInfo {
        public List<OpportunitiesSelector.OpportunityInfo> 
                 selectOpportunityInfo(OpportunitiesSelector selector, Set<Id> idSet) {
            // Execute the query as normal
            // ...
           return opportunityInfos;				
        }
    }
}

So do we still need to specify with sharing elsewhere? Well yes, on controllers for sure it is still good practice, and indeed Selectors can end up being called from these. I personally also consider any class that is invoked as an Apex entry point, such as Invocable Methods, Batch Apex, Scheduled Apex etc, to be in this category.

If you’re following a service-oriented design, most of these entry points delegate to the Service layer, so it feels like you’re doubling up at times, but that’s no bad thing where security is concerned. Finally, keep in mind that if you choose to expose your Service layer as an API, it is equally important to ensure the default sharing mode is enabled regardless of what mode the caller is running in.

The general approach here is to enable sharing, then make the code, developer and business/solution analyst justify why it needs to be switched off for a system-level operation that requires it. If you put aside the Apex Enterprise Patterns, this is in fact not that different from the general guideline of having with sharing on all your controllers; the main difference is that by putting it on your service layer, you’re ensuring not just your controller entry points are covered.

Pillars of Enterprise Development

During 2014 I authored my first full book, entitled Force.com Enterprise Architecture; it was a long process taking over 8 months, so if you’re considering such a thing yourself, certainly be prepared for a big investment! The opening paragraph is as follows…

Enterprise organisations have complex processes and integration requirements that typically span multiple locations around the world. They seek out the best in class applications that support their needs today and in the future. The ability to adapt an application to their practices, terminology, and integrations with other existing applications or processes is key to them. They invest as much in your application as they do in you as the vendor capable of delivering an application strategy that will grow with them.

Motivation and background to the book

The Salesforce community is diverse, consisting of package developers, in-house developers and consultants, each with varying degrees of technical knowledge. While Salesforce documentation can be found to address the needs of each of these types of developers, it is often more of a reference style in nature and can be hard to contextualise, meaning a clear path to an architecture for enterprise developers to refer to can be hard to find and piece together.

I wanted the book to act as a flow for all that a developer needs to get the best out of the platform, while laying down a strong foundation of development practices and patterns, to allow their application to scale and evolve at the same rapid pace as the platform itself (see some examples of where this has been achieved below). Existing enterprise Java or .Net developers considering the platform will find some well-known enterprise patterns. There has been a significant increase in architecture and best practice questions over the last 2-3 years on Salesforce forums such as StackExchange and Salesforce Community Answers.

Pillars of Enterprise Development

When planning the outline for the book, and thinking about Enterprise development on the platform in general, I had three core beliefs in mind that I wanted to seed within the book.

  • Embrace the whole Platform. The first tenet of the Force.com platform is to combine the power of the declarative programming style with the traditional source code based programming style. Doing so ensures not only that the developer is as effective as possible, focusing on coding only where needed, but also that the resulting application has a strong ‘native’ feel to it, giving its end users access to the platform’s rich set of customization, configuration and integration capabilities that enterprise customers demand. I wanted the reader to have a keen awareness of the benefits of being ‘native’ on the platform, to keep it in mind always, and to realize the benefits this combined development approach can bring to end users.

  • Build Strong Foundations. Enterprise applications are expected to serve their customers for many years to come, as customers build solutions and processes around them, which become a critical part of their businesses. As applications grow in complexity, the code base especially needs to not buckle under the pressure, be that from the addition of features or the general maintenance of existing ones, made by existing or new developer resources. A strong foundation will ensure that the code base endures this type of change with minimal impact on the rest of the system and its users. I wanted the reader to get a strong sense of the meaning of Separation of Concerns and how it applies to enterprise applications built on the Force.com platform.

    • Lightning Experience was obviously not around at the time and Lightning Components were only in Pilot; now they are both GA, but are Services still as applicable to Lightning controllers? You bet!

  • Your Application as a Platform. Enterprise customers demand high levels of integration and customization from your application, as they either repurpose or consume the application within a larger business process or integration. So I wanted the reader to gain an understanding of the Force.com platform features they can leverage to ensure that these aspects are considered from an architecture perspective and thus baked into each new application function, such that the resulting application becomes, in essence, a part of the platform itself.

    • Lightning Process Builder was but a gleam in someone’s eye a year ago, but can Services be exposed via Invocable Methods to this tool? You bet!

I’m so pleased and proud to see some great reviews of the book and also some great feedback on Twitter. If you’re interested in taking a deeper look, check out the sidebar of my blog; you can get your hands on it both digitally and in good old paperback! Enjoy!