Andy in the Cloud

From BBC Basic to Force.com and beyond…



FinancialForce Apex Common Community Updates

This short blog highlights a batch of new features recently merged to the FinancialForce Apex Common library aka fflib. In addition to the various Dreamforce and blog resources linked from the repo, fans of Trailhead can also find modules relating to the library here and here. But please read this blog first before heading out to the trails to hunt down badges! It’s really pleasing to see it continue to get great contributions so here goes…

Added methods for detecting changed records with given fields in the Domain layer (fflib_SObjectDomain)

First up is a great new optimization feature for your Domain class methods from Nathan Pepper aka MayTheSForceBeWithYou, based on a suggestion by Daniel Hoechst. Where applicable, it is good optimization practice to compare the old and new values of the fields relating to the processing you are doing in your Domain methods, to avoid unnecessary overheads. The new fflib_SObjectDomain.getChangedRecords method can be used as an alternative to the Records property to return just the records that have changed, based on the field list passed to the method.

// Returns a list of Account where the Name or AnnualRevenue has changed
List<Account> accounts =
  (List<Account>) getChangedRecords(
     new List<SObjectField> { Account.Name, Account.AnnualRevenue });
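
For context, here is a minimal sketch of calling getChangedRecords from within a Domain trigger method, so the more expensive work only runs when the relevant fields have actually changed. The domain class and its processing are hypothetical, and the usual inner Constructor class used for trigger binding is omitted for brevity:

public class Accounts extends fflib_SObjectDomain
{
    public Accounts(List<Account> records) { super(records); }

    public override void onAfterUpdate(Map<Id, SObject> existingRecords)
    {
        // Only act on Accounts whose Name or AnnualRevenue actually changed
        List<Account> changedAccounts = (List<Account>) getChangedRecords(
            new List<SObjectField> { Account.Name, Account.AnnualRevenue });
        if (changedAccounts.isEmpty()) {
            return; // nothing relevant changed, skip further processing
        }
        // ... process just the changed records here
    }
}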

Supporting EventBus.publish(list<SObject>) in Unit of Work (fflib_SObjectUnitOfWork)

Platform Events are becoming ever more popular in many situations. If you regard them as logically part of the unit of work your code is performing, this enhancement from Chris Mail is for you! You can now register platform events to be published based on various scenarios. Chris has also provided bulkified versions of the following methods, nice!

    /**
     * Register a newly created SObject (Platform Event) instance to be published when commitWork is called
     *
     * @param record A newly created SObject (Platform Event) instance to be inserted during commitWork
     **/
    void registerPublishBeforeTransaction(SObject record);
    /**
     * Register a newly created SObject (Platform Event) instance to be published when commitWork has successfully
     * completed
     *
     * @param record A newly created SObject (Platform Event) instance to be inserted during commitWork
     **/
    void registerPublishAfterSuccessTransaction(SObject record);
    /**
     * Register a newly created SObject (Platform Event) instance to be published when commitWork has caused an error
     *
     * @param record A newly created SObject (Platform Event) instance to be inserted during commitWork
     **/
    void registerPublishAfterFailureTransaction(SObject record);
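
As a minimal usage sketch (the event object and its field are hypothetical, and it assumes the Application.UnitOfWork factory is configured per the Application pattern), registering an event to publish only once the work commits successfully might look like this:

fflib_ISObjectUnitOfWork uow = Application.UnitOfWork.newInstance();
uow.registerNew(new Account(Name = 'Acme'));
// Only publish the event if commitWork completes without error
uow.registerPublishAfterSuccessTransaction(
    new Order_Shipped__e(OrderNumber__c = '12345'));
uow.commitWork();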

Add custom DML for Application.UnitOfWork.newInstance call (fflib_Application)

It’s been possible for a while now to override the default means by which the fflib_SObjectUnitOfWork.commitWork method performs DML operations (for example if you wanted to do some additional pre/post processing or logging). However, if you have been using the Application class pattern to access your UOW (shorthand and helps with mocking) then this has not been possible. Thanks to William Velzeboer you can now get the best of both worlds!

fflib_SObjectUnitOfWork.IDML myDML = new MyCustomDMLImpl();
fflib_ISObjectUnitOfWork uow = Application.UnitOfWork.newInstance(myDML);
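
For illustration, a custom IDML implementation might look something like the sketch below, which simply adds logging around each operation. The method names are assumed to match the fflib_SObjectUnitOfWork.IDML interface at the time of writing, so check the interface in your version of the library:

public class MyCustomDMLImpl implements fflib_SObjectUnitOfWork.IDML
{
    public void dmlInsert(List<SObject> objList) {
        System.debug('Inserting ' + objList.size() + ' records');
        insert objList;
    }
    public void dmlUpdate(List<SObject> objList) {
        System.debug('Updating ' + objList.size() + ' records');
        update objList;
    }
    public void dmlDelete(List<SObject> objList) {
        System.debug('Deleting ' + objList.size() + ' records');
        delete objList;
    }
    public void eventPublish(List<SObject> objList) {
        System.debug('Publishing ' + objList.size() + ' platform events');
        EventBus.publish(objList);
    }
}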

Added methods to Unit of Work to be able to register record for upsert (fflib_SObjectUnitOfWork)

Unit Of Work is a very popular class and receives yet another enhancement in this batch, from Yury Bondarau. These two methods allow you to register records that will either be inserted or updated, as automatically determined by whether the records have an Id populated or not, aka a UOW upsert.

    /**
     * Register a new or existing record to be inserted or updated during the commitWork method
     *
     * @param record An new or existing record
     **/
    void registerUpsert(SObject record);
    /**
     * Register a list of mix of new and existing records to be upserted during the commitWork method
     *
     * @param records A list of mix of existing and new records
     **/
    void registerUpsert(List<SObject> records);
    /**
     * Register an existing record to be deleted during the commitWork method
     *
     * @param record An existing record
     **/
    void registerDeleted(SObject record);
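
A minimal usage sketch (the record values are purely illustrative) mixing a new and an existing record in a single upsert registration:

fflib_ISObjectUnitOfWork uow = Application.UnitOfWork.newInstance();
Account newAccount = new Account(Name = 'Acme');       // no Id, will be inserted
Account existingAccount = [SELECT Id, Name FROM Account LIMIT 1];
existingAccount.Name = 'Acme (renamed)';               // has an Id, will be updated
uow.registerUpsert(new List<SObject> { newAccount, existingAccount });
uow.commitWork();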

Alleviates unit-test exception when Org’s email service is limited

Finally, long-term mega fan of the library John Storey comes in with an ingenious fix to an Apex test failure which occurs when the org's email deliverability 'Access Level' setting is not 'All Email'. John leveraged an extensibility feature in the Unit Of Work to avoid the test being dependent on this org config, all while not losing any code coverage, sweet!

Last but not least, thank you Christian Coleman for fixing those annoying typos in the docs! 🙂



Managing Dependency Injection within Salesforce

When developing within Salesforce, dependencies are formed in many ways, not just those made explicitly when writing code, but also those formed by using declarative tools, such as defining Actions and Layouts. This blog introduces a new open source library I have been working on called Force DI. The goal is to simplify and, more importantly, consolidate where and how to configure at runtime certain dependencies between Apex, Visualforce or Lightning component code.

Forming dependencies at runtime instead of explicitly during development can be very advantageous. So whether you are attempting to decompose a large org into multiple DX packages or building a highly configurable solution, hopefully, you will find this library useful!

So what does the DI bit stand for?

The DI bit in Force DI stands for Dependency Injection, which is a form of IoC (Inversion of Control). Both are well-established patterns for providing the runtime glue between two points, basically the bit in the middle. Let’s start with an Apex example. In order to use DI, you need to forgo the use of the “new” operator at the point where you want to do the injection. For example, consider the following code:-

PaymentEngine engine = new PayPal();

In the above example, you are explicitly expressing a dependency, which not only means you have to deploy or package all your payment engines together, but also that you have hardcoded a finite set you support and thus forgone extensibility. With Force DI you can instead write:-

PaymentEngine engine = (PaymentEngine) di_Injector.Org.getInstance(PaymentEngine.class);

How does it know which class to instantiate then?

What's happening here is that the Injector class is using binding configuration (also dynamically discovered) to find out which class to actually instantiate. This binding configuration can be admin controlled, packaged (e.g. "PayPal Package") and/or defined dynamically via code. Setting up binding config via code enables dynamic binding by reading other configuration (e.g. the user's payment preference) and binding accordingly.
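
For example, a code-configured binding might look something like the following sketch, which mirrors the fluent Module style shown later in this post (the module class, binding name and the Stripe implementation are purely illustrative):

public class PaymentsModule extends di_Module {

    public override void configure() {
        // e.g. decide the implementation from the user's payment preference
        Boolean preferPayPal = true;
        if (preferPayPal) {
            bind('PaymentEngine').apex().to(PayPal.class);
        } else {
            // Stripe is a hypothetical alternative implementation
            bind('PaymentEngine').apex().to(Stripe.class);
        }
    }
}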

The key goal of DI is that the calling code does not concern itself with how an instance is obtained, only what it does with it. The following shows how a declarative binding is expressed via the library's Binding Custom Metadata Type:-

If this all seems a bit indirect, that’s the point! Because of this indirection, you can now choose to deploy/package other payment gateway implementations independently from each other as well as be sure that everywhere your other code needs a PaymentEngine the implementation is resolved consistently. For a more advanced OOP walkthrough see the code sample here.

Can this help me with other kinds of dependencies?

Yes! Let's take the example of a Lightning Component used as an Action Override. Typically you would create a Lightning Component and associate it directly with an action override. However, this means that the object metadata, action override and the Lightning code (as well as whatever is dependent on that) must travel around together, rather than, for example, in separate DX packages. It also means that if you want to offer different variations of this action you would need to code all of that into the single component as well.

As before let’s review what the Lightning Component Action Override looks like without DI:-

<aura:component implements="lightning:actionOverride,force:hasSObjectName">
   <lightning:card title="Widget">
     <p class="slds-p-horizontal_small">Custom UI to Create a Widget ({!v.sObjectName})</p>
   </lightning:card>
</aura:component>

This component (and all its dependencies) would be directly referenced in the Action Override below:-

Now let us take a look at this again but using the Lightning c:injector component in its place:-

<aura:component implements="lightning:actionOverride,force:hasSObjectName">
   <c:di_injector bindingName="lc_actionWidgetNew">
      <c:di_injectorAttribute name="sObjectName" value="{!v.sObjectName}"/>
   </c:di_injector>
</aura:component>

To make things clearer when reviewing Lightning Components in the org, the above component follows a generic naming convention, such as actionWidgetNew. It is this proxy component, not the original widget component, that is bound to the Action Override, which now looks like this:-

The binding configuration looks like this:-

Finally, the injected Lightning Component widgetWizard looks like this:-

<aura:component>
   <aura:attribute name="sObjectName" type="String"/>
   <lightning:card title="Widget">
     <p class="slds-p-horizontal_small">Custom UI to Create a Widget ({!v.sObjectName})</p>
   </lightning:card>
</aura:component>

Note: You have the ability to pass context through to the bound Lightning Component just as the sObjectName attribute value was passed above. The c:injector component can be used in many other places such as Quick Actions, Lightning App Builder Pages, and Utility Bar. Check out this example page in the repo for another example.

What about my Visualforce page content, can I inject that?

Visualforce used by Actions and in Layouts can be injected in much the same way as above, with a VF page acting as the injector proxy using the Visualforce c:injector component. We will skip showing what things looked like before DI, as things follow much the same general pattern as the Lightning Component approach.

The following example shows the layoutWidgetInfo page, which is again somewhat generically named to indicate it's an injector proxy and not a real page. It is this page that is referenced in the Widget object's Layout:-

<apex:page standardController="Widget__c" extensions="di_InjectorController">
   <c:di_injector bindingName="vf_layoutWidgetInfo" parameters="{!standardController}"/>
</apex:page>

The following shows an alternative means to express binding configuration via code. The ForceApp3Module class defines the bindings for a module/package of code where the Visualforce Component that actually implements the UI is stored. Note that the binding for vf_layoutWidgetInfo points to an Apex class (the controller's Provider inner class), not the actual VF component to inject; the Provider actually creates the specific component (via Dynamic Visualforce):-

public class ForceApp3Module extends di_Module {

    public override void configure() {

        // Example named binding to a Visualforce component (via Provider)
        bind('vf_layoutWidgetInfo').visualforceComponent().to(WidgetInfoController.Provider.class);

        // Example SObject binding (can be used by trigger frameworks, see force-di-demo-trigger)
        bind(Account.getSObjectType()).apex().sequence(20).to(CheckBalanceAccountTrigger.class);

        // Example named binding to a Lightning component
        bind('lc_actionWidgetManage').lightningComponent().to('c:widgetManager');
    }
}

NOTE: The above binding configuration module class is itself injected into the org-wide Injector by a corresponding custom metadata Binding record here. You can also see in the above example other bindings being configured, see below for more on this.

The actual implementation of the injected Visualforce Component widgetInfo looks like this:-

<apex:component controller="WidgetInfoController">
  <apex:attribute name="standardController"
     type="ApexPages.StandardController"
     assignTo="{!StandardControllerValue}" description=""/>
  <h1>Success I have been injected! {!standardController.Id}</h1>
</apex:component>

Decomposition Examples

The examples shown above, and others, are contained in the same repo as the library (for now). Each of the root package directories, force-app-1, force-app-2, and force-app-3, helps illustrate how the point of injection vs the runtime binding can be split across the boundaries of a DX package, thus aiding decomposition. The force-di-trigger-demo (not shown below) also contains a sample trigger handler framework using the library's ability to resolve multiple bindings (to trigger handlers) in a given sequence, thus supporting the best practice of a single trigger per object.

Further Background and Features

I must confess when I started to research Java Dependency Injection (mainly via Java Guice) I was skeptical as to how much I could get done without custom annotation and reflection support in Apex. However, I am pretty pleased with the result and how it has woven in with features like Custom Metadata Types, and how the Visualforce and Lightning Component injectors have turned out. I plan to write future Wiki pages on the associated GitHub repo to share more details on the Force DI API. Meanwhile here is a rundown of some of the more advanced features.

  • Provider Support
    Injectors by default only return one instance of the bound object, hence getInstance. Bindings that point to a class implementing the Provider interface (see inner interface) can override this, which also allows for the construction of classes that do not have default constructors or types not supported by Type.forName. This feature also works in conjunction with the ability to pass a parameter via the Apex Injector, e.g. Injector.Org.getInstance(PaymentEngine.class, someData); a sketch follows after this list.
  • Parameters
    Each of the three Injectors permits the passing of parameter/context information into the bound class or component. The examples above illustrate this.
  • Modules, Programmatic Binding Configuration and Injector Scopes
    Binding Modules group programmatic bindings and allow you to hook programmatically into the initialization of the Injector. Modules use the Fluent style interface to express bindings very clearly. The force-app-3 package in the repo uses this approach to define the bindings shown in the VF example above. You can also take a look at a worked example here of how local (one-off) Injectors can be used, and here for a more complex OO example of how conditional bindings work.
  • StandardController Passthrough
    For Visualforce Component injections, the framework's parameter-passing capabilities support passing through the instance of the StandardController from the hosting page into the injected component, as can be seen in the example above.
  • Binding Discovery by SObject vs Name
    The examples above utilize single bindings by a unique name. However, it is becoming quite common to adapt trigger frameworks to support DI and thus allow a single trigger to dynamically reach out to one or more handlers (perhaps installed in separate DX packages). This example shows how Force DI could be used in such a scenario.
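
As a sketch of the Provider support mentioned above, a provider bound in place of a concrete class might look like the following. The interface name, the newInstance signature and the PayPal constructor shown are assumptions for illustration only, so check the library's inner Provider interface before relying on them:

public class PayPalProvider implements di_Binding.Provider {
    // Assumed signature: receives the optional parameter passed to getInstance
    public Object newInstance(Object params) {
        // Construct a class without a default constructor (hypothetical constructor)
        return new PayPal((String) params);
    }
}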

Conclusion

This blog has hopefully whetted your appetite to learn more! If so, head over to the repo and have a look through the samples in this blog and others. My next step is to wrap this up in a DX package to make it easier to get your hands on it; for now, download the repo and deploy via DX. I am also keen to explore what other aspects of Java Guice might make sense, such as the Linked Bindings feature.

Meanwhile, I would love feedback on the sample code and library thus far. Last but not least I would like to give a shout out to John Daniel and Doug Ayers for their great feedback during the initial development of the library and this blog. Enjoy!

 



Disabling Trigger Events in Apex Enterprise Patterns

I'm proud to host my first guest blogger, Chris Mail, or Autobat as he is known on GitHub. Take it away Chris…

How to put the safety on…

Being an architect in a professional services organisation is a funny game. Each project is either a shiny new Salesforce instance without a fingerprint on it or an unknown vault of code and configuration that we must navigate through.

I have been using the fflib pattern now for some time, and more of our teams are adopting it for our programs of work. My latest addition is something an architect might wonder why we need: the ability to turn off triggers via a simple interface on all domains.

In an ever-growing, complex environment, perhaps with multiple projects delivering iterative enhancements over time, I was noticing a common piece of code being developed within the Domain layer. It looked something along the lines of this:

public override void onAfterInsert()
{
    // if this is set we are already in a loop and want to exit!
    if(bProhibitAfterInsertTrigger)
    {
        return;
    }
    // down here we do something, maybe insert an Account!
}

While small and inconspicuous, it allowed our code base to become inconsistent, as there was no control over the exposure of these controlling flags and, worse, we were repeating ourselves in every domain!

The solution was simple: a fluent-style API within fflib_SObjectDomain. Any code can now simply set the control flags for any domain class:

fflib_SObjectDomain.getTriggerEvent(YourDomain.class).disableAll(); // don't fire anything
fflib_SObjectDomain.getTriggerEvent(YourDomain.class).disableAllBefore();
fflib_SObjectDomain.getTriggerEvent(YourDomain.class).disableAllAfter();

fflib_SObjectDomain.getTriggerEvent(YourDomain.class).disableBeforeInsert();
fflib_SObjectDomain.getTriggerEvent(YourDomain.class).disableBeforeUpdate();
fflib_SObjectDomain.getTriggerEvent(YourDomain.class).disableBeforeDelete();

fflib_SObjectDomain.getTriggerEvent(YourDomain.class).disableAfterInsert();
fflib_SObjectDomain.getTriggerEvent(YourDomain.class).disableAfterUpdate();
fflib_SObjectDomain.getTriggerEvent(YourDomain.class).disableAfterDelete();
fflib_SObjectDomain.getTriggerEvent(YourDomain.class).disableAfterUndelete();

To enable, just call the inverse e.g. .enableAfterInsert(); etc.
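
For example, here is a minimal sketch of suppressing a domain's trigger logic while setting up reference data in a test. The Accounts domain class is hypothetical, and it assumes enableAll() is the inverse of disableAll() per the pattern above:

// Suppress all trigger logic for the Accounts domain while creating reference data
fflib_SObjectDomain.getTriggerEvent(Accounts.class).disableAll();
insert new Account(Name = 'Reference data, no domain side effects');
// Re-enable the domain's trigger logic for the rest of the test
fflib_SObjectDomain.getTriggerEvent(Accounts.class).enableAll();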

While not every code base will need to use these flags, they allow you to control quickly and easily your trigger execution with a single line of code that all your development team can reuse and follow.



Apex Sharing and applying to Apex Enterprise Patterns

Apex Sharing can be a bit of a mystery to new developers, as well as seasoned ones from other platforms. This blog is not for those wanting to understand sharing as such, there are plenty of excellent articles and Salesforce docs on that. Here I wanted to talk about how I first came to understand it and how it fits into Apex Enterprise Patterns.

I recall one really basic thing that took me by surprise: the name, Sharing. Of course, this is an end-user way of describing what, as an engineer, I effectively understood as row-level security. I was also blown away to learn that this applies both inside and outside of code, for example when reporting is used, very cool! Row-level security is certainly, for me, a more accurate way to describe it, and certainly helps when I am talking to others who are new to the platform but have experience elsewhere.

The second thing I learned is that in order to control it, it has to be considered in the way one annotates code at design time, rather than being a default runtime or runtime-configured context. Since sharing is not enabled by default in Apex (except for Anonymous Apex contexts), it needs to be enabled via opt-in by the developer. Salesforce helps remind us of this through tools like the Salesforce Security Scanner and best practices here, well worth a read.

You may have noticed that the Apex Enterprise Patterns classes providing implementations of your Service layer always have with sharing specified. This sets the default context for all code, in the Domain, Selector or other classes that are executed from then on, to run in this mode. Such classes do not, and generally should not, need to qualify the with sharing or without sharing keywords either.

global with sharing class OpportunitiesService
{
    global static void applyDiscounts(Set<Id> opportunityIds, Decimal discountPercentage)
    {
        // This code and any it calls runs as 'with sharing'
    }
}

So what happens if you really want to run without sharing (there is a great article here on reasons for this)? Do you apply it to your Domain or Selector class definition? Well, actually neither, since not all the code in these classes may warrant sharing being disabled, for example. What I prefer to do is keep the execution of code running in this mode as short and contained as possible, to avoid any inadvertent execution of other code running in this mode.

The basic approach is to leverage an inner class that contains just the code that needs to run without sharing. Typically this code would live in the Selector layer, though it can be used elsewhere, inside a service method implementation or a domain class method. The point is it's scoped to a method or specific execution path.

public class OpportunitiesSelector extends fflib_SObjectSelector
{
    public List<Opportunity> selectById(Set<Id> idSet) {
        // This method simply runs in the sharing context of the caller
        // ...
        return opportunities;
    }

    public List<OpportunityInfo> selectOpportunityInfo(Set<Id> idSet) {	
        // Explicitly run the query in a 'without sharing' context
        return new SelectOpportunityInfo().selectOpportunityInfo(this, idSet);
    }

    private without sharing class SelectOpportunityInfo {
        public List<OpportunitiesSelector.OpportunityInfo> 
                 selectOpportunityInfo(OpportunitiesSelector selector, Set<Id> idSet) {
            // Execute the query as normal
            // ...
           return opportunityInfos;				
        }
    }
}

So do we still need to specify with sharing elsewhere? Well yes, on controllers for sure it is still good practice, indeed Selectors can end up being called from these. I personally also consider any class that is invoked as an Apex entry point, such as Invocable Methods, Batch Apex, Scheduled Apex etc., to be in this category.

If you're following a service-orientated design, most of these entry points delegate to the Service layer, so it feels like you're doubling up at times, but that's no bad thing where security is concerned. Finally, keep in mind that if you choose to expose your Service layer as an API, it feels equally important to ensure the default sharing mode is enabled regardless of what mode the caller is running in.

The general approach here is to enable sharing, then make the code, developer and business/solution analyst justify why it needs to be switched off for a system-level operation that requires it. If you put aside the Apex Enterprise Patterns, this is in fact not that different from the general guideline of having with sharing on all your controllers; the main difference here is that by putting it on your service layer, you're ensuring not just your controller entry points are covered.

 



Pillars of Enterprise Development

During 2014 I authored my first full book, entitled Force.com Enterprise Development. It was a long process taking over 8 months, so certainly if you're considering such a thing yourself, be prepared for a big investment! The opening paragraph is as follows…

Enterprise organisations have complex processes and integration requirements that typically span multiple locations around the world. They seek out the best in class applications that support their needs today and in the future. The ability to adapt an application to their practices, terminology, and integrations with other existing applications or processes is key to them. They invest as much in your application as they do in you as the vendor capable of delivering an application strategy that will grow with them.

Motivation and background to the book

The Salesforce community is diverse, consisting of package developers, in-house developers and consultants, each with varying degrees of technical knowledge. While Salesforce documentation can be found to address the needs of each of these types of developers, it is often more of a reference style in nature and can be hard to contextualise, meaning a clear path to an architecture for enterprise developers to refer to can be hard to find and piece together.

I wanted the book to act as a flow for all that a developer needs to get the best out of the platform, while laying down a strong foundation of development practices and patterns, to allow their application to scale and evolve at the same rapid pace as the platform itself (see some examples of where this has been achieved below). Existing enterprise Java or .Net developers considering the platform will find some well-known enterprise patterns. There has been a significant increase in architecture and best practice questions over the last 2-3 years on Salesforce forums such as StackExchange and Salesforce Community Answers.

Pillars of Enterprise Development

When planning the outline for the book and thinking about Enterprise development on the platform in general, I had three core beliefs in mind that I wanted to seed within the book.

  • Embrace the whole Platform: The first tenet of the Force.com platform is to combine the power of the declarative programming style with the traditional source code based programming style. Doing so ensures not only that the developer is as effective as possible, focusing on coding only where needed, but also that the resulting application has a strong 'native' feel to it, giving its end users access to the platform's rich set of customization, configuration and integration capabilities that enterprise customers demand. I wanted the reader to have a key awareness of the benefits of being 'native' on the platform, to keep it in mind always and realize the benefits this combined development approach can bring to end users.

  • Build Strong Foundations: Enterprise applications are expected to serve their customers for many years to come, as customers build solutions and processes around them, which become a critical part of their businesses. As applications grow in complexity, the code base especially needs to not buckle under the pressure, be that the addition of features or general maintenance of existing features, made by existing or new developer resources. A strong foundation will ensure that the code base endures this type of change with minimal impact on the rest of the system and its users. I wanted the reader to get a strong sense of the meaning of Separation of Concerns and how it applies to enterprise applications built on the Force.com platform.

    • Lightning Experience was obviously not around at the time and Lightning Components were only in Pilot; now they are both GA, but are Services still as applicable to Lightning controllers? You bet!

  • Your Application as Platform: Enterprise customers demand high levels of customization and integration from your application, as they either repurpose or consume the application within a larger business process or integration. So I wanted the reader to gain an understanding of the Force.com platform features they can leverage to ensure that these aspects are considered from an architecture perspective and thus baked into each new application function, such that the resulting application becomes, in essence, a part of the platform itself.

    • Lightning Process Builder was but a gleam in someone's eye a year ago, but can Services be exposed via Invocable Methods to this tool? You bet!

I'm so pleased and proud to see some great reviews of the book and also some great feedback on Twitter. If you're interested in taking a deeper look, check out the sidebar of my blog, you can get your hands on it both digitally and in good old paperback! Enjoy!