Andy in the Cloud

From BBC Basic to Force.com and beyond…



Managing Dependency Injection within Salesforce

When developing within Salesforce, dependencies are formed in many ways, not just explicitly when writing code, but also through the use of declarative tools, such as defining Actions and Layouts. This blog introduces a new open source library I have been working on called Force DI. The goal is to simplify and, more importantly, consolidate where and how you configure certain runtime dependencies between Apex, Visualforce or Lightning Component code.

Forming dependencies at runtime instead of explicitly during development can be very advantageous. So whether you are attempting to decompose a large org into multiple DX packages or build a highly configurable solution, hopefully you will find this library useful!

So what does the DI bit stand for?

The DI bit in Force DI stands for Dependency Injection, which is a form of IoC (Inversion of Control). Both are well-established patterns for providing the runtime glue between two points, basically the bit in the middle. Let’s start with an Apex example. In order to use DI, you need to forgo the use of the “new” operator at the point where you want to do the injection. For example, consider the following code:-

PaymentEngine engine = new PayPal();

In the above example, you are explicitly expressing a dependency, which not only means you have to deploy or package all your payment engines together, but also that you have hardcoded a finite set of supported engines and thus forgone extensibility. With Force DI you can instead write:-

PaymentEngine engine = (PaymentEngine) di_Injector.Org.getInstance(PaymentEngine.class);

How does it know which class to instantiate then?

What's happening here is that the Injector class is using binding configuration (also dynamically discovered) to determine which class to actually instantiate. This binding configuration can be admin controlled, packaged (e.g. a "PayPal Package") and/or defined dynamically via code. Setting up binding configuration via code enables dynamic binding, by reading other configuration (e.g. the user's payment preference) and binding accordingly.
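
For illustration, here is a minimal sketch of such a code-based binding (the PaymentEngine, PayPal and Stripe classes are hypothetical, and it assumes the Injector resolves Type-based requests by the type's name; the fluent binding API is covered in more detail later in this post):-

public class PaymentModule extends di_Module {
    public override void configure() {
        // Hypothetical user preference, e.g. read from a custom setting or user field
        String preference = 'PayPal';
        // Bind the PaymentEngine type name to the preferred implementation
        bind(PaymentEngine.class.getName()).apex()
            .to(preference == 'PayPal' ? PayPal.class : Stripe.class);
    }
}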

The key goal of DI is that the calling code does not concern itself with how an instance is obtained, only what it does with it. The following shows how a declarative binding is expressed via the library's Binding Custom Metadata Type:-

[Screenshot: Binding Custom Metadata Type record]

If this all seems a bit indirect, that’s the point! Because of this indirection, you can now choose to deploy/package other payment gateway implementations independently from each other as well as be sure that everywhere your other code needs a PaymentEngine the implementation is resolved consistently. For a more advanced OOP walkthrough see the code sample here.

Can this help me with other kinds of dependencies?

Yes! Let's take the example of a Lightning Component used as an Action Override. Typically you would create a Lightning Component and associate it directly with the action override. However, this means that the object metadata, action override and the Lightning code (as well as whatever is dependent on that) must travel around together, rather than, for example, live in separate DX packages. It also means that if you want to offer different variations of this action, you would need to code all of them into the single component as well.

As before let’s review what the Lightning Component Action Override looks like without DI:-

<aura:component implements="lightning:actionOverride,force:hasSObjectName">
   <lightning:card title="Widget">
     <p class="slds-p-horizontal_small">Custom UI to Create a Widget ({!v.sObjectName})</p>
   </lightning:card>
</aura:component>

This component (and all its dependencies) would be directly referenced in the Action Override below:-

[Screenshot: Action Override referencing the component directly]

Now let us take a look at this again, but using the Lightning c:di_injector component in its place:-

<aura:component implements="lightning:actionOverride,force:hasSObjectName">
   <c:di_injector bindingName="lc_actionWidgetNew">
      <c:di_injectorAttribute name="sObjectName" value="{!v.sObjectName}"/>
   </c:di_injector>
</aura:component>

To make things clearer when reviewing Lightning Components in the org, the above proxy component follows a generic naming convention, such as actionWidgetNew. It is this proxy component, not the original one, that is bound to the Action Override, which now looks like this:-

[Screenshot: Action Override referencing the actionWidgetNew proxy component]

The binding configuration looks like this:-

[Screenshot: Binding Custom Metadata record for lc_actionWidgetNew]

Finally, the injected Lightning Component widgetWizard looks like this:-

<aura:component>
   <aura:attribute name="sObjectName"type="String"/>
   <lightning:card title="Widget">
     <p class="slds-p-horizontal_small">Custom UI to Create a Widget ({!v.sObjectName})</p>
   </lightning:card>
</aura:component>

Note: You have the ability to pass context through to the bound Lightning Component, just as the sObjectName attribute value was passed above. The c:di_injector component can be used in many other places, such as Quick Actions, Lightning App Builder Pages, and the Utility Bar. Check out this example page in the repo for another example.

What about my Visualforce page content can I inject that?

Visualforce used by Actions and in Layouts can be injected in much the same way as above, with a VF page acting as the injector proxy, using the Visualforce c:di_injector component. We will skip showing what things looked like before DI, as they follow much the same general pattern as the Lightning Component approach.

The following example shows the layoutWidgetInfo page, which is again somewhat generically named to indicate it's an injector proxy and not a real page. It is this page that is referenced in the Widget object's Layout:-

<apex:page standardController="Widget__c" extensions="di_InjectorController">
   <c:di_injector bindingName="vf_layoutWidgetInfo" parameters="{!standardController}"/>
</apex:page>

The following shows an alternative means to express binding configuration, via code. The ForceApp3Module class defines the bindings for a module/package of code where the Visualforce Component that actually implements the UI is stored. Note that the binding for vf_layoutWidgetInfo points to an Apex class (the controller's inner Provider class), not the actual VF component to inject; the Provider inner class actually creates the specific component (via Dynamic Visualforce):-

public class ForceApp3Module extends di_Module {

    public override void configure() {

        // Example named binding to a Visualforce component (via Provider)
        bind('vf_layoutWidgetInfo').visualforceComponent().to(WidgetInfoController.Provider.class);

        // Example SObject binding (can be used by trigger frameworks, see force-di-demo-trigger)
        bind(Account.getSObjectType()).apex().sequence(20).to(CheckBalanceAccountTrigger.class);

        // Example named binding to a Lightning component
        bind('lc_actionWidgetManage').lightningComponent().to('c:widgetManager');
    }
}

NOTE: The above binding configuration module class is itself injected into the org-wide Injector by a corresponding custom metadata Binding record here. You can also see other bindings being configured in the above example; see below for more on these.

The actual implementation of the injected Visualforce Component widgetInfo looks like this:-

<apex:component controller="WidgetInfoController">
  <apex:attribute name="standardController"
     type="ApexPages.StandardController"
     assignTo="{!StandardControllerValue}" description=""/>
  <h1>Success I have been injected! {!standardController.Id}</h1>
</apex:component>

Decomposition Examples

The examples shown above, and others, are contained in the same repo as the library (for now). Each of the root package directories, force-app-1, force-app-2, and force-app-3, helps illustrate how the point of injection vs the runtime binding can be split across the boundaries of a DX package, thus aiding decomposition. The force-di-trigger-demo (not shown below) also contains a sample trigger handler framework using the library's ability to resolve multiple bindings (to trigger handlers) in a given sequence, thus supporting the best practice of a single trigger per object.
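
To give a feel for this, a trigger along the following lines could resolve its handlers dynamically (a rough sketch only; the WidgetTriggerHandler interface is hypothetical, and the Bindings resolver calls assume the library's API, so check the force-di-demo-trigger sample for the real thing):-

trigger WidgetTrigger on Widget__c (before insert, after insert, before update, after update) {
    // Resolve all Apex bindings registered against the Widget__c SObject,
    // returned in their configured sequence, and invoke each handler in turn
    for (di_Binding binding :
            di_Injector.Org.Bindings.bySObject(Widget__c.SObjectType).get()) {
        ((WidgetTriggerHandler) binding.getInstance()).handle();
    }
}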

Further Background and Features

I must confess, when I started to research Java Dependency Injection (mainly via Java Guice) I was skeptical as to how much I could get done without custom annotation and reflection support in Apex. However, I am pretty pleased with the result and how it has woven in with features like Custom Metadata Types, and with how the Visualforce and Lightning Component injectors have turned out. I plan to write future Wiki pages on the associated GitHub repo to share more details on the Force DI API. Meanwhile, here is a rundown of some of the more advanced features.

  • Provider Support
    Injectors, by default, only return one instance of the bound object, hence getInstance. Bindings that point to a class implementing the Provider interface (see inner interface) can override this, which also allows for the construction of classes that do not have default constructors or types not supported by Type.forName. This feature also works in conjunction with the ability to pass a parameter via the Apex Injector, e.g. di_Injector.Org.getInstance(PaymentEngine.class, someData); — see the sketch after this list.
  • Parameters
    Each of the three Injectors permits the passing of parameter/context information into the bound class or component. The examples above illustrate this.
  • Modules, Programmatic Binding Configuration and Injector Scopes
    Binding Modules group programmatic bindings and allow you to hook programmatically into the initialization of the Injector. Modules use the Fluent style interface to express bindings very clearly. The force-app-3 package in the repo uses this approach to define the bindings shown in the VF example above. You can also take a look at a worked example here of how local (one-off) Injectors can be used, and here for a more complex OO example of how conditional bindings work.
  • StandardController Passthrough
    For Visualforce Component injections, the framework's parameter passing capabilities support passing through the instance of the StandardController from the hosting page into the injected component, as can be seen in the example above.
  • Binding Discovery by SObject vs Name
    The examples above utilize single bindings by a unique name. However, it is becoming quite common to adapt trigger frameworks to support DI and thus allow a single trigger to dynamically reach out to one or more handlers (perhaps installed in separate DX packages). This example shows how Force DI could be used in such a scenario.
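
To give a feel for the Provider feature mentioned above, here is a minimal sketch (it assumes the library's di_Binding.Provider inner interface and its newInstance method; the PayPal class and PaymentData parameter type are hypothetical):-

public class PayPalProvider implements di_Binding.Provider {
    // Called by the Injector to construct the instance; params carries the
    // optional context passed via getInstance(PaymentEngine.class, someData)
    public Object newInstance(Object params) {
        return new PayPal((PaymentData) params);
    }
}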

Conclusion

This blog has hopefully whet your appetite to learn more! If so, head over to the repo and have a look through the samples in this blog and others. My next step is to wrap this up in a DX package to make it easier to get your hands on it; for now, download the repo and deploy via DX. I am also keen to explore what other aspects of Java Guice might make sense, such as the Linked Bindings feature.

Meanwhile, I would love feedback on the sample code and library thus far. Last but not least I would like to give a shout out to John Daniel and Doug Ayers for their great feedback during the initial development of the library and this blog. Enjoy!

 



Adding User Feedback to your Package

Feature Management became GA in Winter '18 and with it the ability to have finer control and visibility over how users of your package consume its functionality. It provides an Apex API and corresponding objects in your License Management Org (LMO). With these objects, you can switch features on and off, or even extend the capacity or duration of existing ones. For features you simply want to monitor, you can track adoption and send metrics back to your LMO as well. This is all done within the platform; no HTTP callouts are required.

This blog focuses on a tracking and metrics use case, and presents a Lightning Component that allows users to activate a given feature and thereafter contribute to an aggregated scoring of that feature, sent back to you, the package owner!

Consider this scenario. The latest Widgets App package has added a new feature to its Widgets Overview tab: the ability to analyze widgets! Using Feature Management, the publisher can now track who activates this feature and the ratings their customers give it.

[Screenshot: activating the Widget Analysis feature]

Trying it Out: You can find the full source code for this sample app and component here. You can also easily give it a try via the handy Deploy to SFDX button! Do not worry though, it will not write back to your production org; it is totally isolated in this mode, unless you choose to package it up and associate your LMA etc.

[Screenshot: component without the Manage Features permission]

Your customers may not want every user to have the ability to activate and deactivate features as shown above. The sample application associated with this blog includes a Manage Features custom permission that controls this ability. Users without this custom permission assigned only see the feedback portion of the component. If the feature has not been activated, a message is displayed informing the user to consult with their admin.

The component only sends the average score per feature per customer back to you. However, a user's individual score is captured in the subscriber org via custom objects, enabling you to pick up the phone and dig a bit deeper with customers giving particularly low or high scores.

[Screenshot: feature adoption custom objects in the subscriber org]

The following shows what you get back in your License Management Org (LMO), typically your company's production org. By using standard reporting, you can easily see which features have been activated, their average score, and how many users contributed to the rating.

[Screenshot: feature activation and scoring report in the LMO]

NOTE: The average score returned to the production org is represented as a value from 1 to 50.

So let's now dig into the code and the architecture of this solution. Firstly we need some feature parameters, then some corresponding Apex API calls to read and write them. You define the parameters via XML under the /featureParameters folder.

[Screenshot: feature parameter XML definition]

NOTE: In this blog, we are dealing with parameter values that flow from your customer's org back into your production org, hence the SubscriberToLmo usage. Also keep in mind that, per the docs, updates to your LMO may take up to 24 hours.

You can also create package feature parameters via a new tab on the Package Details page in your packaging org. However, if you are using Scratch Orgs, you define them via metadata directly. The Feedback component needs you to define three parameters per feature. To support our scenario, the following parameters are defined: WidgetAnalysis (Boolean), WidgetAnalysisCount (Integer), and WidgetAnalysisScore (Integer).
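
For example, the Boolean activation parameter might be defined along these lines (a sketch of the FeatureParameterBoolean metadata format; check the sample repo for the exact files):-

<?xml version="1.0" encoding="UTF-8"?>
<FeatureParameterBoolean xmlns="http://soap.sforce.com/2006/04/metadata">
    <dataflowDirection>SubscriberToLmo</dataflowDirection>
    <masterLabel>WidgetAnalysis</masterLabel>
    <value>false</value>
</FeatureParameterBoolean>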

[Screenshot: Feature Management Apex API usage]

NOTE: When your code sets parameter values, mixed DML rules apply, so typically you would set these via an async context such as @future or a Queueable.
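
For example, an Apex service along these lines might activate the feature and record the aggregated score (a sketch only; the class and method names are hypothetical, and the parameter names are those defined above):-

public class WidgetAnalysisFeatureService {
    // Async contexts avoid mixed DML conflicts when setting feature parameters
    @future
    public static void activateFeature() {
        FeatureManagement.setPackageBooleanValue('WidgetAnalysis', true);
    }
    @future
    public static void recordScore(Integer averageScore, Integer userCount) {
        // The average score flows back to the LMO as a value from 1 to 50
        FeatureManagement.setPackageIntegerValue('WidgetAnalysisScore', averageScore);
        FeatureManagement.setPackageIntegerValue('WidgetAnalysisCount', userCount);
    }
}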

To use the Feedback Component included in the sample code for this blog, you set two attributes: the name of your feature parameter, and an attribute that the rest of your component code can use to conditionally display your new feature! The following shows how, in the above scenario, the new Widget Analysis component has used the c:featureFeedback component to activate the feature, get a user rating, and control the display of the new graph to the user.

[Screenshot: c:featureFeedback component usage]

Take a deeper look at the Feature Feedback Component Apex controller to see the above Feature Management API in action, as well as how it aggregates the scores.

Back over in your License Management Org, the above report was based on one of the four new custom objects provided by a Salesforce package. The Feature Parameter object represents the package parameters defined above. The Feature Parameter Date, Feature Parameter Integer, and Feature Parameter Boolean objects hold the values set via code running in the subscriber org associated with the License record. Per the documentation, it can take up to 24 hours for these to update.

[Screenshot: feature parameter objects schema in the LMO]

Summary

Knowing more about how your customers consume applications you build on the platform is a big part of customer success and your own. You should also check out the Usage Metrics feature if you have not already.

There is quite a bit more to Feature Management to explore, such as controlling feature activation exclusively from your side, useful for pilots or enabling paid features. Of special note is the ability to hide related Custom Objects until features are activated. This is certainly welcome to your customers' admins when working with large packages, since they only see objects relating to activated features in the Object Manager.

Finally, I recommend you read the excellent documentation in the ISVforce Guide very carefully before trying this out in your main package, since this feature integrates with your live production data. The documentation recommends trying it out in a temporary package first and discussing the rollout with your legal team.

P.S. You can also use the FeatureManagement.checkPermission method to check if the user has been assigned a given custom permission without SOQL. Very useful!
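
For example (assuming a custom permission with the API name Manage_Features):-

// True if the running user has been assigned the custom permission
Boolean canManage = FeatureManagement.checkPermission('Manage_Features');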



Highlights from TrailheaDX 2017

[Photo: TrailheaDX 2017]

This was my first TrailheaDX and what an event it was! With my Field Guide in hand, I set out into the wilderness! In this blog I'll share some of my highlights, thoughts and links to the latest resources. Many of the newly announced things you can actually get your hands on now, which is amazing!

Overall the event felt well organized, if a little frantic at times. Sessions were 30 minutes each (often 20 after intros and questions), so each was bite-sized, but quite well tuned, with demos and code samples being shown.

SalesforceDX. Salesforce announced the public beta of this new technology aimed at improving the developer experience on the platform. SalesforceDX consists of several modules that will be extended further over time. Salesforce has done a great job of providing a wealth of Trailhead resources to get you started.

Einstein. Since its announcement, I and other developers have been waiting to get access to more custom tools and APIs; that wait is now well and truly over. As per my previous blogs, we have had the Einstein Vision API for a little while now. Announced at the event were no fewer than four other new Einstein-related tools and APIs.

  • Einstein Discovery. Salesforce demonstrated a very slick looking tool that allows you to point and click your way through analyzing various data sets, including those obtained from your custom objects! They have provided a Trailhead module on it here, and I plan on digging in! Pricing and further info is here.
  • Einstein Sentiment API. Allows you to interpret text strings for terms that indicate whether the statement or sentiment is positive, neutral or negative. This can be used to scan case comments, forum posts, feedback, Twitter posts etc. in an automated way, and respond or be alerted accordingly to what is being said.
  • Einstein Intent API. Allows you to interpret text strings for meanings, such as instructions or requests. Useful for routing case comments, or even implementing bots that can help automate or propose actions to be taken without human interpretation.
  • Einstein Object Detection API. An extension of the Einstein Vision API that allows individual items in a picture to be identified. For example, a pile of items on a coffee table, such as a mug, magazine, laptop or pot plant! Each can then be recognized and classified to build up more intel on what's in the picture for further processing and analysis.
  • PredictionIO on Heroku. Finally, if you want to go below the declarative tools or intentionally simplified Einstein APIs, you can build your own machine learning models using Heroku and the PredictionIO buildpack!

Platform Events. These allow messages to be published and subscribed to using a new type of object known as an Event object, suffixed with __e. Once created, you can use new Apex or REST APIs to send messages, and use either Apex Triggers or the Streaming API to receive them. There is also a new Process Builder action and Flow element to send messages. You can keep your messages within Force.com, or use them to integrate with other cloud platforms or the browser. The possibilities are quite endless here: async processing, inter-app comms, logging, UI notifications… I'm sure myself and other bloggers will be exploring them in the coming months!
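
For example, publishing an event from Apex might look like this (Order_Shipped__e is a hypothetical event object with a custom Order_Number__c field):-

// Publish a platform event; subscribers (Apex Triggers, Streaming API)
// receive it asynchronously
Order_Shipped__e evt = new Order_Shipped__e(Order_Number__c = 'ORD-0001');
Database.SaveResult result = EventBus.publish(evt);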

External Services. If you find a cool API you want to consume in Force.com, you currently have to write some code. No longer! If you have a schema that describes that API, you can use the External Services wizard to generate some Apex code that will call out to the API. What's special about this is that the generated Apex code is accessible via Process Builder and Flow, making clicks-not-code API integration possible. It's an excellent way to integrate with complementary services you or others might develop on platforms such as Heroku. I have just submitted a session to Dreamforce 2017 to explore this further, fingers crossed it gets selected! You can read more about it here in the meantime.

Sadly I did have to make a key decision to focus on the above topics and not so much on Lightning. I heard from other attendees that these sessions were excellent, and I did catch a brief sight of dynamic component rendering in Lightning App Builder, very cool!

Thanks Salesforce for filling my blog backlog for the next year or so! 😉

 

 



Packaging and Installing Rollups

Since v2.0 of the Declarative Rollup Tool, it is now possible to use Custom Metadata to store rollup configurations. Not only does this make it easy to transfer those you have set up in Sandbox via Change Sets, you can also use Salesforce packaging to capture those you've created and install them over and over into other orgs, like a reusable Change Set!

[Screenshot: rollup package install prompt]

Create and set up your Packaging Org

To do this, you need a Developer Edition org from Salesforce; this will act as your master org for your common rollup definitions. Here you can install the rollup tool package itself as a minimum, then also create common fields or objects used by your rollups if you wish. You can also install any other packages, such as the NPSP packages. Basically, prepare and test everything here as you would normally in Sandbox or Production.

[Screenshot: Developer Edition signup]

Creating your Package

Under the Setup menu, navigate to Create and then Packages. Create a new Package with an appropriate name. Click Add to add components; components are anything from Custom Objects, Fields, Apex Triggers and Apex Tests to the rollup definitions themselves. As a minimum, you will need to add the following components for rollups.

  • Apex Classes for the rollup(s) (names starting with dlrs)
  • Apex Triggers for the rollup(s) (names starting with dlrs)
  • Lookup Rollup Summary definition(s)

The platform adds a few other dependent things as you can see in the screenshot below, but don’t worry about these. You should have something like this…

[Screenshot: package components list]

Tip: After creating your rollups, go to Setup, then Custom Metadata Types, find your rollup definition, edit it, locate the Protected checkbox, then hit Save (at present this checkbox is not shown in the custom UI for rollups). This checkbox prevents users or other admins from changing your rollup definition once installed. You also need to use the managed package route, though, as described below.

Choosing Unmanaged or Managed Packages

This choice depends on whether you want to manage several updates/releases to your rollups over time, adding or updating as you improve things. An unmanaged package is basically like installing a template of your rollups; once installed that's it, they are no longer linked to your package. If, however, you want to install updates to them and/or stop people from editing your rollups (see protected mode above), you want a managed package.

Setting a namespace for your Managed Package

If you decided you only want an unmanaged package, skip this bit. Otherwise, on the Packages page (under Setup, then Create), click the Edit button and follow the prompts to enter a namespace; this is a short mnemonic that describes your package, like a unique ID. Once set, it cannot be changed. Next, find your Package detail, click Edit, and select the Managed checkbox before hitting Save.

Uploading your Rollup Package!

[Screenshot: Packages page]

This process basically builds your installer, by actually placing a copy of what you've done on the AppExchange servers, albeit as a private listing. This gives you an install link to use over and over as much as you like. Install it in sandbox and production orgs as needed.

Click on your package and then the Upload button, giving it a name and version of your choosing. Be sure to select the Release mode; in most cases, for this kind of packaged content, worrying about Beta vs Release mode is not a concern. Press Upload and wait.

[Screenshot: package upload page]

[Screenshot: uploaded package summary with install link]

Rinse and Repeat?

Keep in mind that if you went with the managed package route, you can go back to your packaging org, make some improvements, add more rollups etc., then repeat the upload process to get a new version of your package. Then simply go to any existing org with your prior release installed and upgrade.

NOTE: To ensure your rollup definitions are upgraded, they must be marked as Protected. If you don't care about this and only want new rollups to be added, retaining subscriber (installed org) changes, don't worry about Protected rollups.