Andy in the Cloud

From BBC Basic to Force.com and beyond…



Building Beyond What You Know: Apex Extensibility

Extensibility is a means for others to extend your application's behaviour, for customisation purposes, in a clear and well-defined way – it's a key consideration for supporting unique customisation use cases, both known and unknown. It's an approach many Salesforce products leverage, an Apex Trigger being the most obvious – though that is limited to data-manipulation use cases – but what about your UIs and processes?

This blog looks at two approaches to providing extensibility points within application logic, including automatically discovering compatible integrations – making configuration of your application easier and less error-prone for admins. Our use case is inspired by the calculator app on the Mac, which lets you pick modes suited to different user types – imagine you have built a basic Salesforce calculator app; how do you ensure it can be extended further after you deliver it?

The first approach is via Apex interfaces; the second helps admins extend your code with no-code tools via Actions (created via Flow). There are many places and ways in which configuration is performed; here we will explore customising Lightning App Builder to render dynamic drop-down lists instead of the traditional input fields when configuring your Calculator Lightning Web Component's properties. Finally, I created a small helper class that encapsulates the discovery logic – should you wish to take this further it might be helpful – and it comes complete with test coverage as well.

Apex Interfaces and Discovery

The principle here is straightforward: first identify the places where you want to allow extensibility – for example calculation, validation or display logic – and then define the information exchange required via an Apex interface. Depending on where your calculator is being used, you might use Custom Metadata Types or other configuration means, such as configuration data stored in Flows and Lightning page metadata. In the latter two cases, Salesforce tools also offer extensibility points to allow custom UIs to be rendered. Take the following Apex interface and implementation:

// A means to add new buttons to a web calculator
public interface ICalculatorFunctions {

    // Declare the buttons    
    List<CalculatorButton> getButtons();

    // Do the relevant calculations
    Object calculate(CalculatorButton button, CalculatorState state);
}
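
Note that CalculatorButton and CalculatorState are application classes not shown in full in this post. A minimal sketch of CalculatorButton, inferred from the constructor calls below (the exact shape is an assumption), might look like this:

// Hypothetical sketch of the button descriptor used by the interface above -
// the field names here are assumptions inferred from the constructor usage
public class CalculatorButton {
    public String label;     // Text shown on the button, e.g. 'mc'
    public String category;  // Logical grouping, e.g. 'memory' or 'function'
    public String cssClass;  // Styling hint for the LWC, e.g. 'btn-memory'
    public CalculatorButton(String label, String category, String cssClass) {
        this.label = label;
        this.category = category;
        this.cssClass = cssClass;
    }
}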

// A customisation to add scientific calculations to the calculator 
public class ScientificCalculator implements ICalculatorFunctions {

    // Additional buttons to display
    public List<CalculatorButton> getButtons() {
        List<CalculatorButton> buttons = new List<CalculatorButton>();        
        // Row 1: parentheses and memory functions
        buttons.add(new CalculatorButton('(', 'function', 'btn-function'));
        buttons.add(new CalculatorButton(')', 'function', 'btn-function'));
        buttons.add(new CalculatorButton('mc', 'memory', 'btn-memory'));
        buttons.add(new CalculatorButton('m+', 'memory', 'btn-memory'));
        buttons.add(new CalculatorButton('m-', 'memory', 'btn-memory'));
        // ... remaining buttons elided
        return buttons;
    }

    // ...
}

// A customisation to add developer calculations to the calculator 
public class DeveloperCalculator implements ICalculatorFunctions {

    public List<CalculatorButton> getButtons() {
        // ...
    }

    // ...
}

Putting aside for the moment how additional implementations are configured, the following is a basic way to dynamically create an instance of a known implementation of the ICalculatorFunctions interface:

ICalculatorFunctions calculatorFunctions =
    (ICalculatorFunctions) Type.forName('ScientificCalculator').newInstance();
List<CalculatorButton> buttons = calculatorFunctions.getButtons();

Of course, in reality ScientificCalculator is not hard coded as shown above; as mentioned, some form of configuration storage is used to let admins configure the specific class name – typically a string field that stores it. In the example in this blog, our Calculator Lightning Web Component property configuration is stored within the Lightning page the component is placed on.
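
As a sketch of what that configuration-driven instantiation might look like, the following reads the class name from a Custom Metadata Type record – note that the Calculator_Setting__mdt object and Implementation_Class__c field names are purely illustrative assumptions:

// Hypothetical sketch - Calculator_Setting__mdt and Implementation_Class__c
// are illustrative names and not part of the actual sample code
public static ICalculatorFunctions createCalculator(String settingName) {
    Calculator_Setting__mdt setting = Calculator_Setting__mdt.getInstance(settingName);
    Type implType = Type.forName(setting.Implementation_Class__c);
    if (implType == null) {
        throw new IllegalArgumentException('Unknown class: ' + setting.Implementation_Class__c);
    }
    return (ICalculatorFunctions) implType.newInstance();
}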

Using a simple text field for the property basically asks admins to remember or search for class names, which is not the best of experiences, so custom configuration UIs can be built to perform the searching and discovery for them. Key to this, in the case of an Apex interface, is the ApexTypeImplementor object, which allows you to dynamically query for implementations of ICalculatorFunctions. The following SOQL query returns the names of the two classes above, ScientificCalculator and DeveloperCalculator.

SELECT Id, ClassName, ClassNamespacePrefix, InterfaceName, InterfaceNamespacePrefix,
       IsConcrete, ApexClass.IsValid, ApexClass.Status
FROM ApexTypeImplementor
WHERE InterfaceName = 'ICalculatorFunctions'
  AND IsConcrete = true
  AND ApexClass.IsValid = true
  AND ApexClass.Status = 'Active'
WITH USER_MODE
ORDER BY ClassName
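
If you want to run this query directly from Apex rather than via a helper, a minimal sketch might look like this:

// Minimal sketch: query implementations of the interface directly from Apex
List<ApexTypeImplementor> implementors = [
    SELECT ClassName, ClassNamespacePrefix
    FROM ApexTypeImplementor
    WHERE InterfaceName = 'ICalculatorFunctions'
      AND IsConcrete = true
      AND ApexClass.IsValid = true
    WITH USER_MODE
    ORDER BY ClassName];
for (ApexTypeImplementor impl : implementors) {
    System.debug('Found implementation: ' + impl.ClassName);
}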

You can read more about ApexTypeImplementor and various usage considerations here. In your application, you can choose where to place your configuration UIs – your own custom UI, or one already provided by the platform. In the latter case, we are providing a Calculator LWC component to administrators and wish to offer a means to extend it with additional Apex code via Apex interfaces. We could simply expose a text property to let the administrator specify which implementing Apex class name to use. Fortunately, we can do better than this and annotate the LWC property with another Apex class that dynamically retrieves a list of only the Apex classes implementing that interface, as shown below.

The following shows the LWC component metadata configuration and the Apex class that uses the ApexTypeImplementor object shown above to list only the Apex classes implementing the ICalculatorFunctions interface. The source code for this component is included in the GitHub repository linked below. By using the datasource attribute on the targetConfig property element, Salesforce renders a drop-down list instead of a simple text box.

<?xml version="1.0" encoding="UTF-8"?>
<LightningComponentBundle xmlns="http://soap.sforce.com/2006/04/metadata">
    <apiVersion>64.0</apiVersion>
    <isExposed>true</isExposed>
    <targets>
        <target>lightning__RecordPage</target>
        <target>lightning__AppPage</target>
        <target>lightning__HomePage</target>
        <target>lightning__UtilityBar</target>
    </targets>
    <targetConfigs>
        <targetConfig targets="lightning__RecordPage,lightning__AppPage,lightning__HomePage,lightning__UtilityBar">
            <property 
                name="usage" 
                type="String" 
                label="Calculator Usage" 
                description="Select the calculator type to determine available buttons"
                datasource="apex://CalculatorUsagePickList"
                default=""
            />
        </targetConfig>
    </targetConfigs>
</LightningComponentBundle>

The following code implements the CalculatorUsagePickList class referenced above by extending the VisualEditor.DynamicPickList base class to dynamically discover and render the available implementations of the interface. It uses a small library class, Extensions, that I built for this blog, which wraps the SOQL shown above for the ApexTypeImplementor object. It also allows for a richer, more type-safe way to specify the interface, and formats the results in a way that makes the class names more readable.

public class CalculatorUsagePickList extends VisualEditor.DynamicPickList {

    public override VisualEditor.DataRow getDefaultValue() {
        VisualEditor.DataRow defaultValue = new VisualEditor.DataRow('Basic Calculator', '');
        return defaultValue;
    }
    
    public override VisualEditor.DynamicPickListRows getValues() {
        VisualEditor.DynamicPickListRows picklistValues = new VisualEditor.DynamicPickListRows();
                
        // Use Extensions.find to get all ICalculatorFunctions implementations
        Extensions extensions = new Extensions();
        Extensions.ApexExtensionsFindResults results = 
                 extensions.find(ICalculatorFunctions.class);

        // Add the basic calculator option (no additional buttons) and any dynamically discovered implementations
        VisualEditor.DataRow basicOption = new VisualEditor.DataRow('Basic Calculator', '');
        picklistValues.addRow(basicOption);
        List<Extensions.ApexExtensionsFindResult> names = results.toNames();        
        for (Extensions.ApexExtensionsFindResult name : names) {
            VisualEditor.DataRow value = 
                  new VisualEditor.DataRow(name.label, name.name);
            picklistValues.addRow(value);
        }
        
        return picklistValues;
    }
}
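
The Extensions helper itself lives in the linked repository; conceptually, its find method is a thin wrapper over the ApexTypeImplementor query shown earlier. A simplified sketch of the idea (not the actual implementation, which also handles namespaces and label formatting) might be:

// Simplified sketch of a find-style wrapper - see the repo for the real Extensions class
public static List<String> findImplementations(Type interfaceType) {
    String interfaceName = interfaceType.getName();
    List<String> classNames = new List<String>();
    for (ApexTypeImplementor impl : [
            SELECT ClassName FROM ApexTypeImplementor
            WHERE InterfaceName = :interfaceName
              AND IsConcrete = true AND ApexClass.IsValid = true
            WITH USER_MODE ORDER BY ClassName]) {
        classNames.add(impl.ClassName);
    }
    return classNames;
}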

Of course, Apex is not the only way to implement logic on the Salesforce platform – we can also use Flow. Although slightly different in approach, the above principles can be applied to allow users to customise your application logic with Flow as well – just as other platform features offer.

Actions and Discovery

Actions are now a standard means of defining reusable tasks across many platform tools – with Salesforce providing many standard actions to access data, send emails, perform approvals and more. The ability for admins to create custom actions via Flow is the key no-code means of extending other Flows, Lightning UIs and Agentforce. It is also possible to have your Salesforce applications offer Flow extensibility by using the Apex Invocable API. The following Apex code shows how to invoke a Flow action from Apex – though once again hard coded here, imagine the Flow name coming from a configuration store, a Custom Metadata Type or property configuration as shown above.

Invocable.Action action = Invocable.Action.createCustomAction('Flow', 'HelloWorld');
action.setInvocationParameter('Name', 'Codey');
List<Invocable.Action.Result> results = action.invoke();
System.debug(results[0].getOutputParameters().get('Message'));
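
Invocable.Action results report success or failure per invocation, so it is worth checking the result before reading output parameters – for example:

// Check the action result before reading output parameters
Invocable.Action.Result result = results[0];
if (result.isSuccess()) {
    System.debug('Flow returned: ' + result.getOutputParameters().get('Message'));
} else {
    for (Invocable.Action.Error error : result.getErrors()) {
        System.debug('Action failed: ' + error.getCode() + ' ' + error.getMessage());
    }
}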

If you have used platform tools like Flow, Lightning App Builder, Buttons, Agent Builder and more, you will have noticed that they let admins search for actions – there is no need to remember action names. You can achieve this in your own configuration UIs by using the Standard and Custom Actions list APIs. The snag is that this API is not directly available to Apex; you have to call the Salesforce REST API from Apex.

// actionType is either 'standard' or a custom action type such as 'flow'
String orgDomainUrl = URL.getOrgDomainUrl().toExternalForm(); // Org Domain scoped callouts do not require Named Credentials
String sessionId = UserInfo.getSessionId();
HttpRequest req = new HttpRequest();
req.setEndpoint(actionType == 'standard'
    ? orgDomainUrl + '/services/data/v64.0/actions/standard'
    : orgDomainUrl + '/services/data/v64.0/actions/custom/' + actionType);
req.setMethod('GET');
req.setHeader('Authorization', 'Bearer ' + sessionId);
req.setHeader('Content-Type', 'application/json');
Http http = new Http();
HttpResponse res = http.send(req);

This returns the following response:

{
  "actions": [
    {
      "label": "Hello World",
      "name": "HelloWorld",
      "type": "FLOW",
      "url": "/services/data/v64.0/actions/custom/flow/HelloWorld"
    }
  ]
}
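
To work with this response in Apex you can deserialize it into a simple wrapper – a minimal sketch, where the wrapper class names are illustrative:

// Illustrative wrapper types for deserializing the actions list response
public class ActionList {
    public List<ActionInfo> actions;
}
public class ActionInfo {
    public String label;
    public String name;
    public String type;
    public String url;
}

// Parse the REST response body from the callout above
ActionList parsed = (ActionList) JSON.deserialize(res.getBody(), ActionList.class);
for (ActionInfo info : parsed.actions) {
    System.debug(info.label + ' (' + info.name + ')');
}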

I also integrated this approach into the Extensions helper used in the Apex interface example:

// Find all Flow custom actions
List<Extensions.ActionExtensionsFindResult> flowActions = 
    new Extensions().find(Extensions.ActionTypes.FLOW);
System.debug('Found ' + flowActions.size() + ' Flow custom actions:');
for (Extensions.ActionExtensionsFindResult action : flowActions) {
    System.debug('Action: ' + action.label);
    System.debug('  Name: ' + action.name);
    System.debug('  Qualified Label: ' + action.labelQualified);
    System.debug('  Action Type: ' + action.action.getType());
    System.debug('---');
}

Apex Interfaces vs Actions

Both of these approaches allow you to invoke logic written in either code or no-code from within your Apex code – but which one should you use? Certainly, performance is a key factor, especially if the code you're adding extensibility to is deep in your core logic and/or tied to processes that operate in bulk or in the background against large volumes of data. Another factor is the information being exchanged: is it simple native values (numbers, strings) or lists, or more complex nested structures? In short, the extensibility context plays a role in your choice – as does the use case.

In general, if you’re concerned about performance (trigger context included here) and/or the use case may involve more than moderate calculations/if/then/else logic, I would go with Apex interfaces. Actions (typically implemented in Flow) can offer an easy way for admins to customize your UI logic, search results, add new button handling, or inject additional content on your page/component. Also worth keeping mind, Actions do come in other forms such as those from Salesforce; even Agents are Actions – so simply allowing Admins to reuse Standard Actions within your logic is a potential value to consider – and might be more optimal than them attaching to record changes for example.

Types of Extensibility, Motivations and Value

Carefully consider and validate extensibility use cases before embedding them in your Apex code; in some cases, an admin may find it more natural to use Flow and/or Lightning App Builder to orchestrate a brand-new alternative UI/process to the one you provide rather than extend it from within. By reusing existing objects and/or Apex Invocable actions, you are effectively building around your application logic vs. extending it from within, as per the patterns above. Both patterns are valid, though.

You might also wonder how important providing extensibility is to your users – especially if you have not been asked to include it. I once worked on an API enablement initiative with a Product Manager who had a strong affinity for providing integration facilities as standard. In a prior software purchasing role, they recognized the value as a form of insurance, as they could always build around or extend the application logic if they later found a feature gap.

My experience has also given me an appreciation that strong ecosystems thrive on customization abilities, and strong ecosystems strengthen the value of an offering – allowing influencers, customers and partners to innovate further. And in case you're wondering whether this is just of interest to ISVs building AppExchange packages – the answer is no; it's just as important when building internal solutions, where internal ecosystems and ease of customization still matter, especially in larger businesses.

Here is a GitHub repository with the sample code and Extensions helper class referenced in this blog. Let me know your thoughts in the comments, and what ideas this prompts for your solutions!



Infinite Data Scrolling with Apex Cursors Beta

The infinite scrolling feature of the Lightning Datatable component allows a practically unlimited amount of data to be loaded incrementally as the user scrolls. This is a common web UI approach to load pages faster without consuming unnecessary database and compute resources retrieving records some users may never view. The currently recommended approach to retrieve records is SOQL OFFSET and LIMIT – however, ironically, this approach is limited to a maximum of 2,000 records. Substituting this with the new Apex Cursors feature, as you can see in the screenshot below, we have already gone past this limit! In fact, the limit for Apex Cursors is 50 million records – that said, I would seriously question the sanity of a requirement to load this many! This blog gives an overview of the Apex Cursors feature and how it can be adapted to work with LWC components.

If you want to go ahead and deploy this demo, check out the GitHub repo here. Also please keep in mind that as this is a Beta release, Salesforce does not recommend use in production at this time.

What are Apex Cursors?

If you’re not familiar, the Apex Cursors feature (currently in Beta), it enables you to provide a SOQL statement to the platform and for it to return a means for you to incrementally fetch chunks of records from the result set – this feels similar to the way Batch Apex works – except that it’s much more flexible as you decide when to retrieve the records and, in fact, in any order or chunk size. The standard documentation and much of what you’ll read elsewhere online focuses on using it to drive your own custom Apex async workloads using Apex Queueable as an alternative to Batch Apex – however, because it’s just an Apex API, it can be used for other use cases such as the one featured in this blog.

Usage is simple: first, create a cursor with Database.getCursor (or getCursorWithBinds), giving your desired SOQL. Then, with the returned Cursor object, call the fetch method with the desired position and count. Unlike Batch Apex, you can actually go back and forth using the position parameter, as this is not an iterator interface. You can also determine the total result size via getNumRecords. Before we dive into the full demo code, let's use some basic Apex to explore the feature by querying 5,000 accounts.

Database.Cursor cursor = Database.getCursor(
    'SELECT Id, Name, Industry, Type, BillingCity, Phone ' +
    'FROM Account ' +
    'WHERE Name LIKE \'TEST%\' WITH USER_MODE ORDER BY Name');

System.debug('***** BEFORE FETCH *****');
System.debug('Total Records: ' + cursor.getNumRecords());
System.debug('Limit Queries: ' + Limits.getLimitQueries());
System.debug('Limit Query Rows: ' + Limits.getLimitQueryRows());
System.debug('Limit Aggregate Queries: ' + Limits.getLimitAggregateQueries());
System.debug('Limit Apex Cursor Rows: ' + Limits.getLimitApexCursorRows());
System.debug('Limit Fetch Calls On Apex Cursor: ' + Limits.getLimitFetchCallsOnApexCursor());

List<Account> accounts = cursor.fetch(1, 500);

System.debug('***** AFTER FETCH *****');
System.debug('Accounts Read: ' + accounts.size());
System.debug('Limit Queries: ' + Limits.getLimitQueries());
System.debug('Limit Query Rows: ' + Limits.getLimitQueryRows());
System.debug('Limit Aggregate Queries: ' + Limits.getLimitAggregateQueries());
System.debug('Limit Apex Cursor Rows: ' + Limits.getLimitApexCursorRows());
System.debug('Limit Fetch Calls On Apex Cursor: ' + Limits.getLimitFetchCallsOnApexCursor());

The code above gives us the following debug output:

DEBUG|***** BEFORE FETCH *****
DEBUG|Total Records: 5000
DEBUG|Limit Queries: 100
DEBUG|Limit Query Rows: 50000
DEBUG|Limit Aggregate Queries: 300
DEBUG|Limit Apex Cursor Rows: 50000000
DEBUG|Limit Fetch Calls On Apex Cursor: 10
DEBUG|***** AFTER FETCH *****
DEBUG|Accounts Read: 500
DEBUG|Limit Queries: 100
DEBUG|Limit Query Rows: 50000
DEBUG|Limit Aggregate Queries: 300
DEBUG|Limit Apex Cursor Rows: 50000000
DEBUG|Limit Fetch Calls On Apex Cursor: 10

The findings, at least for the Beta, show that retrieving or counting records does not count against the traditional SOQL limits; all the common ones are untouched. However, before you get too excited, this is not a free alternative – it has its own limits! Some of these you can see above, though for the Beta they do not seem to be getting updated – see further discussion here. The most important one, however, is not exposed via the Limits class but is documented here: you can only create 10,000 cursors per org per day. The other important aspect is that cursors can span multiple requests, so long as you can find a way to persist them; however, they are deleted after 48 hours. This is stated to align with the sister feature, Salesforce REST API Cursors.

Using Cursors with LWC Components

Cursors used in the context of a Queueable are persisted in the class state. For LWC components, while they don't give an error, cursors are (at the time of this Beta) not serialisable between the LWC client and the Apex controller – I suspect for security reasons, so I posted here to confirm. This, however, is not the end of the story, as we have other forms of state management between Apex and LWC – specifically the Apex Session cache – as the Apex controller code below demonstrates.

public with sharing class ApexCursorDemoController {
    
    @AuraEnabled(cacheable=false)
    public static LoadMoreRecordsResult loadMoreRecords(Integer offset, Integer batchSize) {
        try {
            Database.Cursor cursor = (Database.Cursor) Cache.Session.get('testaccounts');
            if(cursor == null) {
                cursor = Database.getCursor(
                    'SELECT Id, Name, Industry, Type, BillingCity, Phone FROM Account ' +
                    'WHERE Name LIKE \'TEST%\' ORDER BY Name', AccessLevel.USER_MODE);
                Cache.Session.put('testaccounts', cursor);
            }            
            LoadMoreRecordsResult result = new LoadMoreRecordsResult();
            result.records = cursor.fetch(offset, batchSize);
            result.offset = offset + batchSize;
            result.totalRecords = cursor.getNumRecords();
            result.hasMore = result.offset < result.totalRecords;
            return result;
        } catch (Exception e) {
            Cache.Session.remove('testaccounts'); // Key must match the one used in put above
            throw new AuraHandledException('Error loading records: ' + e.getMessage());
        }
    }
    
    public class LoadMoreRecordsResult {
        @AuraEnabled public List<Account> records;
        @AuraEnabled public Integer offset;
        @AuraEnabled public Boolean hasMore;
        @AuraEnabled public Integer totalRecords;
    }
}

Because the use of Apex Cursors is entirely contained within the Apex controller, the LWC component HTML and JavaScript are as per a traditional implementation of the Datatable component using the infinite scrolling feature – you can click here to see the code.

Thoughts and Community Findings

This feature is a welcome, modern addition to Apex, and I look forward to the full GA that allows us to use it confidently in production scenarios. Here is a short summary of some usage guidelines to be aware of, plus some very good points already raised by the community in the Apex Previews and Betas group.

  • Cursor Sharing and Security Considerations
    As you can see above, user mode is not the default, but is enabled per best practice via AccessLevel.USER_MODE. As it is possible (at least in the Beta) to share cursors, it is important to consider who/where else you share a cursor with, as the user and sharing rules could differ. The code above uses the user's session cache, so it is explicitly scoped to the current user only. I suspect (see below) that sharing, CRUD and FLS may be getting re-evaluated on fetch anyway – but just in case (or if you explicitly used system mode, or the default), this is something to keep in mind. On the flip side, another use case to explore might be that, in some cases, it is optimal to have a shared record set evaluated once and maintained across a set of users.
  • Partial Cached Results
    The result set appears to be cached when the cursor is created – though I suspect only the Ids, since changes to record field values are seen in the results on refresh. When a record is deleted, however, it is not returned from the corresponding fetch call – so note the count you ask for when calling fetch may not be what you get back, though the position indexing and total record count remain the same (see the sketch after this list). Likewise, if you create a record, even if it matches the original criteria, it is not returned. These nuances and others are further discussed here – hopefully the GA documentation can formally confirm the behaviours.
  • Ability to Tidy up
    The 10k daily limit, while seemingly generous, makes it desirable to be able to explicitly delete cursors as well. Suggestion posted here.
  • Just because you can show more data – does not mean you should
    Just because you can now expose more data does not always mean you should – consider the fields and rows your users really need, as you would with any list/data table view you're building; in most cases, not having any filter at all is likely an anti-pattern. Additionally, don't forget Salesforce has a number of analytical tools that can further help users when moving through large amounts of data.
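
Given the partial-result behaviour described above, a defensive pattern is to keep fetching until you have the rows you need or the cursor is exhausted. A rough sketch, based on the Beta behaviour observed above (and mindful of the per-transaction fetch call limit):

// Fetch defensively - deleted rows reduce the returned count but not the position indexing
Database.Cursor cursor = Database.getCursor('SELECT Id, Name FROM Account ORDER BY Name');
Integer position = 0;
Integer batchSize = 200;
List<SObject> collected = new List<SObject>();
while (position < cursor.getNumRecords() && collected.size() < 500) {
    List<SObject> chunk = cursor.fetch(position, batchSize);
    collected.addAll(chunk); // chunk may contain fewer than batchSize rows
    position += batchSize;
}
System.debug('Collected ' + collected.size() + ' accounts');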

Additional Resources

Here is a list of useful resources I found:



Five ways Heroku AppLink Enhances Salesforce Development Capabilities

Over the years through this blog I have enjoyed covering various advancements in Salesforce APIs, Apex, Flow and, more recently, Agentforce. While I have featured Heroku quite a bit, the reality is that – despite it being a Salesforce offering – access to Heroku for a Salesforce developer has felt like plugging in another platform. Not just because on the surface its DX is different from SFDX, but because, in a more material sense, it has not been integrated with the rest of the platform and its existing tools in the way Apex and Flow are – requiring you to do the integration plumbing before you can access its value.

Now, with the new “free” Heroku AppLink add-on, Heroku has been tangibly integrated by Salesforce into the Salesforce platform – it, and code deployed to it, even sit under the Setup menu. So now it is finally time to reflect on what Heroku brings to the party!

This blog starts a series on what this new capability means for Salesforce development. Choosing the best tool for the job is crucial for maximizing the holistic development approach the Salesforce platform offers. Apex, Flow, LWC, etc., are still important tools in your toolkit. In my next blog in this series, I’ll share hands-on content, but for now, let’s explore five reasons to be aware of what Heroku and Heroku AppLink can do for Salesforce development:

1. Seamlessly and Securely Attach Unlimited Compute to your Orgs

At times, certain automations or user interactions demand improved response times and/or the ability to handle increasing data volumes. While Apex and Flow have a number of options here, they are inherently constrained by the multi-tenant nature of the core platform that runs their logic. The core platform's first priority is a stable environment for all – thus we largely see the continued realities of the infamous governor limits. Going beyond some, though not all, of the governor limits that either stop or at least slow things down is now possible – without leaving the Salesforce family of services.

You can deploy code to Heroku with git push heroku main – which works in much the same way as sf project deploy, uploading your code and running it – then declaratively assign your compute needs and attach access to it (using the publish command) for use within your Flow, Apex, LWC or Agentforce implementations – across as many orgs as you like, using the connect command.

Heroku supports both credit card (pay as you go) and contract billing for compute usage – with the smallest plan, at $5 a month, already able to run complex and lengthy compute tasks easily – though mileage of course varies by use case.

2. Tap into the world's most popular languages and frameworks within Salesforce

Salesforce has some history of embracing industry languages, such as Node.js for Lightning Web Components, and with that taps into wider skill-set pools as well as commercial and open-source web component libraries. With Heroku AppLink, this is now true for backend logic – and in fact it extends language support to Python, .NET, Ruby, Java and many more, all with their own rich communities, libraries and frameworks. Does this mean I am suggesting you port all your Apex code to other languages? No – remember, this is a best-tool-for-the-job mindset – so if what you need can be better served with existing skills, code or libraries available in such languages, then with AppLink you can now tap into these while staying within the Salesforce services.

Note: Heroku AppLink presently provides additional SDK support only for Node.js and Python. That said, its API is available to any language – and is fully documented. Java samples included with AppLink illustrate how to access the API directly, along with existing Salesforce API libraries.

You may also think that with this flexibility comes more complexity; well, like the rest of Salesforce, Heroku keeps things powerful yet simple. Its buildpacks and the simple CLI command git push heroku main remove the heavy lifting of building, deploying and scaling your code that would otherwise require skills in AWS, GCP or other platforms – what's more, Heroku also curates the latest operating system versions and build tools for you.

3. More power to existing Apex, Flow and Agentforce investments

As we are practicing choosing the best tool for the job – and for complex solutions it's typically not a case of one size fits all – we have a spectrum of tools to choose from. While one solution, for example, might be mostly delivered through Flow, its most complex parts might depend on some code – and thus interoperability between each approach is important.

Heroku AppLink draws on the use of platform actions – which have over the years become the de facto means to decompose logic/automations – allowing reusable logic to be built in code via Apex or declaratively in Flow. Now, with Heroku AppLink, you can effectively write actions in any of the aforementioned languages and, if needed, scale that code beyond traditional limits such as CPU timeout and heap – while benefiting from increased execution times.

Also critical to such code is user context, so that data access honors the current user – both in terms of their object, field and sharing permissions, and for audit information retaining a trail of who did what and when to the data. Thus, Heroku AppLink can run code in Salesforce “user mode”, much like Apex and Flow – your SOQL and DML all operate in this mode; in fact, it's the default, with no need to qualify it as in Apex. This follows the industry pattern of the Principle of Least Privilege – and there is also a way to selectively elevate permissions as needed using permission sets.

4. Make External Integrations more Secure

Heroku is also known to the wider world as a PaaS (Platform-as-a-Service), providing easy-to-use compute, data and, more recently, AI services without the infrastructure hassle. This leads to Heroku customers building practically any application or service they desire – again, in any language they desire. For example, a web/mobile experience can be hosted along with the required data storage – both able to scale to global event needs. Heroku AppLink joins Heroku Connect to start a family of add-ons that also help such consumer-facing or even internal experiences tap into Salesforce org or even Data Cloud data – by managing the connection details securely in one place, alleviating the complexity of managing OAuth, JWT, certificates, etc.

5. Leverage additional data and AI services

If all your data resides within a Salesforce org or Data Cloud, Heroku AppLink provides an easy-to-use SDK that makes using the most popular APIs easy, and even provides a long-time favorite of mine, Martin Fowler's Unit of Work, over the relatively complex composite Salesforce APIs to manage multi-object updates within a single transaction.

Beyond this, you can also take advantage of Heroku Postgres to store additional data that does not need to be in the org but needs to be close at hand to your code – and likewise attach to data services elsewhere in AWS, DynamoDB for example. Heroku also provides new AI services offering a set of simple-to-use AI tooling primitives on top of the latest industry LLMs. All these Heroku services exist with the same trust and governance as other Salesforce services, so leveraging them means you don't have to move data or compute outside of Salesforce – if that's something your business is particularly sensitive to.

Summary

Salesforce continues to bring exciting new innovations to its no-code and code tools, yet they broaden the burden of making the right choice. Heroku AppLink has indeed added to the mix – expanding the classic question of when to use Flow vs. Apex to when to use Flow vs. Apex vs. Heroku.

I’ve noticed that the Flow vs. Apex debate is still strong at community events this year. When it comes to “code,” whether it’s Apex, Python, Java, or .NET—excluding Triggers, which AppLink doesn’t support—my opinion on no-code versus code remains the same – consider wisely use of one or both accordingly. In respect to coded needs, I still would still generally recommend Apex first, that is unless your project needs align with the points above – then it’s worth further discussion. Ultimately, it’s about finding a suitable mix instead with all three supporting actions, it’s easier to blend and evolve as needed.

As I hinted at the start of this blog, I plan to get into more hands-on blogs on Heroku AppLink and some reflections on ISV usage. Between these, I also have other Apex-related topics I want to explore—such as the new Apex Cursors feature. In the meantime, here below are some useful links about Heroku AppLink available as of the time of this blog.

Thanks for reading, I hope it was useful!



New Book – Salesforce Platform Enterprise Architecture 4th Edition

It has been nearly 10 years since the first edition of my book, Force.com Enterprise Architecture, was published back in 2014, and clearly a lot has moved on since then – and not just the platform capabilities. Keeping pace with branding changes resulted in the second edition being called Lightning Platform Enterprise Architecture. This latest 4th edition, Salesforce Platform Enterprise Architecture, has probably the most accurate and, I hope, enduring title! It is also nearly twice the size of the first edition, coming in at 681 pages – so much so that its 16 chapters are now split over 4 parts to make digesting the book easier. As is tradition by now, this blog will cover some of the latest updates and additions and what to expect in general from the book.

Links: Amazon | Packt Publishing

Motivation and what to expect…

I first came to the platform with a lot of experience and understanding of architecture principles from other platforms, including platform-agnostic patterns such as those of Martin Fowler. Applying patterns to the platform requires some nuance at times; thus this book is about enterprise architecture considerations as applied to the Salesforce Platform – and much more in respect of its capabilities, which accelerate traditional application development concerns and tasks for you, but can hinder if not considered upfront in your architecture design.

The book dedicates 3 of its 16 chapters to Apex code separation of concerns and how to lay out your code, while the rest covers the full spectrum of application concerns, such as designing for code and database performance, profiling, testing, data storage design, building APIs, extensibility and more – and, of course, how to write much less code by harnessing the declarative aspects. All this is driven throughout the book via a fictitious reference application known as FormulaForce, based on Formula 1 motor racing – full source is included. Also included are numerous tips, tricks and info blocks dotted among the pages that highlight learnings from my own experiences and, latterly, those of the many amazing reviewers.

As good as the platform is at abstracting a huge amount of the complexity of managing and securing modern day applications for you, there is no escaping having to fully understand good architecture principles that would, and do, in most cases apply elsewhere, such as query optimization and indexing.

If you are developing a lot on the Salesforce Platform, have a few years' experience and are passionate about creating applications that are enduring and empathetic to the platform itself – this book is for you. Read on for further highlights!

Is it for ISVs or anyone?

This edition is the first to split out the motivations of packaging for internal distribution within your own company vs. distributing a commercial solution on AppExchange. Whichever motivation fits you, the chapters call out which bits are relevant. Even if you are not into packaging at all right now, much of the content on Apex, Lightning, Flow, APIs, etc. is of course agnostic to how you choose to distribute your application. Regardless of who you are sharing it with, the book gives you an appreciation of the advantages and limits of package-based distribution – in my view a key aspect of your overall architecture considerations checklist.

Have the Apex coding patterns evolved given recent features?

The advent of user mode capabilities has had a big impact on the approach taken within the sample application of the book when it comes to using the Apex Enterprise Patterns Library (fflib), specifically in how the Unit of Work and Selector patterns are used when updating and querying for data respectively.

Additionally, the Domain pattern chapter now reflects on how Domain logic can, if desired, allow the developer to split Domain and Trigger logic into separate classes, and thus introduces a new pattern (SDCL) to capture this approach. That's not to say the original model has gone – it is still very much present – but it felt appropriate to include something in the book that reflects how the library usage has evolved over the years. Thanks to my good friend John M. Daniel for pushing for this!

Apply skills in other languages and additional scaling options

A brand new chapter in the book is dedicated to how Heroku and Functions can be used to leverage existing skills – or indeed investments, open source or internal – in code written in other languages such as Java or Node.js. These technologies also open up further scaling options. The chapter breaks down the approaches using the book's sample application as a basis, continuing the Formula 1 motor racing theme by introducing Web and API interfaces for race fans and vendors (an authentication scenario) to interact with the application's backend APIs. Later in the book, the integration chapter doubles down further on authoring APIs via the OpenAPI standard and how that simplifies integration with the platform.

Lightning Web Components and Lightning Experience expansion

Since the third edition, all but one of the Aura components have been removed, as many only existed at the time as workarounds to the lack of LWC-enabled integration points. Two large chapters are dedicated to the Lightning framework itself: how it handles separation of concerns, eventing and security, and its alignment with the industry Web Components standard. They complete with a section covering the ever-expanding possibilities to extend vs. build when thinking about your application's user experience, regardless of whether that's desktop, mobile, or driven through the native UIs and tools such as Lightning Pages or Flow. And yes, I still love the Utility Bar integration the most – as you can see from the number of historic posts and tools on the topic here.

APIs, APIs and APIs…

Many years ago a product manager once thanked me for pushing an API strategy internally; they appreciated first-hand, as a past buyer, that APIs are effectively an insurance policy for buyers when it comes to unexpected feature gaps during or after implementation. Thus several of the chapters highlight the many Salesforce APIs in contextual ways.

Though it's not until the integration and extensibility chapter that the book really doubles down on building your own application APIs – versioning, naming and design – and the value they bring to your fellow developers, customers and/or partner ecosystems. External Services is one of the most impressive aspects of integrating with the Salesforce Platform for me, and it was my pleasure to update the book with a refreshed Heroku-based API leveraging OpenAPI to get the full power of the feature – the two are a perfect combo!

And …

This blog is already in danger of becoming chapter-sized itself, so I will leave it here with the above highlights – and, before I close, share my deepest thanks to this edition's reviewers John M. Daniel, Sebastiano Costanzo and Arvind Narasimhan, and foreword author Daniel J. Peter.

I will be popping up in the not too distant future on ApexHours (https://www.apexhours.com/) and perhaps in other locations to talk in more depth about the book and likely other side topics of interest. I will update this blog as those resources arrive. Thank you all in the meantime – the Salesforce community is the best and remains very close to my heart. Enjoy!

AndyInTheCloud

P.S. Suffice to say this 4th edition is an evolution of the prior three books! I would like to also thank the following fine folks for their support as past reviewers and foreword authors!

Past Reviewers and Forewords:
Matt Bingham, Steven Herod, Matt Lacey, Andrew Smith, Rohit Arora, Karanraj Samkaranarayanan, Jitendra Zaa, Joshua Burk, Peter Knolle, John Leen, Aaron Slettenhaugh, Avrom Roy-Faderman, Michael Salem, Mohith Shrivastava and Wade Wegner



The Third Edition

I’m proud to announce the third edition of my book has now been released. Back in March this year I took the plunge to start updates to many key areas and add two brand new chapters. In the 2 years and 8 months since the last edition there have been several platform releases and an increasing number of new features and innovations, making this the biggest update ever! This edition also embraces the platform's rebranding to Lightning – hence the book is now entitled Salesforce Lightning Platform Enterprise Architecture.

You can purchase this book direct from Packt or, of course, from Amazon among other sellers. As is the case every year, this book and many other awesome publications will be on sale at Salesforce events such as Dreamforce and TrailheaDX. Here are some of the key update highlights:

  • Automation and Tooling Updates
    Throughout the book, the SFDX CLI, Visual Studio Code and 2nd Generation Packaging are leveraged. While the whole book is certainly larger, certain chapters actually reduced in size, as steps previously describing clicks were replaced with CLI commands! At one point in time I was quite a master of Ant scripts and macros; they too have given way to built-in SFDX commands.
  • User Interface Updates
    Lightning Web Components is a relatively new kid on the block, but benefits greatly from its standards compliance, meaning there is plenty of fun to go around exploring industry tools like Jest in the Unit Testing chapter. All of the book's components have been rewritten to the Web Components standard.
  • Big Data and Async Programming
    Big data was once a future concern for new products; these days it is very much a concern from the very start. The book covers Big Objects and Platform Events more extensively, with worked examples including ingest and calculations driven by Platform Events and Async Apex Triggers. Event-driven architecture is something every Lightning developer should be embracing as the platform continues to evolve around more and more standard features that leverage it.
  • Integration and Extensibility
    I particularly enjoyed exploring the use of Platform Events as another means by which you can expose APIs from your packages, to support more scalable invocation of your logic and asynchronous plugins.
  • External Integrations and AI
    External integrations with other cloud services are a key part of application development and of the implementation of your solution; thus one of the two brand new chapters focuses on Connected Apps, Named Credentials, External Services and External Objects, with worked examples against existing services or sample Heroku-based services. Einstein has an ever-growing surface area across Salesforce products and the platform. While this topic alone is worth an entire book, I took the time in the second new chapter to enumerate Einstein from the perspective of developer and customer configurations. The Formula 1 motor racing theme continues with the ingest of historic race data that you can run AI over.
  • Other Updates
    Among other updates is a fairly extensive refresh of the CI/CD chapter, which still covers Jenkins but now leverages the Jenkins Pipeline feature to integrate the SFDX CLI. The Unit Testing chapter has also been extended with further thoughts on unit vs. integration testing and a focus on Lightning Web Component testing.

The above are just highlights for this third edition – you can see the full table of contents here. A massive thanks to everyone involved in providing the inspiration and support to make this third edition happen! Enjoy!



Custom Keyboard Shortcuts with Lightning Background Utilities

[Image: Custom Keyboard Shortcut component configuration]

As readers of my blog will know, I am a big fan of the rich features the Lightning Experience UI provides to developers. Having blogged several times about the amazing Utility Bar, I have been keen to explore new possibilities with the new Background Utility feature. These are utilities that have no UI, so they do not use up space in the Utility Bar. Instead, they sit in the background monitoring things like events generated by the user. One such documented use case is the possibility to monitor keyboard events! And so the Custom Keyboard Shortcut Component was born! This component effectively runs Flows based on keyboard shortcuts defined by the admin. More on this later…

[Image: standard Lightning Experience keyboard shortcuts]

You may or may not know that Lightning Experience already provides some standard keyboard shortcuts. Just press Cmd+/ (Mac) or Ctrl+/ (Windows) to get a nice summary of them!

However, per the standard shortcut documentation, it’s not possible to add custom ones. By using the new lightning:backgroundUtilityItem interface we can rectify this. This blog explains a basic hardcoded example component and also introduces an open source component (installable package provided) that links admin-defined keyboard shortcuts to Flows and certain navigation events.

In just a few lines of markup and JavaScript code you can get a basic example up and running.

<aura:component implements="lightning:backgroundUtilityItem" >
	<aura:handler name="init" value="{!this}" action="{!c.init}" />
</aura:component>

The component controller simply uses the standard addEventListener method. You can also inspect the keydown event properties to determine what keys are pressed, such as Shift or Control plus another key. This example simply determines if H is pressed and navigates to Home.

({
    init: function(component, event, helper) {
        // Listen for keydown events anywhere in Lightning Experience
        window.addEventListener('keydown', function(e) {
            // e.shiftKey, e.ctrlKey etc. can be inspected for modifier combinations
            if (e.key === 'H') {
                $A.get('e.force:navigateToURL').fire({ url: '/lightning/page/home' });
            }
        });
    }
})

Once deployed go to the App Manager under Setup and add the component to the Utility Items list and that’s it! Note that the component has a different icon indicating it’s a non-visual component. Neat!

Of course, I could not simply leave things there, so I set about making a more dynamic version. The configuration of the Custom Keyboard Shortcut component is shown at the top of this blog. It leverages the fact that when you configure a Utility Bar component, the App Manager inspects the component's .design file to understand what attributes the component needs the user to configure. At runtime, the controller logic then parses the 9 attributes containing the keyboard shortcuts entered by the user into an internal map that is used by the keyboard event handler to match actions against keyboard activity.

Once you have installed the component, either via a package install (admin friendly) or via sfdx force:source:deploy (devs), add the component within the App Manager to configure keyboard shortcuts.

Through configuration you can connect keyboard shortcuts to the following:-

  • Open a UI Flow in a modal popup
  • Run an Autolaunch Flow
  • Display popup messages communicating the actions taken by the flow
  • Navigate the user to the Home tab
  • Navigate the user to records created by the flow

Further details on configuring the component can be found in the README here. Finally, you may recall that I used a Background Utility in this year's Dreamforce presentation. In that case, it was using the new Streaming Component to listen to Platform Events. You can find the source code here.

Have fun!

 



Managing Dependency Injection within Salesforce

When developing within Salesforce, dependencies are formed in many ways – not just those made explicitly when writing code, but also those formed by using declarative tools, such as defining Actions and Layouts. This blog introduces a new open source library I have been working on called Force DI. The goal is to simplify and, more importantly, consolidate where and how you configure at runtime certain dependencies between Apex, Visualforce or Lightning component code.

Forming dependencies at runtime instead of explicitly during development can be very advantageous. So whether you are attempting to decompose a large org into multiple DX packages or building a highly configurable solution, hopefully, you will find this library useful!

So what does the DI bit stand for?

The DI bit in Force DI stands for Dependency Injection, which is a form of IoC (Inversion of Control). Both are well-established patterns for providing the runtime glue between two points, basically the bit in the middle. Let’s start with an Apex example. In order to use DI, you need to forgo the use of the “new” operator at the point where you want to do the injection. For example, consider the following code:-

PaymentEngine engine = new PayPal();

In the above example, you are explicitly expressing a dependency, which not only means you have to deploy or package all your payment engines together, but also that you have hardcoded a finite set you support and thus forgone extensibility. With Force DI you can instead write:-

PaymentEngine engine = (PaymentEngine) di_Injector.Org.getInstance(PaymentEngine.class);

How does it know which class to instantiate then?

What's happening here is that the Injector class is using binding configuration (also dynamically discovered) to find out which class to actually instantiate. This binding configuration can be admin controlled, packaged (e.g. a “PayPal Package”) and/or defined dynamically via code. Setting up binding config via code enables dynamic binding by reading other configuration (e.g. the user's payment preference) and binding accordingly.
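
As an illustration of that last point, a module could choose a binding at runtime based on such a preference. The following is a hypothetical sketch – the preference lookup and the exact fluent calls are assumptions; see the repo for the precise module API:

// Hypothetical sketch of a runtime-conditional binding
public class PaymentModule extends di_Module {
    public override void configure() {
        // Illustrative only - imagine this is read from the user's stored preference
        String preferredEngine = 'PayPal';
        bind(PaymentEngine.class.getName()).apex().to(preferredEngine);
    }
}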

The key goal of DI is that calling code does not concern itself with how an instance is obtained, only what it does with it. The following shows how a declarative binding is expressed via the library's Binding Custom Metadata Type:-

If this all seems a bit indirect, that’s the point! Because of this indirection, you can now choose to deploy/package other payment gateway implementations independently from each other as well as be sure that everywhere your other code needs a PaymentEngine the implementation is resolved consistently. For a more advanced OOP walkthrough see the code sample here.

Can this help me with other kinds of dependencies?

Yes! Let’s take the example of a Lightning Component used as an Action Override. Typically you would create a Lightning Component and associate it directly with the action override. However, this means that the object metadata, the action override and the Lightning code (as well as whatever is dependent on that) must travel around together – rather than, for example, in separate DX packages. It also means that if you want to offer different variations of this action, you would need to code all of that into the single component as well.

As before let’s review what the Lightning Component Action Override looks like without DI:-

<aura:component implements="lightning:actionOverride,force:hasSObjectName">
   <lightning:card title="Widget">
     <p class="slds-p-horizontal_small">Custom UI to Create a Widget ({!v.sObjectName})</p>
   </lightning:card>
</aura:component>

This component (and all its dependencies) would be directly referenced in the Action Override below:-

Now let us take a look at this again, but using the Lightning c:di_injector component in its place:-

<aura:component implements="lightning:actionOverride,force:hasSObjectName">
   <c:di_injector bindingName="lc_actionWidgetNew">
      <c:di_injectorAttribute name="sObjectName" value="{!v.sObjectName}"/>
   </c:di_injector>
</aura:component>

To make things clearer when reviewing Lightning Components in the org, the above component follows a generic naming convention, such as actionWidgetNew. It is this component that is bound to the Action Override, not the injected one, and the override now looks like this:-

The binding configuration looks like this:-

Finally, the injected Lightning Component widgetWizard looks like this:-

<aura:component>
   <aura:attribute name="sObjectName" type="String"/>
   <lightning:card title="Widget">
     <p class="slds-p-horizontal_small">Custom UI to Create a Widget ({!v.sObjectName})</p>
   </lightning:card>
</aura:component>

Note: You have the ability to pass context through to the bound Lightning Component just as the sObjectName attribute value was passed above. The c:injector component can be used in many other places such as Quick Actions, Lightning App Builder Pages, and Utility Bar. Check out this example page in the repo for another example.

What about my Visualforce page content can I inject that?

Visualforce used by Actions and in Layouts can be injected in much the same way as above, with a VF page acting as the injector proxy using the Visualforce c:di_injector component. We will skip showing what things looked like before DI, as they follow much the same general pattern as the Lightning Component approach.

The following example shows the layoutWidgetInfo page, which is again somewhat generically named to indicate it's an injector proxy and not a real page. It is this page that is referenced in the Widget object's layout:-

<apex:page standardController="Widget__c" extensions="di_InjectorController">
   <c:di_injector bindingName="vf_layoutWidgetInfo" parameters="{!standardController}"/>
</apex:page>

The following shows an alternative means to express binding configuration via code. The ForceApp3Module class defines the bindings for a module/package of code where the Visualforce component that actually implements the UI is stored. Note that the binding for vf_layoutWidgetInfo points to an Apex class in the controller, not the actual VF component to inject; the Provider inner class actually creates the specific component (via Dynamic Visualforce).

public class ForceApp3Module extends di_Module {

    public override void configure() {

        // Example named binding to a Visualforce component (via Provider)
        bind('vf_layoutWidgetInfo').visualforceComponent().to(WidgetInfoController.Provider.class);

        // Example SObject binding (can be used by trigger frameworks, see force-di-demo-trigger)
        bind(Account.getSObjectType()).apex().sequence(20).to(CheckBalanceAccountTrigger.class);

        // Example named binding to a Lightning component
        bind('lc_actionWidgetManage').lightningComponent().to('c:widgetManager');
    }
}

NOTE: The above binding configuration module class is itself injected into the org-wide Injector by a corresponding custom metadata Binding record here. You can also see other bindings being configured in the above example; see below for more on this.

The actual implementation of the injected Visualforce Component widgetInfo looks like this:-

<apex:component controller="WidgetInfoController">
  <apex:attribute name="standardController"
     type="ApexPages.StandardController"
     assignTo="{!StandardControllerValue}" description=""/>
  <h1>Success I have been injected! {!standardController.Id}</h1>
</apex:component>

Decomposition Examples

The examples shown above, and others, are contained in the sample repo. Each of the root package directories – force-app-1, force-app-2 and force-app-3 – helps illustrate how the point of injection vs. the runtime binding can be split across the boundaries of a DX package, thus aiding decomposition. The force-di-trigger-demo (not shown below) also contains a sample trigger handler framework using the library's ability to resolve multiple bindings (to trigger handlers) in a given sequence, thus supporting the best practice of a single trigger per object.

Further Background and Features

I must confess, when I started to research Java dependency injection (mainly via Java Guice) I was skeptical as to how much I could get done without custom annotation and reflection support in Apex. However, I am pretty pleased with the result and how it has woven in with features like Custom Metadata Types, and with how the Visualforce and Lightning Component injectors have turned out. I plan to write future wiki pages on the associated GitHub repo to share more details on the Force DI API. Meanwhile, here is a rundown of some of the more advanced features.

  • Provider Support
    Injectors by default only return one instance of the bound object – hence getInstance. Bindings that point to a class implementing the Provider interface (see inner interface) can override this, which also allows for the construction of classes that do not have default constructors or types not supported by Type.forName. This feature also works in conjunction with the ability to pass a parameter via the Apex Injector, e.g. Injector.Org.getInstance(PaymentEngine.class, someData) – see the sketch after this list.
  • Parameters
    Each of the three Injectors permits the passing of parameter/context information into the bound class or component. The examples above illustrate this.
  • Modules, Programmatic Binding Configuration and Injector Scopes
    Binding Modules group programmatic bindings and allow you to hook programmatically into the initialization of the Injector. Modules use a fluent-style interface to express bindings very clearly. The force-app-3 package in the repo uses this approach to define the bindings shown in the VF example above. You can also take a look at a worked example here of how local (one-off) Injectors can be used, and here for a more complex OO example of how conditional bindings work.
  • StandardController Passthrough
    For Visualforce Component injections, the framework's parameter-passing capabilities support passing the instance of the StandardController from the hosting page through into the injected component, as can be seen in the example above.
  • Binding Discovery by SObject vs Name
    The examples above utilize single bindings by a unique name. However, it is becoming quite common to adapt trigger frameworks to support DI and thus allow a single trigger to dynamically reach out to one or more handlers (perhaps installed in separate DX packages). This example shows how Force DI could be used in such a scenario.
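
As an example of the Provider feature described in the first bullet, an implementation might look roughly like this – PaymentData and the PayPal constructor are assumptions, and the exact Provider interface shape should be checked against the library:

// Sketch of a Provider that constructs a class without a default constructor
public class PayPalProvider implements di_Binding.Provider {
    public Object newInstance(Object params) {
        // params carries context passed via getInstance(PaymentEngine.class, someData)
        return new PayPal((PaymentData) params);
    }
}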

Conclusion

This blog has hopefully whetted your appetite to learn more! If so, head over to the repo and have a look through the samples from this blog and others. My next step is to wrap this up in a DX package to make it easier to get your hands on; for now, download the repo and deploy via DX. I am also keen to explore what other aspects of Java Guice might make sense, such as the Linked Bindings feature.

Meanwhile, I would love feedback on the sample code and library thus far. Last but not least, I would like to give a shout out to John Daniel and Doug Ayers for their great feedback during the initial development of the library and this blog. Enjoy!