Andy in the Cloud

From BBC Basic to Force.com and beyond…



Building Beyond What You Know: Apex Extensibility

Extensibility is a means for others to extend your application's operations, for customisation purposes, in a clear and defined way – it's a key consideration for supporting unique customisation use cases, both known and unknown. It's an approach many Salesforce products leverage, an Apex Trigger being the most obvious example – though limited to data manipulation use cases – but what about your UIs and processes?

This blog takes a look at two approaches to providing extensibility points within application logic, including automatically discovering compatible integrations – making configuration easier and less error-prone for admins. For ease, our use case is inspired by the calculator app on the Mac, which allows you to pick modes suitable for different user types – imagine you have built a basic Salesforce calculator app; how do you ensure it can be extended further after you deliver it?

The first approach is via Apex interfaces; the second helps Admins extend your code with no-code tools via Actions (created via Flow). There are many places and ways in which configuration is performed; here we will explore customising Lightning App Builder to render dynamic drop-down lists instead of the traditional input fields when configuring your Calculator Lightning Web Component's properties. Finally, I created a small helper class that encapsulates the discovery logic – should you wish to take this further it might be helpful – and it comes complete with test coverage as well.

Apex Interfaces and Discovery

The principle here is straightforward: first, identify places where you want to allow extensibility – for example calculation, validation or display logic – and then define the information exchange required via an Apex Interface. Depending on where your calculator is being used, you might use Custom Metadata Types or other configuration means, such as configuration data stored in Flows and Lightning Page metadata. In the latter two cases Salesforce tools also offer extensibility points to allow custom UIs to be rendered. Take the following Apex Interface and implementations:

// A means to add new buttons to a web calculator
public interface ICalculatorFunctions {

    // Declare the buttons    
    List<CalculatorButton> getButtons();

    // Do the relevant calculations
    Object calculate(CalculatorButton button, CalculatorState state);
}

// A customisation to add scientific calculations to the calculator 
public class ScientificCalculator implements ICalculatorFunctions {

    // Additional buttons to display
    public List<CalculatorButton> getButtons() {
        List<CalculatorButton> buttons = new List<CalculatorButton>();        
        // Row 1: Parentheses and memory functions
        buttons.add(new CalculatorButton('(', 'function', 'btn-function'));
        buttons.add(new CalculatorButton(')', 'function', 'btn-function'));
        buttons.add(new CalculatorButton('mc', 'memory', 'btn-memory'));
        buttons.add(new CalculatorButton('m+', 'memory', 'btn-memory'));
        buttons.add(new CalculatorButton('m-', 'memory', 'btn-memory'));
        // ... remaining buttons
        return buttons;
    }

    // ...
}

// A customisation to add developer calculations to the calculator 
public class DeveloperCalculator implements ICalculatorFunctions {

    public List<CalculatorButton> getButtons() {
        // ...
    }

    // ...
}

Putting aside for the moment how additional implementations are configured, the following is a basic way to dynamically create an instance of a known implementation of the ICalculatorFunctions interface.

ICalculatorFunctions calculatorFunctions = 
    (ICalculatorFunctions) Type.forName('ScientificCalculator').newInstance();
List<CalculatorButton> buttons = calculatorFunctions.getButtons();

Of course, in reality ScientificCalculator is not hard coded as shown above; some form of configuration storage is used to let Admins configure the specific class name – typically a string field that stores it. In the example in this blog, our Calculator Lightning Web Component's property configuration is stored within the Lightning page the component is placed on.
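
As an illustration, here is a minimal sketch of reading such configuration, assuming a hypothetical Custom Metadata Type CalculatorExtension__mdt with an ImplementingClass__c text field (both names are mine for illustration, not from the sample code):

// CalculatorExtension__mdt and ImplementingClass__c are assumed names, for illustration only
CalculatorExtension__mdt config = CalculatorExtension__mdt.getInstance('Scientific');
ICalculatorFunctions calculatorFunctions =
    (ICalculatorFunctions) Type.forName(config.ImplementingClass__c).newInstance();
List<CalculatorButton> buttons = calculatorFunctions.getButtons();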

Using a simple text field for the property basically asks Admins to remember or search for class names, which is not the best of experiences, and so custom configuration UIs can be built to perform the searching and discovery for them. Key to this, in the case of an Apex interface, is the ApexTypeImplementor object, which allows you to dynamically query for implementations of ICalculatorFunctions. The following SOQL query will return the names of the two classes above, ScientificCalculator and DeveloperCalculator.

SELECT Id, ClassName, ClassNamespacePrefix, InterfaceName, InterfaceNamespacePrefix,
       IsConcrete, ApexClass.IsValid, ApexClass.Status
FROM ApexTypeImplementor
WHERE InterfaceName = 'ICalculatorFunctions'
  AND IsConcrete = true
  AND ApexClass.IsValid = true
  AND ApexClass.Status = 'Active'
WITH USER_MODE
ORDER BY ClassName
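
To give a feel for consuming the query results, here is a small sketch that instantiates each discovered implementation (remembering to qualify namespaced classes with their prefix):

// Sketch: instantiate each discovered implementation of the interface
for (ApexTypeImplementor impl : [
        SELECT ClassName, ClassNamespacePrefix
        FROM ApexTypeImplementor
        WHERE InterfaceName = 'ICalculatorFunctions'
          AND IsConcrete = true
        WITH USER_MODE]) {
    // Qualify with the namespace prefix if present
    String qualifiedName = String.isBlank(impl.ClassNamespacePrefix)
        ? impl.ClassName
        : impl.ClassNamespacePrefix + '.' + impl.ClassName;
    ICalculatorFunctions instance =
        (ICalculatorFunctions) Type.forName(qualifiedName).newInstance();
    System.debug(qualifiedName + ' adds ' + instance.getButtons().size() + ' buttons');
}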

You can read more about ApexTypeImplementor and various usage considerations here. In your application, you can choose where to place your configuration UIs – your own custom UI, or one already provided by the platform. In the latter case here, we are providing a Calculator LWC component to administrators and wish to offer a means to extend it with additional Apex code using Apex Interfaces. We could simply expose a text property to allow the administrator to specify which implementing Apex class name to use. Fortunately, we can do better than this and annotate the LWC property with another Apex class that dynamically retrieves a list of only the Apex classes implementing that interface, as shown below.

The following shows the LWC component metadata configuration and the Apex class, using the ApexTypeImplementor object as above, that only shows Apex classes implementing the ICalculatorFunctions interface. The source code for this component is included in the GitHub repository linked below. By using the datasource attribute on the targetConfig property element, Salesforce will render a drop-down list instead of a simple text box.

<?xml version="1.0" encoding="UTF-8"?>
<LightningComponentBundle xmlns="http://soap.sforce.com/2006/04/metadata">
    <apiVersion>64.0</apiVersion>
    <isExposed>true</isExposed>
    <targets>
        <target>lightning__RecordPage</target>
        <target>lightning__AppPage</target>
        <target>lightning__HomePage</target>
        <target>lightning__UtilityBar</target>
    </targets>
    <targetConfigs>
        <targetConfig targets="lightning__RecordPage,lightning__AppPage,lightning__HomePage,lightning__UtilityBar">
            <property 
                name="usage" 
                type="String" 
                label="Calculator Usage" 
                description="Select the calculator type to determine available buttons"
                datasource="apex://CalculatorUsagePickList"
                default=""
            />
        </targetConfig>
    </targetConfigs>
</LightningComponentBundle>

The following code implements the CalculatorUsagePickList class referenced above by extending the VisualEditor.DynamicPickList base class to dynamically discover and render the available implementations of the interface. It uses a small library class, Extensions, I built for this blog that wraps the SOQL shown above for the ApexTypeImplementor object. It also allows for a richer, more type-safe way to specify the interface and formats the results in a way that helps make the class names more readable.

public class CalculatorUsagePickList extends VisualEditor.DynamicPickList {

    public override VisualEditor.DataRow getDefaultValue() {
        VisualEditor.DataRow defaultValue = new VisualEditor.DataRow('Basic Calculator', '');
        return defaultValue;
    }
    
    public override VisualEditor.DynamicPickListRows getValues() {
        VisualEditor.DynamicPickListRows picklistValues = new VisualEditor.DynamicPickListRows();
                
        // Use Extensions.find to get all ICalculatorFunctions implementations
        Extensions extensions = new Extensions();
        Extensions.ApexExtensionsFindResults results = 
                 extensions.find(ICalculatorFunctions.class);

        // Add basic calculator option (no additional buttons) and any dynamically discovered implementations
        VisualEditor.DataRow basicOption = new VisualEditor.DataRow('Basic Calculator', '');
        picklistValues.addRow(basicOption);
        List<Extensions.ApexExtensionsFindResult> names = results.toNames();        
        for (Extensions.ApexExtensionsFindResult name : names) {
            VisualEditor.DataRow value = 
                  new VisualEditor.DataRow(name.label, name.name);
            picklistValues.addRow(value);
        }
        
        return picklistValues;
    }
}
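
As an aside, classes like this are easy to unit test – the following is a minimal sketch of the kind of test you might write (the helper in the repository has its own coverage):

@IsTest
private class CalculatorUsagePickListTest {
    @IsTest
    static void getValuesIncludesBasicOption() {
        VisualEditor.DynamicPickListRows rows =
            new CalculatorUsagePickList().getValues();
        // The 'Basic Calculator' row is always added, so at least one row is expected
        System.assert(rows.size() >= 1);
    }
}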

Of course, Apex is not the only way to implement logic on the Salesforce platform; we can also use Flow, and although slightly different in approach, the above principles can also be applied to allow users to customise your application logic with Flow as well – just like other platform features offer.

Actions and Discovery

Actions are now a standard means of defining reusable tasks for many platform tools – with Salesforce providing many standard actions to access data, send emails, perform approvals and more. The ability for Admins to create custom actions via Flow is the key means of using no-code to extend other Flows, Lightning UIs and Agentforce. It is also possible to have your Salesforce applications offer Flow extensibility by using the Apex Invocable API. The following Apex code shows how to invoke a Flow action from Apex – once again, though hard coded here, imagine the Flow name comes from a configuration store, a Custom Metadata Type or property configuration as shown above.

Invocable.Action action = Invocable.Action.createCustomAction('Flow', 'HelloWorld');
action.setInvocationParameter('Name', 'Codey');
List<Invocable.Action.Result> results = action.invoke();
System.debug(results[0].getOutputParameters().get('Message'));
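
Action invocations report success per input, so it is worth checking the outcome before trusting the outputs – a quick sketch:

// Sketch: check the invocation outcome before using the output parameters
Invocable.Action.Result result = results[0];
if (result.isSuccess()) {
    System.debug(result.getOutputParameters().get('Message'));
} else {
    for (Invocable.Action.Error error : result.getErrors()) {
        System.debug(error.getCode() + ': ' + error.getMessage());
    }
}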

If you have used platform tools like Flow, Lightning App Builder, Buttons, Agent Builder, and more, you will notice that they allow Admins to search for actions – there is no need to remember action names. This can be achieved in your own configuration UIs by using the Standard and Custom Actions list APIs. The snag here is that this API is not directly available to Apex; you have to call the Salesforce REST API from Apex.

String orgDomainUrl = URL.getOrgDomainUrl().toExternalForm(); // Org Domain scoped callouts do not require named credentials
String sessionId = UserInfo.getSessionId();
HttpRequest req = new HttpRequest();
// actionType is 'standard' or a custom action type such as 'flow'
req.setEndpoint(actionType == 'standard'
    ? orgDomainUrl + '/services/data/v64.0/actions/standard'
    : orgDomainUrl + '/services/data/v64.0/actions/custom/' + actionType);
req.setMethod('GET');
req.setHeader('Authorization', 'Bearer ' + sessionId);
req.setHeader('Content-Type', 'application/json');
Http http = new Http();
HttpResponse res = http.send(req);

This returns the following response:

{
  "actions": [
    {
      "label": "Hello World",
      "name": "HelloWorld",
      "type": "FLOW",
      "url": "/services/data/v64.0/actions/custom/flow/HelloWorld"
    }
  ]
}
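
Back in Apex, a minimal sketch of deserializing this response into the label/name pairs a drop-down needs:

// Sketch: parse the REST response into label/name pairs
Map<String, Object> payload =
    (Map<String, Object>) JSON.deserializeUntyped(res.getBody());
for (Object entry : (List<Object>) payload.get('actions')) {
    Map<String, Object> action = (Map<String, Object>) entry;
    System.debug(action.get('label') + ' => ' + action.get('name'));
}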

I also integrated this approach into the Extensions helper I used in the Apex Interface example:

// Find all Flow custom actions
List<Extensions.ActionExtensionsFindResult> flowActions = 
    new Extensions().find(Extensions.ActionTypes.FLOW);
System.debug('Found ' + flowActions.size() + ' Flow custom actions:');
for (Extensions.ActionExtensionsFindResult action : flowActions) {
    System.debug('Action: ' + action.label);
    System.debug('  Name: ' + action.name);
    System.debug('  Qualified Label: ' + action.labelQualified);
    System.debug('  Action Type: ' + action.action.getType());
    System.debug('---');
}

Apex Interfaces vs Actions

Both of these approaches allow you to invoke logic written in either code or no-code from within your Apex code – but which one should you use? Certainly, performance considerations are a key factor, especially if the code you're adding extensibility to is deep in your core logic and/or tied to processes that operate in bulk or in the background against large volumes of data. Another factor is the information being exchanged: is it simple native values (numbers, strings) and lists, or more complex nested structures? Basically, the extensibility context plays a role in your choice – as does the use case.

In general, if you’re concerned about performance (trigger context included here) and/or the use case may involve more than moderate calculations/if/then/else logic, I would go with Apex interfaces. Actions (typically implemented in Flow) can offer an easy way for admins to customize your UI logic, search results, add new button handling, or inject additional content on your page/component. Also worth keeping mind, Actions do come in other forms such as those from Salesforce; even Agents are Actions – so simply allowing Admins to reuse Standard Actions within your logic is a potential value to consider – and might be more optimal than them attaching to record changes for example.

Types of Extensibility, Motivations and Value

Carefully consider and validate extensibility use cases before embedding them in your Apex code; in some cases, an admin may find it more natural to use Flow and/or Lightning App Builder to orchestrate a brand-new alternative UI/process to the one you provide rather than extend it from within. By reusing existing objects and/or Apex Invocable actions, you are effectively building around your application logic vs. extending it from within, as per the patterns above. Both patterns are valid, though.

You might also wonder how important providing extensibility is to your users – especially if you have not been asked to include it. I once worked on an API enablement initiative with a Product Manager who had a strong affinity for providing integration facilities as standard. In a prior software purchasing role, they recognized the value as a form of insurance, as they could always build around or extend the application logic if they later found a feature gap.

My experience has also given me an appreciation that strong ecosystems thrive on customization abilities, and strong ecosystems strengthen the value of an offering – allowing influencers, customers, and partners to innovate further. And in case you're wondering whether this is just of interest to ISVs building AppExchange packages, the answer is no; it's just as important when building internal solutions – internal ecosystems and ease of customization still matter here, especially in larger businesses.

Here is a GitHub repository with the sample code and Extensions helper class referenced in this blog. Let me know your thoughts in the comments, and what ideas this prompted for your solutions!



Improving User Response Time with Heroku AppLink

An app is often judged by its features, but equally important is its durability, reliability, and predictability in the tasks it performs – especially as a business grows; without these, you risk frustrating users with unpredictable response times or, worse, random timeouts. As Apex developers we can reach for Queueables and Batch Apex for more power – though use of these can also be required purely to work around the lower interactive governor limits – making programming interactive code more complex. I believe you should still include Apex in your Salesforce architecture considerations – however, now we have an additional option to consider! This blog revisits Heroku AppLink and how it can help – without having to move wholesale away from Apex as your primary language!

This blog comes with full source code and setup instructions here.

Why Heroku AppLink?

In my prior blog I covered Five ways Heroku AppLink Enhances Salesforce Development Capabilities – if you have not read that and need a primer, please check it out. Heroku AppLink has flexible points of integration with Salesforce; among them is a way to stay within a flow of control driven by Apex code (or Flow for that matter), yet seamlessly offload certain code execution to Heroku and, once complete, revert back to Apex control. In contrast to Apex async workloads, this allows code to run immediately and uninterrupted until complete. In this mode there is no competing with Apex CPU, heap, or batch chunking constraints. As a result the overall flow of execution can be simpler to design and completion times are faster, largely only impacted by org data access times (no escaping slow Trigger logic). For the end user and overall business, the application scales better, is more predictable and timely – and critically, grows more smoothly in relation to business data volumes.

Staying within the Apex flow of control allows you to leverage existing investments and skills in Apex, while hooking into additional skills and Heroku's more performant compute layer when needed. All while maintaining the correct flow of the user identity (including their permissions) and, critically, without leaving the Salesforce (inclusive of Heroku) DX tool chains and overall fully managed services. The following presents two examples: one expanding what can be done in an interactive (synchronous) use case, and the second moving to a full background (asynchronous) use case.

Improving Interactive Tasks – Synchronous Invocation

In this interactive (synchronous) example we are converting an Opportunity to a Quote – a task that can, depending on discount rules, the size of the opportunity and additional regional tweaks, become quite compute heavy – sometimes hitting CPU or heap limits in Apex. The sequence diagram below illustrates the flow of control from the User, through Apex, to Heroku and back again. As always full code is supplied, but for now let's dig into the key code snippets below.

We start out with an Apex Controller that is attached to the "Create Quote" LWC button on the Opportunity page. This Apex Controller calls the conversion logic exposed via Heroku AppLink (in this case written in Node.js – more on this later) – and waits for a response before returning control back to Lightning Experience to redirect the user to the newly created Quote. As you can see, the HerokuAppLink namespace contains dynamically generated types for the service.

    @AuraEnabled(cacheable=false)
    public static QuoteResponse createQuote(String opportunityId) {
        try {
            // Create the Heroku service instance
            HerokuAppLink.QuoteService service = new HerokuAppLink.QuoteService();            
            // Create the request
            HerokuAppLink.QuoteService.createQuote_Request request = 
               new HerokuAppLink.QuoteService.createQuote_Request();
            request.body = new HerokuAppLink.QuoteService_CreateQuoteRequest();
            request.body.opportunityId = opportunityId;    
            // Call the Heroku service
            HerokuAppLink.QuoteService.createQuote_Response response = 
               service.createQuote(request);            
            if (response != null && response.Code200 != null) {
                QuoteResponse quoteResponse = new QuoteResponse();
                quoteResponse.opportunityId = opportunityId;
                quoteResponse.quoteId = response.Code200.quoteId;
                quoteResponse.success = true;
                quoteResponse.message = 'Quote generated successfully';                
                return quoteResponse;
            } else {
                throw new AuraHandledException('No response received from quote service');
            }            
        } catch (HerokuAppLink.QuoteService.createQuote_ResponseException e) {
            // Handle specific Heroku service errors
            // ...
        } catch (Exception e) {
            // Handle any other exceptions
            throw new AuraHandledException('Error generating quote: ' + e.getMessage());
        }
    }

The Node.js logic (shown below) to convert the quote uses the Fastify library to expose the code via an HTTP endpoint (secured by Heroku AppLink). In the generateQuote method the Heroku AppLink SDK is used to access the Opportunity records and create the Quote records – notably in one transaction via its Unit Of Work interface. Again, it is important to note that none of this requires handling authentication – that's all done for you – and just like Apex (when you apply USER_MODE), the SOQL and DML have permissions applied.

// Synchronous quote creation
  fastify.post('/createQuote', {
    schema: createQuoteSchema,
    handler: async (request, reply) => {
      const { opportunityId } = request.body;
      try {
        const result = await generateQuote({ opportunityId }, request.salesforce);
        return result;
      } catch (error) {
        reply.code(error.statusCode || 500).send({
          error: true,
          message: error.message
        });
      }
    }
  });
//
// Generate a quote for a given opportunity
// @param {Object} request - The quote generation request
// @param {string} request.opportunityId - The opportunity ID
// @param {import('@heroku/applink').AppLinkClient} client - The Salesforce client
// @returns {Promise<Object>} The generated quote response
//
export async function generateQuote (request, client) {
  try {
    const { context } = client;
    const org = context.org;
    const dataApi = org.dataApi;
    // Query Opportunity to get CloseDate for ExpirationDate calculation
    const oppQuery = `SELECT Id, Name, CloseDate FROM Opportunity WHERE Id = '${request.opportunityId}'`;
    const oppResult = await dataApi.query(oppQuery);
    if (!oppResult.records || oppResult.records.length === 0) {
      const error = new Error(`Opportunity not found for ID: ${request.opportunityId}`);
      error.statusCode = 404;
      throw error;
    }    
    const opportunity = oppResult.records[0].fields;
    const closeDate = opportunity.CloseDate;
    // Query opportunity line items
    const soql = `SELECT Id, Product2Id, Quantity, UnitPrice, PricebookEntryId FROM OpportunityLineItem WHERE OpportunityId = '${request.opportunityId}'`;
    const queryResult = await dataApi.query(soql);
    if (!queryResult.records.length) {
      const error = new Error(`No OpportunityLineItems found for Opportunity ID: ${request.opportunityId}`);
      error.statusCode = 404;
      throw error;
    }
    // Calculate discount based on hardcoded region (matching createQuotes.js logic)
    const discount = getDiscountForRegion('NAMER'); // Use hardcoded region 'NAMER'
    // Create Quote using Unit of Work
    const unitOfWork = dataApi.newUnitOfWork();
    // Add Quote
    const quoteName = 'New Quote';
    const expirationDate = new Date(closeDate);
    expirationDate.setDate(expirationDate.getDate() + 30); // Quote expires 30 days after CloseDate
    const quoteRef = unitOfWork.registerCreate({
      type: 'Quote',
      fields: {
        Name: quoteName, 
        OpportunityId: request.opportunityId,
        Pricebook2Id: standardPricebookId, // assumed resolved earlier (elided from this snippet)
        ExpirationDate: expirationDate.toISOString().split('T')[0],
        Status: 'Draft'
      }
    });
    // Add QuoteLineItems
    queryResult.records.forEach(record => {
      const quantity = parseFloat(record.fields.Quantity);
      const unitPrice = parseFloat(record.fields.UnitPrice);
      // Apply discount to QuoteLineItem UnitPrice (matching createQuotes.js exactly)
      const originalUnitPrice = unitPrice;
      const calculatedDiscountedPrice = originalUnitPrice != null 
                                        ? originalUnitPrice * (1 - discount)
                                        : originalUnitPrice; // Default to original if calculation fails
      unitOfWork.registerCreate({
        type: 'QuoteLineItem',
        fields: {
          QuoteId: quoteRef.toApiString(),
          PricebookEntryId: record.fields.PricebookEntryId,
          Quantity: quantity,
          UnitPrice: calculatedDiscountedPrice
        }
      });
    });
    // Commit all records in one transaction
    try {
      const results = await dataApi.commitUnitOfWork(unitOfWork);
      // Get the Quote result using the reference
      const quoteResult = results.get(quoteRef);
      if (!quoteResult) {
        throw new Error('Quote creation result not found in response');
      }
      return { quoteId: quoteResult.id };
    } catch (commitError) {
      // Salesforce API errors will be formatted as "ERROR_CODE: Error message"
      const error = new Error(`Failed to create quote: ${commitError.message}`);
      error.statusCode = 400; // Bad Request for validation/data errors
      throw error;
    }
  } catch (error) {
    // ...
  }
}

This is a secure way to move from Apex to Node.js and back. Note certain limits still apply: the callout timeout is 120 seconds max (applicable when calling Heroku as above) – additionally, the Node.js code is leveraging the Salesforce API, so API limits still apply. Despite the 120-second timeout, you get practically unlimited CPU, heap, and the speed of the latest industry language runtimes – in the case of Java, compilation to the machine code level if needed!

The decision to use AppLink here really depends on identifying the correct bottleneck; if some Apex logic is bounded (constrained to grow) by CPU, memory, execution time, or even language, then this is a good approach to consider – without going off doing integration plumbing and risking security. For example, if you're doing so much processing in memory that you're hitting Apex CPU limits – then even with the 120-second callout limit to Heroku – the alternative Node.js (or other language) code will likely run much faster – keeping you in the simpler synchronous mode for longer as your compute and data requirements grow.

Improving Background Jobs – Asynchronous Invocation

When processing needs to operate over a number of records (user selected or filtered) we can apply the same expansion of the Apex control flow – by having Node.js do the heavy lifting in the middle and then, once complete, passing control back to Apex to complete user notifications, logging, or even further non-compute-heavy work. The diagram shows two processes; the first is the user interaction – in this case, selecting the records that Apex passes over to Heroku to enqueue a job to handle the processing. Heroku compute is your org's own compute, so it will begin execution immediately and run until it's done. Thus, in the second flow, we see the worker taking over, completing the task, and then using an AppLink Apex callback to send control back to the org, where a user notification is sent.

In this example we have a Create Quotes button that allows the user to select which Opportunities are to be converted to Quotes. The Apex Controller shown below takes the record Ids and passes them over to the Node.js code for processing in Heroku – however, in this scenario it also passes an Apex class that implements a callback interface – more on this later. Note you can also invoke via Apex Scheduled jobs or other means such as Change Data Capture.

    public PageReference generateQuotesForSelected() {
        try {
            // Get the selected opportunities
            List<Opportunity> selectedOpps = (List<Opportunity>) this.stdController.getSelected();
            // Extract opportunity IDs
            List<String> opportunityIds = new List<String>();
            for (Opportunity opp : selectedOpps) {
                opportunityIds.add(opp.Id);
            }
            // Call the Quotes service with an Apex callback
            try {
                HerokuAppLink.QuoteService service = new HerokuAppLink.QuoteService();
                HerokuAppLink.QuoteService.createQuotes_Request request = new HerokuAppLink.QuoteService.createQuotes_Request();
                request.body = new HerokuAppLink.QuoteService_CreateQuotesRequest();
                request.body.opportunityIds = opportunityIds;                
                // Create callback handler for notifications
                CreateQuotesCallback callbackHandler = new CreateQuotesCallback();                
                // Set callback timeout to 10 minutes from now (max 24hrs)
                DateTime callbackTimeout = DateTime.now().addMinutes(10);                
                // Call the service with callback
                HerokuAppLink.QuoteService.createQuotes_Response response = 
                   service.createQuotes(request, callbackHandler, callbackTimeout);                
                if (response != null && response.Code201 != null) {
                    // Show success message
                    // ....
                }
            } catch (HerokuAppLink.QuoteService.createQuotes_ResponseException e) {
                // Handle specific service errors
                // ...
            }            
        } catch (Exception e) {
            // Show error message
            //  ...
        }        
        return null;
    }

Note: You may have noticed the above Apex Controller is that of a Visualforce page controller and not LWC! Surprisingly, it seems (as far as I can see) this is still the only way to implement List View buttons with selection. Please do let me know of other native alternatives. Meanwhile, the previous button is a modern LWC-based button, but that is only supported on detail pages.

As before, you can see Fastify used to expose the Node.js code invoked from the Apex controller – except that it returns immediately to the caller (your Apex code) rather than waiting for the work to complete. This is because the work has been spun off, in this case into another Heroku process known as a Worker. This pattern means that control returns to the Apex Controller and to the user immediately while the process continues in the background. Note that the callbackUrl is automatically supplied by AppLink; you just need to retain it for later.

// Asynchronous batch quote creation
  fastify.post('/createQuotes', {
    schema: createQuotesSchema,
    handler: async (request, reply) => {
      const { opportunityIds, callbackUrl } = request.body;
      const jobId = crypto.randomUUID();
      const jobPayload = JSON.stringify({
        jobId,
        jobType: 'quote',
        opportunityIds,
        callbackUrl
      });
      try {
        // Pass the work to the worker and respond with HTTP 201 to indicate the job has been accepted
        const receivers = await redisClient.publish(JOBS_CHANNEL, jobPayload);
        request.log.info({ jobId, channel: JOBS_CHANNEL, payload: { jobType: 'quote', opportunityIds, callbackUrl }, receivers }, `Job published to Redis channel ${JOBS_CHANNEL}. Receivers: ${receivers}`);
        return reply.code(201).send({ jobId }); // Return 201 Created with Job ID
      } catch (error) {
        request.log.error({ err: error, jobId, channel: JOBS_CHANNEL }, 'Failed to publish job to Redis channel');
        return reply.code(500).send({ error: 'Failed to publish job.' });
      }
    }
  });

The following Node.js code runs in the Heroku Worker and performs the same work as the example above, querying Opportunities and using the Unit Of Work to create the Quotes. However, in this case when it completes it calls the Apex Callback handler. Note that you can support different types of callbacks – such as an error state callback.

/**
 * Handles quote generation jobs.
 * @param {object} jobData - The job data object from Redis.
 * @param {object} logger - A logger instance.
 */
async function handleQuoteMessage (jobData, logger) {
  const { jobId, opportunityIds, callbackUrl } = jobData;
  try {
    // Get named connection from AppLink SDK
    logger.info(`Getting 'worker' connection from AppLink SDK for job ${jobId}`);
    const sfContext = await sdk.addons.applink.getAuthorization('worker');      
    // Query Opportunities 
    const opportunityIdList = opportunityIds.map(id => `'${id}'`).join(',');
    const oppQuery = `
      SELECT Id, Name, AccountId, CloseDate, StageName, Amount,
             (SELECT Id, Product2Id, Quantity, UnitPrice, PricebookEntryId FROM OpportunityLineItems)
      FROM Opportunity
      WHERE Id IN (${opportunityIdList})`;
    // ... (query execution and dataApi setup via sfContext elided)
    logger.info(`Processing ${opportunities.length} Opportunities`);
    const unitOfWork = dataApi.newUnitOfWork();
    // Create the Quotes and commit Unit Of Work
    // ...
    const commitResult = await dataApi.commitUnitOfWork(unitOfWork);
    // Callback to Apex Callback class
    if (callbackUrl) {
      try {
        const callbackResults = {
          jobId,
          opportunityIds,
          quoteIds: Array.from(quoteRefs.values()).map(ref => {
            const result = commitResult.get(ref);
            return result?.id || null;
          }).filter(id => id !== null),
          status: failureCount === 0 ? 'completed' : 'completed_with_errors',
          errors: failureCount > 0 ? [`${failureCount} quotes failed to create`] : []
        };
        const requestOptions = {
          method: 'POST',
          body: JSON.stringify(callbackResults),
          headers: { 'Content-Type': 'application/json' }
        };
        const response = await sfContext.request(callbackUrl, requestOptions);
        logger.info(`Callback executed successfully for Job ID: ${jobId}`);
      } catch (callbackError) {
        logger.error({ err: callbackError, jobId }, `Failed to execute callback for Job ID: ${jobId}`);
      }
    }
  } catch (error) {
    logger.error({ err: error }, `Error executing batch for Job ID: ${jobId}`);
  }
}

Finally, the following code shows what the CreateQuotesCallback Apex Callback (provided in the Apex controller logic) is doing. For this example it's using custom notifications to notify the user via UserInfo.getUserId(). It can do this because it is running as the original user that started the work – also meaning that if it needed to do any further SOQL or DML, these run in the context of the correct user. Also worth noting is that the handler is bulkified – indicating that Salesforce will likely batch up callbacks if they arrive close together in time.

/**
 * Apex Callback handler for createQuotes asynchronous operations
 * Extends the generated AppLink callback interface to handle responses
 */
public class CreateQuotesCallback 
      extends HerokuAppLink.QuoteService.createQuotes_Callback {

    /**
     * Handles the callback response from the Heroku worker
     * Sends a custom notification to the user with the results
     */
    public override void createQuotesResponse(List<HerokuAppLink.QuoteService.createQuotes_createQuotesResponse_Callback> callbacks) {
        // Send custom notification to the user
        for (HerokuAppLink.QuoteService.createQuotes_createQuotesResponse_Callback callback : callbacks) {
            if (callback.response != null && callback.response.body != null) {
                Messaging.CustomNotification notification = new Messaging.CustomNotification();
                notification.setTitle('Quote Generation Complete');
                notification.setNotificationTypeId(notificationTypeId); // Id of a CustomNotificationType (resolution elided; see below)
                String message = 'Job ' + callback.response.body.jobId + ' completed with status: ' + callback.response.body.status;
                if (callback.response.body.quoteIds != null && !callback.response.body.quoteIds.isEmpty()) {
                    message += '. Created ' + callback.response.body.quoteIds.size() + ' quotes.';
                }
                if (callback.response.body.errors != null && !callback.response.body.errors.isEmpty()) {
                    message += ' Errors: ' + String.join(callback.response.body.errors, ', ');
                }                                
                notification.setBody(message);
                notification.setTargetId(UserInfo.getUserId());                    
                notification.send(new Set<String>{ UserInfo.getUserId() });
            }
        }
    }
}
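
The notificationTypeId used above is not shown being resolved; one approach (assuming a custom notification type with the developer name QuoteGenerationComplete has been created in Setup) is a simple query:

// Sketch: resolve the custom notification type Id used by the callback
// ('QuoteGenerationComplete' is an assumed developer name)
Id notificationTypeId = [
    SELECT Id FROM CustomNotificationType
    WHERE DeveloperName = 'QuoteGenerationComplete'
    LIMIT 1].Id;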

Configuration and Monitoring

In general the Node.js code runs as the user invoking the actions – which is very Apex-like and gives you confidence your code only does what the user is permitted to do. There is also an elevation mode that's out of the scope of this blog – but is covered in the resources listed below. The technical notes section in the README covers an exception to running as the user – whereby the asynchronous Heroku worker logic runs as a named user. Note that the immediate Node.js logic and Apex Callbacks both still run as the invoking user, so if needed you can do "user mode" work in those contexts. You can read more about the rationale for this in the README for this project.

Additionally there are subsections in the README that cover the technical implementation of Heroku AppLink asynchronous callbacks. Configuration for Heroku App Async Callbacks provides the OpenAPI YAML structure required for callback definitions, including dynamic callback URLs and response schemas that Salesforce uses to generate the callback interface. Monitoring and Other Considerations explains AppLink's External Services integration architecture, monitoring through the BackgroundOperation object, and the 24-hour callback validity constraint, with Platform Event alternatives for extended processing times or in-progress updates.

Summary

As always I have shared the code, along with a more detailed README file on how to set the above demos up for yourself. This is just one of many ways to use Heroku AppLink, others are covered in the sample patterns here – including using Platform Events to trigger Heroku workers and transition control back to Apex or indeed Flow. This Apex Callback pattern is unique to using Heroku AppLink with Apex and is not yet that deeply covered in the official docs and samples – you can also find more information about this feature by studying the general External Services callback documentation.

Finally, the most important thing here is that this is not a DIY integration like you may have experienced in the past – though I omitted the CLI commands here (you can see them in the README) – Salesforce and Heroku are taking on a lot more of the management now. And overall this is getting more and more "Apex-like", with user mode context explicitly available to your Heroku code. This blog was inspired by feedback on my last blog, so please keep it coming! There is much more to explore still – I plan to get more into the DevOps integration side of things and explore ways to automate the setup using the AppLink API.

Meanwhile, enjoy some additional resources!



Infinite Data Scrolling with Apex Cursors Beta

The infinite scrolling feature of the Lightning Datatable component allows a practically unlimited amount of data to be loaded incrementally as the user scrolls. This is a common web UI approach to load pages faster without consuming unnecessary amounts of database and compute resources retrieving records some users may never view. The current recommended approach to retrieve records is to use SOQL OFFSET and LIMIT – however, ironically, this approach is limited to a maximum of 2,000 records. Substituting this with the new Apex Cursors feature, as you can see in the screenshot below, we have already gone past this limit! Actually, the limit for Apex Cursors is 50 million records – that said, I would seriously question the sanity of a requirement to load this many! This blog gives an overview of the Apex Cursors feature and how it can be adapted to work with LWC components.

If you want to go ahead and deploy this demo, check out the GitHub repo here. Also please keep in mind that as this is a Beta release, Salesforce does not recommend use in production at this time.

What are Apex Cursors?

If you’re not familiar, the Apex Cursors feature (currently in Beta), it enables you to provide a SOQL statement to the platform and for it to return a means for you to incrementally fetch chunks of records from the result set – this feels similar to the way Batch Apex works – except that it’s much more flexible as you decide when to retrieve the records and, in fact, in any order or chunk size. The standard documentation and much of what you’ll read elsewhere online focuses on using it to drive your own custom Apex async workloads using Apex Queueable as an alternative to Batch Apex – however, because it’s just an Apex API, it can be used for other use cases such as the one featured in this blog.

Usage is simple; first, you create a cursor with Database.getCursor (or getCursorWithBinds), giving it your desired SOQL. Then, with the returned Cursor object, call the fetch method with the desired position and count. Unlike Batch Apex, you can actually go back and forth using the position parameter, as this is not an iterator interface. You can also determine the total record count via getNumRecords. Before we dive into the full demo code, let's use some basic Apex to explore the feature by querying 5,000 accounts.

Database.Cursor cursor = Database.getCursor(
      'SELECT Id, Name, Industry, Type, BillingCity, Phone ' +
      'FROM Account ' +
      'WHERE Name LIKE \'TEST%\' WITH USER_MODE ORDER BY Name');

System.debug('***** BEFORE FETCH *****');
System.debug('Total Records: ' + cursor.getNumRecords());
System.debug('Limit Queries: ' + Limits.getLimitQueries());
System.debug('Limit Query Rows: ' + Limits.getLimitQueryRows());
System.debug('Limit Aggregate Queries: ' + Limits.getLimitAggregateQueries());
System.debug('Limit Apex Cursor Rows: ' + Limits.getLimitApexCursorRows());
System.debug('Limit Fetch Calls On Apex Cursor: ' + Limits.getLimitFetchCallsOnApexCursor());

List<Account> accounts = cursor.fetch(1, 500);

System.debug('***** AFTER FETCH *****');
System.debug('Accounts Read: ' + accounts.size());
System.debug('Limit Queries: ' + Limits.getLimitQueries());
System.debug('Limit Query Rows: ' + Limits.getLimitQueryRows());
System.debug('Limit Aggregate Queries: ' + Limits.getLimitAggregateQueries());
System.debug('Limit Apex Cursor Rows: ' + Limits.getLimitApexCursorRows());
System.debug('Limit Fetch Calls On Apex Cursor: ' + Limits.getLimitFetchCallsOnApexCursor());

The preceding code gives us the following debug output:

DEBUG|***** BEFORE FETCH *****
DEBUG|Total Records: 5000
DEBUG|Limit Queries: 100
DEBUG|Limit Query Rows: 50000
DEBUG|Limit Aggregate Queries: 300
DEBUG|Limit Apex Cursor Rows: 50000000
DEBUG|Limit Fetch Calls On Apex Cursor: 10
DEBUG|***** AFTER FETCH *****
DEBUG|Accounts Read: 500
DEBUG|Limit Queries: 100
DEBUG|Limit Query Rows: 50000
DEBUG|Limit Aggregate Queries: 300
DEBUG|Limit Apex Cursor Rows: 50000000
DEBUG|Limit Fetch Calls On Apex Cursor: 10

The findings, at least for the Beta, show that retrieving or counting records does not count against the traditional limits for SOQL; all the common ones are untouched. However, before you get excited, this is not an alternative – it has its own limits! Some of these you can see above, which for the Beta do not seem to be getting updated – see further discussion here. The most important one, however, is not exposed via the Limits class but is documented here, which states you can only create 10,000 cursors per org per day. The other important aspect is that cursors can also span multiple requests, so long as you can find a way to persist them; however, they will get deleted after 48 hours. This is stated to align with the sister feature, Salesforce REST API Cursors.

Using Cursors with LWC Components

Cursors, when used in the context of a Queueable, are persisted in the class state. For LWC components – while they don't give an error – cursors are, at the time of this Beta, not serialisable between the LWC client and the Apex Controller; I suspect for security reasons, so I posted here to confirm. This however is not the end of the story, as we do have other forms of state management between Apex and LWC, specifically the Apex Session cache – as the Apex Controller code below demonstrates.

public with sharing class ApexCursorDemoController {
    
    @AuraEnabled(cacheable=false)
    public static LoadMoreRecordsResult loadMoreRecords(Integer offset, Integer batchSize) {
        try {
            Database.Cursor cursor = (Database.Cursor) Cache.Session.get('testaccounts');
            if(cursor == null) {
                cursor = Database.getCursor(
                         'SELECT Id, Name, Industry, Type, BillingCity, Phone FROM Account ' +
                         'WHERE Name LIKE \'TEST%\' ORDER BY Name', AccessLevel.USER_MODE);
                Cache.Session.put('testaccounts', cursor);
            }            
            LoadMoreRecordsResult result = new LoadMoreRecordsResult();
            result.records = cursor.fetch(offset, batchSize);
            result.offset = offset + batchSize;
            result.totalRecords = cursor.getNumRecords();
            result.hasMore = result.offset < result.totalRecords;
            return result;
        } catch (Exception e) {
            Cache.Session.remove('testaccounts');
            throw new AuraHandledException('Error loading records: ' + e.getMessage());
        }
    }
    
    public class LoadMoreRecordsResult {
        @AuraEnabled public List<Account> records;
        @AuraEnabled public Integer offset;
        @AuraEnabled public Boolean hasMore;
        @AuraEnabled public Integer totalRecords;
    }
}
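
One thing to keep in mind is that Cache.Session calls like those above go to the org's default Platform Cache partition, so one must be configured; you can also target a named partition explicitly – a sketch, assuming a partition named CursorDemo:

// Sketch: equivalent access via an explicitly named partition
// ('local.CursorDemo' is an assumed partition name)
Cache.SessionPartition partition = Cache.Session.getPartition('local.CursorDemo');
Database.Cursor cursor = (Database.Cursor) partition.get('testaccounts');
partition.put('testaccounts', cursor);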

Because the use of Apex Cursors is purely contained within the Apex Controller, the LWC component HTML and JavaScript are as per a traditional implementation of the Datatable component using the infinite scrolling feature – you can click here to see the code.

Thoughts and Community Findings

This feature is a welcome, modern addition to Apex and I look forward to the full GA that allows us to use it confidently in a production scenario. Here is a short summary of some usage guidelines to be aware of, and some very good points already raised by the community in the Apex Previews and Betas group.

  • Cursor Sharing and Security Considerations
    As you can see in the above, user mode is not the default, but is enabled per best practice via use of AccessLevel.USER_MODE. As it is possible (at least in the Beta) to share cursors, it is important to consider who/where else you share the cursor with, as the user and sharing rules could be different. In this case the above code uses the user's session cache, so it is explicitly scoped to the current user only. I suspect (see below) sharing, CRUD and FLS may be getting re-evaluated anyway on fetch – but just in case (or if you explicitly used system mode, or the default) – this is something to keep in mind. On the flip side, another use case to explore might indeed be that in some cases it is optimal to have a shared record set evaluated once and maintained across a set of users.
  • Partial Cached Results
    The result set appears to be cached when the cursor is created, but I suspect only the Ids, since when changing record field values the changes are seen in the results on refresh. When a record is deleted, however, it is not returned from the corresponding fetch call – so it's worth noting the count you ask for when calling fetch may not be what you get back – though the position indexing and total record count remain the same. Likewise, if you create a record, even if it matches the original criteria, it is not returned. These nuances and others are further discussed here – hopefully the GA documentation can formally confirm the behaviours.
  • Ability to Tidy up
    Given the 10k daily limit, while seemingly generous, it would be useful to be able to explicitly delete cursors as well. Suggestion posted here.
  • Just because you can show more data – does not mean you should
    What I mean by this is that just because you can now expose more data does not always mean you should – always consider the fields and rows your users really need, as you would with any list / data table view you're building – in most cases not having any filter is likely to be an anti-pattern. Additionally, don't forget Salesforce has a number of analytical tools that can help users further when moving through large amounts of data.

Additional Resources

Here is a list of useful resources I found:



Five ways Heroku AppLink Enhances Salesforce Development Capabilities

Over the years through this blog I have enjoyed covering various advancements in Salesforce APIs, Apex, Flow, and more recently, Agentforce. While I have featured Heroku quite a bit – despite it being a Salesforce offering – the reality has been that access to Heroku for a Salesforce developer has felt like plugging in another platform; not just because on the surface its DX is different from SFDX, but because in a more material sense it has not been integrated with the rest of the platform and its existing tools in the same way Apex and Flow are – requiring you to do the integration plumbing before you can access its value.

Now, with the new "free" Heroku AppLink add-on, Heroku has been tangibly integrated by Salesforce into the Salesforce platform – it, and code deployed to it, even sits under the Setup menu. So it is finally time to reflect on what Heroku brings to the party!

This blog starts a series on what this new capability means for Salesforce development. Choosing the best tool for the job is crucial for maximizing the holistic development approach the Salesforce platform offers. Apex, Flow, LWC, etc., are still important tools in your toolkit. In my next blog in this series, I’ll share hands-on content, but for now, let’s explore five reasons to be aware of what Heroku and Heroku AppLink can do for Salesforce development:

1. Seamlessly and Securely Attach Unlimited Compute to your Orgs

At times, certain automations or user interactions demand improved response times and/or the ability to handle increasing data volumes. While Apex and Flow have a number of options here, they are inherently constrained by the multi-tenant nature of the core platform that runs their respective logic. The core platform's first priority is a stable environment for all – thus, largely, we see the continued realities of the infamous governor limits. Going beyond some, though not all, of the governor limits that either stop or at least slow things down is now possible – and without leaving the Salesforce family of services.

You can deploy code to Heroku with git push heroku main, which works in much the same way as sf project deploy, to upload your code and run it; you then declaratively assign your compute needs, and attach (using the publish command) access to it for use within your Flow, Apex, LWC or Agentforce implementations – across as many orgs as you like, using the connect command.

Heroku supports both credit card (pay as you go) and contract billing for compute usage – with the smallest plan at $5 a month already able to run complex and lengthy compute tasks easily – though mileage of course varies by use case.

2. Tap into the world's most popular languages and frameworks within Salesforce

Salesforce has some history of embracing industry languages, such as Node.js for Lightning Web Components, and with that taps into wider skill-set pools as well as commercial and open-source web component libraries. With Heroku AppLink, this is now true for backend logic – and in fact, it extends language support to Python, .NET, Ruby, Java, and many more languages, all with their own rich communities, libraries, and frameworks. Does this mean I am suggesting you port all your Apex code to other languages? No – remember this is a best-tool-for-the-job mindset – so if what you need can be better served with existing skills, code, or libraries available in such languages, then with AppLink you can now tap into these while staying within the Salesforce services.

Note: Heroku AppLink presently provides additional SDK support only for Node.js and Python. That said, its API is available to any language – and is fully documented. Java samples included with AppLink illustrate how to access the API directly – along with existing Salesforce API libraries.

You may also think that with this flexibility comes more complexity; well, like the rest of Salesforce, Heroku keeps things powerful yet simple. Its buildpacks and the simple CLI command git push heroku main remove the heavy lifting of building, deploying, and scaling your code that would otherwise require skills in AWS, GCP, or other platforms – what's more, Heroku also curates the latest operating system versions and build tools for you.

3. More power to existing Apex, Flow and Agentforce investments

As we are practicing choosing the best tool for the job: for complex solutions it's typically not a case of one size fits all – that's why we have a spectrum of tools to choose from. While one solution, for example, might be mostly delivered through Flow, the most complex parts of it might depend on some code – and thus having interoperability between each approach is important.

Heroku AppLink draws on the use of platform actions – which have over the years become the de facto means to decompose logic/automations – allowing reusable logic to be built in code via Apex or declaratively in Flow. Now with Heroku AppLink, you can also effectively write actions in any of the aforementioned languages and, if needed, scale that code beyond traditional limits such as CPU timeout and heap – while benefiting from increased execution times.

What is also critical to such code is user context, so that data access honors the current user – both in terms of their object, field and sharing permissions, but also audit information retaining a trail of who did what and when to the data. Thus, Heroku AppLink has the ability to run code in Salesforce "user mode" – much like Apex and Flow – your SOQL and DML all operate in this mode; in fact, that's the default – no need to qualify it as with Apex. This approach follows the industry pattern of the Principle of Least Privilege – there is also a way to selectively elevate permissions as needed using permission sets.

4. Make External Integrations more Secure

Heroku is also known to the wider world as a PaaS (Platform-as-a-Service), providing easy-to-use compute, data and, more recently, AI services without the infrastructure hassle. This leads to Heroku customers building practically any application or service they desire – again, in any language they desire. For example, a web/mobile experience can be hosted along with the required data storage – both able to scale to global event needs. Heroku AppLink joins Heroku Connect to start a family of add-ons that also help such consumer-facing or even internal experiences tap into Salesforce org or even Data Cloud data – by managing the connection details securely in one place – alleviating the complexity of managing OAuth, JWT, certificates, etc.

5. Leverage additional data and AI services

If all your data resides within a Salesforce org or Data Cloud, Heroku AppLink provides an easy-to-use SDK that makes using the most popular APIs easy and even provides a long-time favorite of mine, Martin Fowler's Unit of Work pattern, over the relatively complex composite Salesforce APIs to manage multi-object updates within a single transaction.

Beyond this, you can also take advantage of Heroku Postgres to store additional data that does not need to be in the org but needs to be close at hand to your code – likewise, attach to data services elsewhere in AWS, DynamoDB for example. Heroku also provides new AI services that offer a set of simple-to-use AI tooling primitives on top of the latest industry LLMs. All these Heroku services exist with the same trust and governance as other Salesforce services, and thus leveraging them means you don't have to move data or compute outside of Salesforce, if that's something your business is particularly sensitive to.

Summary

Salesforce continues to bring new innovations to its no-code and code tools that are exciting but also broaden the burden of choice – and of making the right choice. Heroku AppLink has indeed added to the mix – expanding the classic question of Flow vs. Apex to: when to use Flow vs. Apex vs. Heroku?

I’ve noticed that the Flow vs. Apex debate is still strong at community events this year. When it comes to “code,” whether it’s Apex, Python, Java, or .NET—excluding Triggers, which AppLink doesn’t support—my opinion on no-code versus code remains the same – consider wisely use of one or both accordingly. In respect to coded needs, I still would still generally recommend Apex first, that is unless your project needs align with the points above – then it’s worth further discussion. Ultimately, it’s about finding a suitable mix instead with all three supporting actions, it’s easier to blend and evolve as needed.

As I hinted at the start of this blog, I plan to get into more hands-on blogs on Heroku AppLink and some reflections on ISV usage. Between these, I also have other Apex-related topics I want to explore – such as the new Apex Cursors feature. In the meantime, below are some useful links about Heroku AppLink available as of the time of this blog.

Thanks for reading, hope it was useful!


Leave a comment

Enhancing Agentforce Output with Rich Text Formatting

I have been working with Agentforce for a while, and as is typically the case, I find myself drawn to platform features that allow extensibility, and then my mind seems to spin as to how to extract the maximum value from them! This blog explores the Rich Text (a subset of HTML) output option to give your agents’ responses a bit more flair, readability, and even some graphical capability.

Agentforce Actions typically return data values that the AI massages into human-readable responses such as “Project sentiment is good”, “Product sales update: 50% for A, 25% for B, and 25% for C”, or “Project performance is exceeding expectations; well done!”. While these are informative, they could be more eye-catching and, in some cases, benefit from more visual alternatives. While we are now getting the ability to use LWCs in Agentforce for the ultimate control over both rendering and input, Rich Text is a middle-ground approach and does not always require code – albeit perhaps a bit of HTML or SVG knowledge and/or AI assistance is needed. You can achieve results like this in Agentforce chat.

Let’s start with a Flow example, as Flow also supports Rich Text through its Text Template feature. Here is an example of a Flow action that can be added to a topic, with instructions telling the agent to use it when presenting good news to the user – perhaps in collaboration with a project status query action.

In this next example, a Flow conditionally assigns an output from multiple Text Templates based on an input of negative, neutral or positive – perhaps used in conjunction with a project sentiment query action.

The Edit Text Template dialog allows you to create Rich Text with the toolbar or enter HTML directly. It’s using this latter option that we can enter the smiley SVG shown below:

<svg xmlns='http://www.w3.org/2000/svg' width='50' height='50'><circle cx='25' cy='25' r='24' fill='yellow' stroke='black' stroke-width='2'/><circle cx='17' cy='18' r='3' fill='black'/><circle cx='33' cy='18' r='3' fill='black'/><path d='M17 32 Q25 38 33 32' stroke='black' stroke-width='2' fill='none' stroke-linecap='round'/></svg>

For a more dynamic approach we can break out into Apex and use SVG once again to generate graphs, perhaps in collaboration with an action that retrieves product sales data.

The full Apex is stored in a Gist here, but in essence it’s doing this:

@InvocableMethod(
   label='Generate Bar Chart' 
   description='Creates a bar chart SVG as an embedded <img> tag.')
public static List<ChartOutput> generateBarChart(List<ChartInput> inputList) {
   ...
   String svg = buildSvg(dataMap);
   String imgTag = '<img src="data:image/svg+xml,' + svg + '"/>';
   return new List<ChartOutput>{ new ChartOutput(imgTag) };
}
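
For context, the invocable input and output wrappers might be shaped as follows – the field names here are illustrative, the actual definitions are in the Gist:

public class ChartInput {
    @InvocableVariable(label='Labels' required=true)
    public List<String> labels;
    @InvocableVariable(label='Values' required=true)
    public List<Decimal> values;
}

public class ChartOutput {
    @InvocableVariable(label='Chart Image HTML')
    public String chartHtml;
    public ChartOutput(String chartHtml) { this.chartHtml = chartHtml; }
}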

When building the above agent, I was reminded of a best practice shared recently by Salesforce MVP Robert Sösemann, which is to keep your actions small enough to be reused by the AI. This means that while I could have created an action solely for product sales that generated the graph, I was instead able to give the topic instructions to use the graph action whenever it detects data that fits its inputs. In this way, other actions can generate data, and the AI can use the graph rendering independently. As you can see below, there is a separation of concerns between actions that retrieve data and those that format it (effectively those that render Rich Text). By crafting the correct instructions, you can teach the AI to effectively chain actions together.

You can also use Prompt Builder based actions to generate HTML as well – something the amazing Alba Rivas has already covered very well in this video. I also captured the other SVG examples used in this Gist here. A word on security: SVG can contain code, so please make sure to only use SVG content you created yourself or obtained from a trusted source – of note, script within SVG embedded via an img tag is blocked by the browser, <img src="data:image/svg+xml,<svg>....</svg>"/>.

What’s next? Well, I am keen to explore the upcoming ability to use LWCs in Agentforce. This allows control over both how you request input from the user and how the results of actions are rendered – potentially enabling things like file uploads, live status updates and more! In the meantime, check this out from Avi Rai.

Meanwhile, enjoy!


1 Comment

New Book – Salesforce Platform Enterprise Architecture 4th Edition

It has been nearly 10 years since the first edition of my book, Force.com Enterprise Architecture, was published back in 2014, and clearly a lot has moved on since then – and not just the platform capabilities. Keeping pace with branding changes resulted in the second edition of the book being called Lightning Platform Enterprise Architecture. This latest 4th edition, Salesforce Platform Enterprise Architecture, has probably the most accurate and, I hope, enduring title! This 4th edition is also nearly twice the size of the first, coming in at 681 pages – so much so that its 16 chapters are now split over 4 parts to make digesting the book easier. As is tradition by now, this blog will cover some of the latest updates and additions and what to expect in general from the book.

Links: Amazon | Packt Publishing

Motivation and what to expect…

I first came to the platform with a lot of experience and understanding of architecture principles from other platforms, including leveraging platform agnostic patterns such as those of Martin Fowler. Applying patterns to the platform requires some nuance at times, and thus this book is about enterprise architecture considerations as applied to the Salesforce Platform – and much more in respect to its capabilities that accelerate traditional application development concerns and tasks for you, but can hinder if not considered upfront in your architecture design.

The book dedicates 3 of its 16 chapters to Apex code separation of concerns and how to lay out your code, while the rest covers the full spectrum of application concerns, such as designing for code and database performance, profiling, testing, data storage design, building APIs, extensibility and more – and, of course, how to write much less code by harnessing the declarative aspects. All this is driven throughout the book via a fictitious reference application known as FormulaForce, based on Formula1 motor racing – full source is included. Also included are numerous tips, tricks and info blocks peppered among the pages that highlight learnings from my own experiences, and latterly those of the many amazing reviewers.

As good as the platform is at abstracting a huge amount of the complexity of managing and securing modern day applications for you, there is no escaping having to fully understand good architecture principles that would, and do, in most cases apply elsewhere, such as query optimization and indexing.

If you are developing a lot on the Salesforce Platform, have a few years’ experience, and are passionate about creating applications that are enduring and empathetic to the platform itself – this book is for you. Read on for further highlights!

Is it for ISVs or anyone?

This edition is the first to split out the motivations of packaging in respect to internal distribution within your own company vs distributing a commercial solution on AppExchange. Regardless of which motivation you fit into, the chapters will call out which bits are relevant and which are not. Even if you are not into packaging at all right now, much of the content on Apex, Lightning, Flow, APIs etc. is of course agnostic to how you choose to distribute your application. Regardless of who you are sharing it with, the book gives you an appreciation of the advantages and limits of package based distribution, which in my view is a key aspect of your overall architecture considerations checklist.

Have the Apex coding patterns evolved given recent features?

The advent of user mode capabilities has had a big impact on the approach taken within the sample application of the book when it comes to using the Apex Enterprise Patterns Library (fflib), specifically in how the Unit of Work and Selector patterns are used when updating and querying for data respectively.

Additionally, the Domain pattern chapter now reflects on how Domain logic can, if desired, allow the developer to split Domain and Trigger logic into separate classes, and thus introduces a new pattern (SDCL) to capture this approach. That’s not to say the original model has gone – it is still very much present – but it felt appropriate to include something in the book that reflects how the library usage has evolved over the years. Thanks to my good friend John M. Daniel for pushing for this!

Apply skills in other languages and additional scaling options

A brand new chapter in the book is dedicated to how Heroku and Functions can be used to leverage existing skills, or indeed investments in open source or internal code, written in other languages such as Java or Node.js. These technologies also open up further scaling options. The chapter breaks down the approaches using the book’s sample application as a basis, continuing the Formula1 motor racing theme by introducing Web and API interfaces for race fans and vendors (an authentication scenario) to interact with the application backend APIs. Later in the book, the integration chapter doubles down further on authoring APIs via OpenAPI standards and how that simplifies integration with the platform.

Lightning Web Components and Lightning Experience expansion

Since the third book, all but one of the Aura components have been removed, as many only existed at the time as workarounds to the lack of LWC enabled integration points. Two large chapters are dedicated to the Lightning framework itself: how it handles separation of concerns, eventing and security, and its alignment with the industry Web Components standard. They complete with a section covering the ever expanding possibilities to extend vs build when thinking about your application’s user experience, regardless of whether that’s desktop, mobile or driven through the native UIs and tools such as Lightning Pages or Flow. And yes, I still love the Utility Bar integration the most – as you can see from the number of historic posts and tools on the topic here.

APIs, APIs and APIs…

Many years ago a product manager once thanked me for pushing an API strategy internally, since as a past buyer they appreciated first hand that APIs are effectively an insurance policy when it comes to unexpected feature gaps during or after implementation. Thus several of the chapters highlight the many Salesforce APIs in contextual ways.

Though it’s not until the integration and extensibility chapter that the book really doubles down on building your own application APIs – their versioning, naming and design, and the value they bring to your fellow developers, customers and/or partner ecosystems. External Services is, for me, one of the most impressive aspects of integrating with the Salesforce Platform, and it was my pleasure to update the book with an updated Heroku based API leveraging OpenAPI to get the full power of the feature – the two are a perfect combo!

And …

This blog is already in danger of becoming chapter sized itself, so I will leave it here with the above highlights – and before I close, share my deepest thanks to this edition’s reviewers John M. Daniel, Sebastiano Costanzo and Arvind Narasimhan, and foreword author Daniel J. Peter.

I will be popping up in the not too distant future on ApexHours (https://www.apexhours.com/) and perhaps other locations to talk in more depth about the book and likely other side topics of interest. I will update this blog as those resources arrive. Thank you all in the meantime – the Salesforce community is the best and remains very close to my heart. Enjoy!

AndyInTheCloud

P.S. Suffice to say this 4th edition is an evolution of the prior three books! I would like to also thank the following fine folks for their support as past reviewers and foreword authors!

Past Reviewers and Forewords:
Matt Bingham, Steven Herod, Matt Lacey, Andrew Smith, Rohit Arora, Karanraj Samkaranarayanan, Jitendra Zaa, Joshua Burk, Peter Knolle, John Leen, Aaron Slettenhaugh, Avrom Roy-Faderman, Michael Salem, Mohith Shrivastava and Wade Wegner


1 Comment

My Top 5 Apex APIs and Bonus

As readers of this blog will already have determined, I love APIs! So when I was approached by the team at 100 Days of Trailhead to share my Top 5 “things”, my mind fell to APIs. This is a short blog post because effectively the brief was a 10 minute video! Well, ok, I ran a bit over in the end, so it’s actually 15 minutes.

In the video I cover 5 Apex APIs and a bonus API at the end. Thank you to 100 Days of Trailhead for asking me to contribute, and please do check out all the other amazing content they have on their website! You can watch the video below (or link here) and find full source code in this GitHub repository.

https://100daysoftrailhead.com/
/**
 .----------------. 
| .--------------. |
| |   _______    | |
| |  |  _____|   | |
| |  | |____     | |
| |  '_.____''.  | |
| |  | \____) |  | |
| |   \______.'  | |
| |              | |
| '--------------' |
 '----------------' 
 */

//
// Firing Platform Events from Batch Apex
//
public with sharing class Five
    implements Database.Batchable<SObject>, Database.RaisesPlatformEvents {
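
Implementing the Database.RaisesPlatformEvents marker interface causes any unhandled exception in the batch job to publish a BatchApexErrorEvent that you can subscribe to. As a minimal sketch, a subscriber might look like this (the trigger name here is illustrative):

// Hypothetical subscriber - receives events when the batch job above fails
trigger OnBatchApexError on BatchApexErrorEvent (after insert) {
    for (BatchApexErrorEvent evt : Trigger.new) {
        // AsyncApexJobId, Message and StackTrace identify the failure
        System.debug(evt.AsyncApexJobId + ': ' + evt.Message);
    }
}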

 .----------------. 
| .--------------. |
| |   _    _     | |
| |  | |  | |    | |
| |  | |__| |_   | |
| |  |____   _|  | |
| |      _| |_   | |
| |     |_____|  | |
| |              | |
| '--------------' |
 '----------------' 

//
// Request Class - Request Id
//
Request reqInfo = Request.getCurrent();
String currentRequestId = reqInfo.getRequestId();


 .----------------. 
| .--------------. |
| |    ______    | |
| |   / ____ `.  | |
| |   `'  __) |  | |
| |   _  |__ '.  | |
| |  | \____) |  | |
| |   \______.'  | |
| |              | |
| '--------------' |
 '----------------' 
 
//
// Request Class - Quiddity
//
// OrderProcessing is an assumed common base type for the processors below
OrderProcessing requestOrderProcessor;
switch on Request.getCurrent().getQuiddity() {
    when SYNCHRONOUS {
        requestOrderProcessor = new TriggerOrderProcessing();
    }
    when BULK_API {
        requestOrderProcessor = new BulkAPIOrderProcessing();
    }
    when else {
        requestOrderProcessor = new BaseOrderProcessing();
    }
}

 .----------------. 
| .--------------. |
| |    _____     | |
| |   / ___ `.   | |
| |  |_/___) |   | |
| |   .'____.'   | |
| |  / /____     | |
| |  |_______|   | |
| |              | |
| '--------------' |
 '----------------' 

// 
// Send Notifications
//

// Look up the desired notification type ('NewOrders' is a placeholder name)
CustomNotificationType notificationType = [
    SELECT Id FROM CustomNotificationType WHERE DeveloperName = 'NewOrders'];

// newOrder is assumed to be an Order record in scope (e.g. from a trigger)
Messaging.CustomNotification notification = new Messaging.CustomNotification();
notification.setTitle('Amazing new order ' + newOrder.OrderNumber);
notification.setBody('Check out this new order that has just come in!');
notification.setNotificationTypeId(notificationType.Id);
notification.setTargetId(newOrder.Id);
notification.send(new Set<String> { UserInfo.getUserId() });


 .----------------. 
| .--------------. |
| |     __       | |
| |    /  |      | |
| |    `| |      | |
| |     | |      | |
| |    _| |_     | |
| |   |_____|    | |
| |              | |
| '--------------' |
 '----------------' 

Opportunity opp = new Opportunity();
opp.addError('Description', 'Error Message!'); 
List<Database.Error> errors = opp.getErrors();
System.assertEquals(1, errors.size());
System.assertEquals('Error Message!', errors[0].getMessage());
System.assertEquals('Description', errors[0].getFields()[0]);

/**
 * 
  ____                        
 |  _ \                       
 | |_) | ___  _ __  _   _ ___ 
 |  _ < / _ \| '_ \| | | / __|
 | |_) | (_) | | | | |_| \__ \
 |____/ \___/|_| |_|\__,_|___/                                             
 */

public class BonusFinalizer implements Finalizer {

    public void execute(FinalizerContext ctx) {
        // Runs even if the Queueable job failed or exceeded limits
        System.debug('Job: ' + ctx.getAsyncApexJobId());
        System.debug(ctx.getResult()); // SUCCESS or UNHANDLED_EXCEPTION
    }
}

4 Comments

Reducing Field Validation Boilerplate Code

Boilerplate code is code that we repeat often with little or no variation. When it comes to writing field validations in Apex, especially within Apex Triggers, there are a number of examples of this – particularly when checking if a field value has changed and/or querying for related records, which also requires observing good bulkification best practices. This blog presents a small proof of concept framework aimed at reducing such logic, made possible in Winter ’21 with a small but critical enhancement to the Apex runtime that allows developers to dynamically add field errors. Additionally, for those practicing unit testing, the enhancement also allows tests to assert such errors without DML!

This is a very basic demonstration of the new addError and getErrors methods.

Opportunity opp = new Opportunity();
opp.addError('Description', 'Error Message!'); 
List<Database.Error> errors = opp.getErrors();
System.assertEquals(1, errors.size());
System.assertEquals('Error Message!', errors[0].getMessage());
System.assertEquals('Description', errors[0].getFields()[0]);

However, in order to really appreciate the value these two features bring to frameworks, let us first review a use case and the traditional approach to coding such validation logic. Our requirements are:

  1. When updating an Opportunity, validate the Description and AccountId fields
  2. If the StageName field changes to “Closed Won” and the Description field has changed, ensure it is not null.
  3. If the AccountId field changes ensure that the NumberOfEmployees field on the Account is not null
  4. Ensure code is bulkified and queries are only performed when needed.

The following code implements the above requirements, but does contain some boilerplate code.

// Classic style validation
switch on Trigger.operationType {
    when AFTER_UPDATE {
        // Prescan to bulkify querying for related Accounts
        Set<Id> accountIds = new Set<Id>();
        for (Opportunity opp : newMap.values()) {
            Opportunity oldOpp = oldMap.get(opp.Id);
            if(opp.AccountId != oldOpp.AccountId) { // AccountId changed?
                accountIds.add(opp.AccountId);
            }
        }                
        // Query related Account records?
        Map<Id, Account> associatedAccountsById = accountIds.size()==0 ? 
            new Map<Id, Account>() : 
            new Map<Id, Account>([select Id, NumberOfEmployees from Account where Id in :accountIds]);
        // Validate
        for (Opportunity opp : newMap.values()) {
            Opportunity oldOpp = oldMap.get(opp.Id);
            if(opp.StageName != oldOpp.StageName) { // Stage changed?
                if(opp.StageName == 'Closed Won') { // Stage closed won?
                    if(opp.Description != oldOpp.Description) { // Description changed?               
                        if(opp.Description == null) { // Description null?
                            opp.Description.addError('Description must be specified when Opportunity is closed');
                        }
                    }
                }                                
            }
            if(opp.AccountId != oldOpp.AccountId) { // AccountId changed?
                Account acct = associatedAccountsById.get(opp.AccountId);
                if(acct!=null) { // Account queried?
                    if(acct.NumberOfEmployees==null) { // NumberOfEmployees null?
                        opp.AccountId.addError('Account does not have any employees');
                    }    
                }
            }
        }
    }
}               

Below is the same validation implemented using a framework built to reduce boilerplate code.

SObjectFieldValidator.build()            
  .when(TriggerOperation.AFTER_UPDATE)
    .field(Opportunity.Description).hasChanged().isNull().addError('Description must be specified when Opportunity is closed')
      .when(Opportunity.StageName).hasChanged().equals('Closed Won')
    .field(Opportunity.AccountId).whenChanged().addError('Account does not have any employees')
      .when(Account.NumberOfEmployees).isNull()
  .validate(operation, oldMap, newMap);

The SObjectFieldValidator framework uses a fluent API design and as such allows the validator to be dynamically constructed with ease. Additionally, configured instances of it can be passed around and extended by other code paths and modules, with the validation itself performed in one pass. The framework also attempts some smarts to bulkify queries (in this case related Accounts) and only does so if the target field or related fields have been modified – thus ensuring optimal processing time. The test code for either approach can of course be written in the usual way, as shown below.

// Given
Account relatedAccount = new Account(Name = 'Test', NumberOfEmployees = null);        
insert relatedAccount;
Opportunity opp = new Opportunity(Name = 'Test', CloseDate = Date.today(), StageName = 'Prospecting', Description = 'X', AccountId = null);
insert opp;
opp.StageName = 'Closed Won';
opp.Description = null;
opp.AccountId = relatedAccount.Id;
// When
Database.SaveResult saveResult = Database.update(opp, false);
// Then
List<Database.Error> errors = saveResult.getErrors();
System.assertEquals(2, errors.size());
System.assertEquals('Description', errors[0].getFields()[0]);
System.assertEquals('Description must be specified when Opportunity is closed', errors[0].getMessage());
System.assertEquals('AccountId', errors[1].getFields()[0]);
System.assertEquals('Account does not have any employees', errors[1].getMessage());

While you do still need code coverage for your Apex Trigger logic, those practicing unit testing may prefer to leverage the ability to avoid DML in order to assert more varied validation scenarios. The following code is entirely free of any SOQL and DML statements and is thus better for test performance. It leverages the ability to inject related records rather than allowing the framework to query them on demand. The SObjectFieldValidator instance is constructed and configured in a separate class for reuse.

// Given
// TEST_ACCOUNT_ID and TEST_OPPORTUNITY_ID are mock record Ids
Account relatedAccount = 
    new Account(Id = TEST_ACCOUNT_ID, Name = 'Test', NumberOfEmployees = null);
Map<Id, SObject> oldMap = 
    new Map<Id, SObject> { TEST_OPPORTUNITY_ID => 
        new Opportunity(Id = TEST_OPPORTUNITY_ID, StageName = 'Prospecting', Description = 'X', AccountId = null)};
Map<Id, SObject> newMap = 
    new Map<Id, SObject> { TEST_OPPORTUNITY_ID => 
        new Opportunity(Id = TEST_OPPORTUNITY_ID, StageName = 'Closed Won', Description = null, AccountId = TEST_ACCOUNT_ID)};
Map<SObjectField, Map<Id, SObject>> relatedRecords = 
    new Map<SObjectField, Map<Id, SObject>> {
        Opportunity.AccountId => 
            new Map<Id, SObject>(new List<Account> { relatedAccount })};
// When
OpportunityTriggerHandler.getValidator()
  .validate(TriggerOperation.AFTER_UPDATE, oldMap, newMap, relatedRecords); 
// Then
List<Database.Error> errors = newMap.get(TEST_OPPORTUNITY_ID).getErrors();
System.assertEquals(2, errors.size());
System.assertEquals('AccountId', errors[0].getFields()[0]);
System.assertEquals('Account does not have any employees', errors[0].getMessage());
System.assertEquals('Description', errors[1].getFields()[0]);
System.assertEquals('Description must be specified when Opportunity is closed', errors[1].getMessage());

Finally, it is worth noting that such a framework can of course only get you so far, and there will be scenarios where you need to be richer in your criteria. This is something that could be explored further through the SObjectFieldValidator.FieldValidationCondition base type, which allows coded field validations to be added via the condition method – see the sketch below. The framework is pretty basic, as I really do not have a great deal of time these days to build it out more fully, so I totally invite anyone interested to take it further.
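
Purely as a hypothetical sketch – the exact contract is defined by the framework, so the class name, method name and signature below are assumptions – a coded condition might look something like this:

// Hypothetical only - assumes the base type asks subclasses to evaluate
// a record and return an error message (null meaning the record is valid)
public class NegativeAmountCondition extends SObjectFieldValidator.FieldValidationCondition {
    public override String evaluate(SObject record) {
        Opportunity opp = (Opportunity) record;
        return opp.Amount != null && opp.Amount < 0
            ? 'Amount cannot be negative' : null;
    }
}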

Enjoy!