Andy in the Cloud

From BBC Basic to Force.com and beyond…

Extending Lightning Process Builder and Visual Workflow with Apex


I love empowering as many people as possible to experience the power of Salesforce’s hugely customisable and extensible platform. In fact this platform has taught me that creating a great solution is not just about designing how we think a solution will be used, but also about empowering how others subsequently use it in conjunction with the platform, to create totally new use cases we never dreamed of! If you’re not thinking about how what you’re building sits “within” the platform, you’re not fully embracing its true power.

So what is the next great platform feature, and what has it got to do with writing Apex code? Well, for me it’s actually two features: one has been around for a while, Visual Workflow; the other is new and shiny and thus getting more attention, Lightning Process Builder. However, as we will see in this blog, there is a single way in which you can expose your finely crafted Apex functionality to users of both tools. Both have their merits and can in fact be used together for a supercharged clicks-not-code experience!

IMPORTANT NOTE: Regardless of whether you’re building a solution in a Sandbox or as part of a packaged AppExchange solution, you really need to understand this!

What is the difference between Visual Workflow and Lightning Process Builder?

At first sight these two tools look to achieve similar goals: letting the user turn a business process into steps to be executed one after another, applying conditional logic and branching as needed.

One of the biggest challenges I see with the power these tools bring is educating users on the use cases they can apply them to. While technically exposing Apex code to them is no different, it’s important to know some of the differences in how they are used when talking to people about how best to leverage them with your extensions.

  • UI based Processes. The big difference here is that Visual Workflow is about building user interfaces that let users progress through your steps via a wizard-style UI, created by the platform for you based on the steps you’ve defined. Such UIs can be started when the end user clicks on a tab, button or link to start the Visual Workflow (see my other blogs).
    (screenshot: Visual Workflow UI)
  • Record based Processes. In contrast, Process Builder is about steps you define that happen behind the scenes when users manipulate records such as Accounts, Opportunities and in fact any Standard or Custom Object you choose. This also includes users of Salesforce1 Mobile and the Salesforce APIs. As such, Process Builder actually has more of an overlap with the capabilities of classic Workflow Rules.
    (screenshot: Process Builder action types)
  • Complexity. Process Builder is simpler than Flow. That’s not to say Flow is harder to use; it has historically just had more features, for variables, branching and looping over information.
  • Similarities. In terms of the types of steps you can perform within each, there are some overlaps and some obvious differences. For example there are no UI-related steps in Process Builder, yet both have some support for steps that create or update records. They can both call out to Apex code, more on this later…
  • Power up with both! If you look closely at the screenshot above, you’ll see that Flows (short for Visual Workflow) can be selected as an Action Type within Process Builder! In this case your Flow cannot contain any visual steps, only logic steps, since Process Builder is not a UI tool. Such Flows are known as Autolaunched Flows (previously known as ‘Headless Flows’); you can read more here. This capability allows you to model more complex business processes in Process Builder.

You can read more details on further differences here from Salesforce.

What parts of your code should you expose?

Both tools have the ability to integrate at a record level with your existing Custom Objects, so any logic you’ve placed in Apex Triggers, Validation Rules etc. is also applied. So as you read this, you’re already extending these tools! However, such logic of course relates only to changes in your solution’s record data. What about other logic?

  • Custom Buttons and Visualforce Buttons. If you’ve developed a Custom Button or Visualforce page with Apex logic behind it, consider whether that functionality might also benefit from being available through these tools, allowing automation and/or alternative UIs to be built without the need for custom Visualforce or Apex development, or changes to your base solution.
  • Shared Calculations and Sub-Process Logic. You may have historically written a single piece of Apex code that orchestrates a larger process in a fixed order, via a series of calculations and/or by chaining together other Apex sub-processes. Consider whether exposing this code in a more granular way would give users more flexibility in applying your solution to different use cases. Normally this might require your code to respond to configuration you build in, for example to determine the order your Apex code is called in, whether parts should be omitted, or even to apply the logic to record changes on their own Custom Objects.

Design considerations for Invocable Methods

Now that we understand a bit more about the tools and the parts of your solution you might want to expose, let’s consider some common design considerations in doing so. The key to exposing Apex code to these tools is leveraging a new Spring ’15 feature known as Invocable Methods. Here is an example I wrote recently…

global with sharing class RollupActionCalculate
{
	/**
	 * Describes a specific rollup to process
	 **/
	global class RollupToCalculate {

		@InvocableVariable(label='Parent Record Id' required=true)
		global Id ParentId;

		@InvocableVariable(label='Rollup Summary Unique Name' required=true)
		global String RollupSummaryUniqueName;

		private RollupService.RollupToCalculate toServiceRollupToCalculate() {
			RollupService.RollupToCalculate rollupToCalculate = new RollupService.RollupToCalculate();
			rollupToCalculate.parentId = parentId;
			rollupToCalculate.rollupSummaryUniqueName = rollupSummaryUniqueName;
			return rollupToCalculate;
		}
	}

	@InvocableMethod(
		label='Calculates a rollup'
		description='Provide the Id of the parent record and the unique name of the rollup to calculate; you can specify the same Id multiple times to invoke multiple rollups')
	global static void calculate(List<RollupToCalculate> rollupsToCalculate) {

		List<RollupService.RollupToCalculate> rollupsToCalc = new List<RollupService.RollupToCalculate>();
		for(RollupToCalculate rollupToCalc : rollupsToCalculate)
			rollupsToCalc.add(rollupToCalc.toServiceRollupToCalculate());

		RollupService.rollup(rollupsToCalc);
	}
}

NOTE: The use of global is only important if you plan to expose the action from an AppExchange package; if you’re developing a solution in a Sandbox for deployment to Production, you can use public.
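For an in-org solution, the same pattern collapses to something like this minimal sketch with public visibility (the class, object and CasesService names here are purely illustrative, not from an actual package):

```apex
// Minimal in-org variant: public is sufficient when the action is
// not exposed from a managed AppExchange package.
public with sharing class CaseEscalateAction {

    public class EscalateRequest {
        @InvocableVariable(label='Case Id' required=true)
        public Id caseId;
    }

    @InvocableMethod(label='Escalate Case' description='Escalates the given Cases')
    public static void escalate(List<EscalateRequest> requests) {
        // Bulk-friendly: collect Ids, then delegate once to the service layer
        Set<Id> caseIds = new Set<Id>();
        for (EscalateRequest request : requests) {
            caseIds.add(request.caseId);
        }
        // CasesService.escalate is a hypothetical service layer method
        CasesService.escalate(caseIds);
    }
}
```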

As those of you following my blog may have already seen, I’ve been busy enabling my LittleBits Connector and Declarative Lookup Rollup Summary packages for these tools. In doing so, I’ve arrived at the following design considerations when thinking about exposing code via Invocable Methods.

  1. Design for admins, not developers. Methods appear as ‘actions’ or ‘elements’ in the tools. Work with a few admins/consultants you know to sanity-check that what you’re exposing makes sense to them; a method name or purpose that is clear from a developer’s perspective might not make as much sense to an admin.
  2. Don’t go crazy with the annotations! Salesforce has made it really easy to expose an Apex method via simple annotations such as @InvocableMethod and @InvocableVariable. Just because it’s that easy doesn’t mean you don’t have to think carefully about where you apply them. I view Invocable Methods as part of your solution’s API, and treat them as such in terms of separation of concerns and best practices. So I would not apply them to methods on controller classes, or even service classes; instead I would apply them to a dedicated class that delegates to my service or business layer classes.
  3. Apex class, parameter and member names matter. As with my thoughts on API best practices, establish a naming convention and use common terms when naming these. Salesforce provides a REST API for describing and invoking Invocable Methods over HTTP, and the names you use define that API. Currently both tools show the Apex class name and not the label, so I would derive some way to group like actions together. I’ve chosen to follow the pattern [Feature]Action[ActionName], e.g. LittleBitsActionSendToDevice, so that all actions for a given feature in your application at least group together.
  4. Use the ‘label’ and ‘description’ annotation attributes. Both the @InvocableMethod and @InvocableVariable annotations support these, and the tools will show your label instead of your Apex variable name in the UI. As far as I can see, Process Builder does not presently use the parameter description (though Visual Workflow does), and neither tool uses the method description. I would still recommend you define both, in case future releases start to use them.
    • NOTE: As this text is end user facing, you may want to run it past a technical author or others to look for typos, spelling etc. Currently, there does not appear to be a way to translate these via Translation Workbench.

     

  5. Use the ‘required’ attribute. When a user adds your Invocable Method to a Process or Flow, it automatically adds prompts in the UI for parameters marked as required.
    global class SendParameters {

        @InvocableVariable(
            label='Access Token'
            description='Optional, if set via Custom Setting'
            required=false)
        global String AccessToken;

        @InvocableVariable(
            label='Device Id'
            description='Optional, if set via Custom Setting'
            required=false)
        global String DeviceId;

        @InvocableVariable(
            label='Percent'
            description='Percent of voltage sent to device'
            required=true)
        global Decimal Percent;

        @InvocableVariable(
            label='Duration in Milliseconds'
            description='Duration of voltage sent to device'
            required=true)
        global Integer DurationMs;
    }

    /**
     * Send percentages and durations to LittleBits cloud enabled devices
     **/
    @InvocableMethod(
        label='Send to LittleBits Device'
        description='Sends the given percentage for the given duration to a LittleBits Cloud Device.')
    global static void send(List<SendParameters> sendParameters) {
        System.enqueueJob(new SendAsync(sendParameters));
    }
    

    Visual Workflow Example
    (screenshot: Visual Workflow parameter prompts)

    Lightning Process Builder Example
    (screenshot: Process Builder parameter prompts)

     

  6. Bulkification matters, really it does! Pay close attention to the restrictions around your method signature when applying these annotations, as described in Invocable Method Considerations and Invocable Variables Considerations. One of the restrictions that made me smile was the compiler’s insistence that parameters be list based! If you recall, this is also one of my design guidelines for the Apex Enterprise Patterns Service layer, basically because it forces the developer to think about the bulk nature of the platform. Salesforce have thankfully enforced this, so that when either of these tools calls your method with multiple parameters in bulk record scenarios, you’re strongly reminded to ensure your code behaves itself! Of course, if you’re delegating to your Service layer, as I am doing in the examples above, it should be a fairly painless affair to marshal the parameters across.
    • NOTE: While you should of course write your Apex tests to pass bulk test data and thus exercise your bulkification code, you can also cause Process Builder and Visual Workflow to call your method with multiple parameters by using List View bulk edits, the Salesforce API or Anonymous Apex to perform a bulk DML operation on the applicable object.
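    The Anonymous Apex route can be as simple as the following sketch (the object and field names are illustrative): one DML statement touching many records causes any Process on that object, and thus your Invocable Method, to be invoked once with all the parameters.

    ```apex
    // Anonymous Apex: a bulk DML update to exercise the bulk path.
    // Assumes a Process Builder process fires on LookupChild__c updates
    // and calls an Invocable Method; names here are illustrative.
    List<LookupChild__c> children =
        [SELECT Id, Amount__c FROM LookupChild__c LIMIT 200];
    for (LookupChild__c child : children) {
        child.Amount__c = child.Amount__c == null ? 1 : child.Amount__c + 1;
    }
    // One DML statement, many records: the process (and your method)
    // receives all of them in a single bulk invocation
    update children;
    ```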
  7. Separation of Concerns and Apex Enterprise Patterns. As you can see in the examples above, I’m treating Invocable Actions as just another caller of the Service layer (described as part of the Apex Enterprise Patterns series), with its own concerns and requirements.
    • For example, the design of the Service method signatures is limited only by the native Apex language itself, so can be quite expressive, whereas Invocable Actions have a much more restricted capability in terms of how they define themselves; we want to keep these two concerns separate.
    • You’ll also see in my LittleBits action example above that I’m delegating to the Service layer only after wrapping the call in an Async context, since Process Builder calls the Invocable Method in the Trigger context. Remember, the Service layer is ‘caller agnostic’, thus it’s not its responsibility to implement Async on the caller’s behalf.
    • I have also taken to encapsulating the parameters and marshalling them into the Service types as needed, thus allowing the Service layer and Invocable Method to evolve independently if required.
    • Finally, as with any caller of the Service layer, I would expect only one invocation, and would create a compound Service (see Service best practices) if more was needed, as opposed to calling multiple Services from the one Invocable Method.
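    To illustrate the Async delegation point, the SendAsync class referenced in the LittleBits example could be a simple Queueable along these lines (a sketch only; the outer action class name and LittleBitsService method are assumptions, and the actual package implementation may differ):

    ```apex
    // Sketch of a Queueable that moves the callout work out of the
    // Trigger context that Process Builder invokes the method in.
    public class SendAsync implements Queueable, Database.AllowsCallouts {

        private List<LittleBitsActionSendToDevice.SendParameters> parameters;

        public SendAsync(List<LittleBitsActionSendToDevice.SendParameters> parameters) {
            this.parameters = parameters;
        }

        public void execute(QueueableContext context) {
            // Marshal the action parameters into Service layer types and
            // delegate; LittleBitsService.send is a hypothetical service method
            LittleBitsService.send(parameters);
        }
    }
    ```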

     

  8. Do I need to expose a custom Apex or REST API as well? You might be wondering: since we now have a way to call Apex code declaratively, and in fact via the standard Salesforce REST API (see Carolina’s exploits here), would you ever still consider exposing a formal Apex or REST API (as described here)? Here are some of my current thoughts on this; things are still evolving, so I stress these are only my current thoughts…
    • The platform provides a REST API generated from your Invocable Method annotations, so aesthetically and functionally it may look different from what you might consider a true RESTful API; in addition it will fall under a different URL path to those you define via the explicit Apex REST annotations.
    • If the data structures of your API don’t fall within those supported by Invocable Methods (such as Apex types referencing other Apex types, use of Maps, Enums etc.), then you’ll have to expose the Service as an Apex API anyway.
    • If your parameters and data structures are no more complex than those required for an Invocable Method, and thus would be identical to those of the Apex Service, you might consider using the same class. However, keep in mind you can only have one Invocable Method per class, and Apex Services typically have many methods related to the feature or service itself. Also, while the data structures may match today, new functional requirements may stress this in the future, causing you to reinstate a Service layer or, worse, bend the parameters of your Invocable Methods.
    • In short, my current recommendation is to continue to create a fully expressive Service-driven API for your solutions and treat Invocable Methods as another API variant.
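    To make the contrast concrete, a hand-crafted Apex REST variant of the rollup service might look like the following sketch (the resource path and class name are illustrative); it lives under /services/apexrest/ rather than the path generated for the Invocable Method, and is free of the one-method-per-class restriction:

    ```apex
    // A formal Apex REST endpoint delegating to the same Service layer
    // as the Invocable Method; urlMapping and class name are illustrative.
    @RestResource(urlMapping='/rollups/calculate')
    global with sharing class RollupCalculateResource {

        @HttpPost
        global static void calculate(List<RollupService.RollupToCalculate> rollupsToCalculate) {
            // Same Service layer call as the Invocable Method makes
            RollupService.rollup(rollupsToCalculate);
        }
    }
    ```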

 

Hopefully this has given you some useful things to think about when planning your own use of Invocable Methods. As this is a very new area of the platform, I’m sure the above will evolve further over time.

 

 

 

24 thoughts on “Extending Lightning Process Builder and Visual Workflow with Apex”

  1. Hi Andy,
    The lightning components still does not appear in android phones right?

    • Not sure; this blog is about Process Builder, not components. Have you seen something specific about Lightning UI components?

  2. Pingback: Process Automation Trailhead Review - Peter Knolle

  3. Pingback: Automating Org Setup via Process Builder and Metadata API | Andy in the Cloud

  4. Hello Andy, I speak Admin, not developer, but this post is exactly what I am digging for. I think? We currently have an Apex Class running a callout to the Google Maps Distance Matrix API for a mileage estimation. When Winter ’15 rolled out last Monday it went haywire… All of a sudden it will respond with a distance estimation some hours of the day, and some not. I have been stuck on this for weeks; my admin brain says there has got to be a better way, and can’t I use Process Builder for this? Do you think a process could run from a custom object, take origin_zip_code & destination_zip_code, and return an estimate to a custom field?

    • It’s feasible to link your Apex code to Process Builder via the approach described here, but you will need a developer to expose it before it can be seen. So in principle it’s possible, yes. But as a developer I have to say there is a likelihood the problem you have seen will not be resolved by this; it is most likely down to the code, regardless of how it’s called. Sorry to say, but I do like your thinking, and this is exactly a good use case.

  5. Hey Andy.

    Is there a possible issue with getting values out of Apex back into the flow that called the class? All I am seeing is “output” as a value from the class in Visual Flow, and I can’t save it to a collection, and now I am stuck.

    Hope you can help!

    Scott

    • Can you share a snippet of code?

      • Hmm… not sure you might have caught my answer below. Hope you did. I am stuck and have no further clue on how to move forward and this is an important feature to the flow (working with dateTime fields effectively). If I can get this one problem solved and get the user offsets for the dates back into the flow, then I can display and save the correct values.

        Thanks again!

        Scott

  6. This is my class. (hope it is rendered legibly…)

    ```
    public class UserTimezoneOffsets {

        public class inputVariables {
            @InvocableVariable(required=true)
            public DateTime startDateTime;
            @InvocableVariable(required=true)
            public DateTime endDateTime;
        }

        @InvocableMethod(
            label='Get user timezone offset'
            description='Returns the users timezone offset in minutes for the given dates.')
        public static List<Integer> getOffsets(List<inputVariables> dateTimes) {
            TimeZone tz = UserInfo.getTimeZone();
            List<Integer> offsets = new List<Integer>();
            for (inputVariables dateTimeX : dateTimes) {
                offsets.add(tz.getOffset(dateTimeX.startDateTime));
                offsets.add(tz.getOffset(dateTimeX.endDateTime));
            }
            return offsets;
        }
    }
    ```

    All I see in the flow is “output” and it doesn’t let me save to a collection.

    Scott

    • What type of variable in your flow are you trying to bind to?

      • I’d like to bind to a collection, but the flow UI isn’t allowing me to select a collection variable. It only allows me to select from one of the two number variables or my sObjects. I can create two “output” rows and assign the offsets number variables I have. Hmm…. I didn’t try that actually. I can’t imagine that will work.

        Scott

      • Nope. Doesn’t work. Balks saying an output parameter is duplicated.

        Scott

      • Yeah, sounds like this is a restriction; may need to get inventive. How about using an sObject, perhaps one solely to marshal this info as a proxy to pass back the info you need?

  7. Hmm…. I’ll write up a case about the list not being properly passed into the view as a collection.

    I’ll try the sObject route and let you know.

    Scott

  8. Ooh. Check this out. I just added output to one variable and I got the error “The number of results does not match the number of interviews that were executed in a single bulk action.” What the heck? LOL!

    Scott

    Ok. Trying the sObject route.

  9. Is there somewhere I could chat with you quickly? ICQ or Gitter.im possibly? My email is

    s (dot) molinari (at) adduco (dot) de,

    if you want to send the chat possibility to me privately.

    Scott

  10. Nevermind! I got it going with an sObject. But it is also strange. The static method only worked with a return type of a List of sObjects. And in the flow, I could only store the “output” to a single sObject variable.

    Nonetheless, it works. Thanks for the tip of trying it with an sObject. To me though, the return of data back to a Flow from an Invocable Apex class is seriously broken.

    Scott

  11. Andy,
    I got here to this page because we’re working with flows + invocable actions and I ran into the same error as Skoopa did. Not sure I understand his comments fully — I’m an admin type, not a developer.

    What we want to be able to do is to 1) pass a collection variable that may have 1 or more elements that are parameters to form the SOQL query and 2) return a dataset. But the output option doesn’t allow me to set it to an sobject collection, only an sobject variable.

    If my dataset returned = 1 row and I set it to an sobject it works.

    Is it possible to set my output to an sobject collection?

    Thank you for any guidance!

    Cathy

  12. Hi Andy, Great Article. I love the process builder for the nice flowchart of everything happens on an object. But, the performance is a disaster. I always get hit with SOQL/CPU limits when a mass update happens on the objects. They don’t yet seem to perform so well as triggers. How did you get to manage this performance drawback with process builder?

    • Yes, I do continue to get mixed views on this tool, though fewer are negative these days. I guess it does depend on what your goals and expectations are. I certainly would not recommend it as an immediate go-to alternative to Apex Triggers, and would try to use Invocable Methods to do the heavy lifting. It’s a per-use-case decision though. I know they did fix a SOQL governor issue a few releases back. It’s hard to comment further without knowing more, tbh. Have you used the Developer Console debug log profiler to see where the time is being spent? If it is spending more time in platform code, perhaps consider raising a case and attaching some logs to indicate that. CPU time should only be for your code, so if you’re seeing it without any code, that could be a bug or an undocumented consideration for complex flows; likewise the SOQL issues. In the end, for me it’s a great tool for customisation; I would still be careful about how far you go with it vs code, depending on the needs, and look to mix the two to leverage the best of both. Hope this helps.
