Andy in the Cloud

From BBC Basic to Force.com and beyond…



Monitoring Record Activity via Data Replication APIs

ReplicateAPI.png

You might be thinking, having started to read this blog, that the primary use case of Data Replication APIs is to provide replication. Well, yes and no! No in the sense that they won't replicate your data for you; they won't even return your data. What they will do is tell you which record Ids have been updated or deleted within a certain time frame. What happens next is up to you; you don't even have to do any replication if you don't want to!

As someone who loves to keep things as native as possible, when answering this StackExchange post, I found it quite cool to discover that this API is actually available to Apex developers! The API consists of two Database class methods, getUpdated and getDeleted. The former returns new and updated records; it's up to you to decide whether a record is new or existing, depending on whether you've seen its Id before.

Note: The process around these APIs is explained in more detail in the SOAP Developers Guide, so it's worth reading it along with the Apex Developers Guide references. The general polling process is described here and limits and considerations here.

The APIs could not be easier to use; the following shows how to pull a list of record Ids for updates made to records of a given object in the last hour.

Database.GetUpdatedResult r =
  Database.getUpdated(
    'MyObject__c', Datetime.now().addHours(-1), Datetime.now());
LastDateCovered.png

Salesforce documentation relating to the latestDateCovered field

Neither method appears to consume any query or DML governor limits, though as noted above they have some limits of their own. The GetUpdatedResult (described in more detail in the SOAP API documentation) contains a latestDateCovered field. This value should be retained and used in subsequent getUpdated calls.
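
In practice the polling pattern looks something like the following minimal sketch; note that PollState__c is a hypothetical hierarchy custom setting (with a Date/Time field LastDateCovered__c) used purely to illustrate persisting the watermark between runs.

// Resume from the last retained watermark, defaulting to the last hour
PollState__c state = PollState__c.getOrgDefaults();
Datetime since = state.LastDateCovered__c != null ?
    state.LastDateCovered__c : Datetime.now().addHours(-1);
Database.GetUpdatedResult updated =
    Database.getUpdated('MyObject__c', since, Datetime.now());
for (Id recordId : updated.getIds()) {
    // New or updated record Id, process as needed (callout, stats etc.)
}
// Retain latestDateCovered for the next getUpdated call, as advised above
state.LastDateCovered__c = updated.getLatestDateCovered();
upsert state;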

It's unclear what Salesforce means by safety in the documentation, though the note relating to long running batch processes does make sense. This aspect highlights a key difference from attempting to query what's changed yourself based on the audit fields. The getDeleted method works in the same way.

It's worth considering these APIs as a possible and lighter alternative to using Apex Triggers or UIs to invoke custom behaviour when records are manipulated. Unlike Triggers you don't know the specific changes, though Field Audit information could be queried if needed.

So whether it be replication to another cloud via an Apex HTTP Callout or simply aggregating record activity into some org wide stats, having Apex support allows you to keep things native, via an Apex Scheduled job polling this API as often as you wish. With the advent of External Objects aka Lightning Connect, other native possibilities start to present themselves…
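
For example, the following minimal sketch schedules an hourly poll; RecordActivityPoller is a hypothetical class wrapping the getUpdated logic shown earlier.

public class RecordActivityPollerJob implements Schedulable {
    public void execute(SchedulableContext ctx) {
        // Hypothetical wrapper around the getUpdated/getDeleted polling logic
        RecordActivityPoller.poll();
    }
}
// Scheduled from Anonymous Apex to run at the top of every hour
// System.schedule('Record Activity Poller', '0 0 * * * ?', new RecordActivityPollerJob());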



Setup Audit Trail API in Winter’16

Winter'16 was a bit low on API for me, but one thing I just found hiding in the New Objects section was this…

SetupAuditTrail, “Represents changes you or other administrators made in your organization’s Setup area.”

WBAudit

Wow, this is a BIG thing and opens up a lot of tooling and greater compliance support for changes around orgs. Prior to this, folks were so keen to get hold of this information programmatically that they would resort to web scraping it from the Salesforce Setup menu! So I set about giving it a spin via Developer Workbench and learning more about the object.
WBAuditResults

There is also much better data export support via tools such as DataLoader.io (and of course other tools)…

AuditDataLoaderIO

So in addition to the Salesforce APIs (used by the above tools), it is also available via Apex SOQL! The following selects all audit records relating to users from a certain email domain.

List<SetupAuditTrail> stuffDoneByConsultants = 
    [SELECT Id,
     	Action,
     	CreatedBy.Name,
     	CreatedDate,
     	Display,
     	Section 
     FROM SetupAuditTrail 
     WHERE CreatedBy.Email LIKE '%xyzconsulting.com%'];

So what else can we do with it? Well… not a lot more sadly. Doing an Apex Describe shows that it's basically only query-able: no triggers, no Replication API, nor indeed Streaming API, which while not a total surprise, would have been very cool!
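
You can confirm this yourself with a quick Describe from Anonymous Apex; something along these lines should report true only for isQueryable.

Schema.DescribeSObjectResult describeResult = SetupAuditTrail.SObjectType.getDescribe();
System.debug('Queryable: ' + describeResult.isQueryable());
System.debug('Createable: ' + describeResult.isCreateable());
System.debug('Updateable: ' + describeResult.isUpdateable());
System.debug('Deletable: ' + describeResult.isDeletable());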

Age of Records?

Through the long standing CSV download facility under Setup, only 6 months' worth of data is available. However, running queries in my various long standing orgs I'm seeing data going back as far as all time?!? Although there is no documentation to confirm this, and I honestly would not be surprised to see some limit here, perhaps those of you running long standing production orgs can confirm via the comments below?
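
If you want to check your own org, a quick query from Anonymous Apex reports the earliest audit record available to the API.

SetupAuditTrail oldest =
    [SELECT CreatedDate FROM SetupAuditTrail ORDER BY CreatedDate ASC LIMIT 1];
System.debug('Earliest audit record: ' + oldest.CreatedDate);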

What are its fields?

AuditFields

As you can see, the SetupAuditTrail object's fields are not the most ideal for interpreting the data, aside from the CreatedBy field (which supports relationship walking to the User object). The most interesting are Action, Section and Display; the latter is the one that actually contains the object, field, layout or whatever has changed. The challenge here is that it's embedded in a message and not called out separately, so there is a bit of parsing to be done.
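
To give a feel for that parsing, here is a minimal sketch that scans Display for custom object and field style API names; the regular expression is illustrative only, as the message format varies by Action.

// Illustrative only: extract __c API names mentioned in the Display message
Pattern p = Pattern.compile('\\w+__c');
for (SetupAuditTrail audit :
        [SELECT Action, Display FROM SetupAuditTrail LIMIT 100]) {
    if (audit.Display == null) continue;
    Matcher m = p.matcher(audit.Display);
    while (m.find()) {
        System.debug(audit.Action + ' references ' + m.group());
    }
}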

IMPORTANT NOTE: The only gap I can see so far is that the Delegate User (found in the Setup UI and CSV download) appears to be missing from the API at the moment. Yet it is documented as follows… 'The Login-As user who executed the action in Setup. If a Login-As user didn't perform the action, this field is blank.'

So what next?

  • Well, I see AuditForce as an example of an application tool that was created to better report and dashboard on this information; this tool could be enhanced to use this API now, removing the current web scraping hack the developer (Daniel Peter) was forced to utilise back then.
  • With Apex support it's also possible to write Apex Scheduled jobs that periodically scan for certain updates and apply your own rules and notifications (see the sketch after this list).
  • I was thinking it might also be useful to have a nice Lightning Component perhaps?
  • The Setup UI does not permit filtering or even sorting; it would probably not require too much coding to get something like the excellent Data Table component to show results of queries against this object (as featured in my List View API blog).
  • Finally, given this is available from the Salesforce REST API, it's also feasible to aggregate audit information from, say, multiple sandboxes into a single console (something Daniel hints at in his AuditForce blog).
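
To expand on the scheduled job idea above, here is a minimal sketch; the Section filter, one day window and notification address are all illustrative assumptions.

// Minimal sketch: email a daily digest of 'Manage Users' Setup changes
public class SetupAuditTrailScanner implements Schedulable {
    public void execute(SchedulableContext ctx) {
        List<SetupAuditTrail> changes =
            [SELECT Action, Section, Display, CreatedBy.Name
             FROM SetupAuditTrail
             WHERE CreatedDate = LAST_N_DAYS:1
               AND Section = 'Manage Users']; // illustrative filter
        if (changes.isEmpty()) return;
        String body = '';
        for (SetupAuditTrail change : changes) {
            body += change.CreatedBy.Name + ': ' + change.Display + '\n';
        }
        Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
        mail.setToAddresses(new String[] { 'admin@example.com' }); // illustrative
        mail.setSubject('Setup changes in the last day');
        mail.setPlainTextBody(body);
        Messaging.sendEmail(new Messaging.SingleEmailMessage[] { mail });
    }
}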

 



Using Action Link Templates to Declaratively call APIs

ActionLinkSetup

Salesforce recently introduced a new platform feature which has now become GA called Action Link Templates. Since then it's been staring me in the face and bugging me that I didn't quite understand them, until now…

While there is quite a lot of information in the Salesforce documentation, I was still a bit lost as to what an Action Link even was. It turns out they are a means to define actions that can appear in a Chatter Post to call external or Salesforce web based APIs, thus allowing users to do more without leaving their feed.

After realising it's a means to link user actions with APIs, I could not resist exploring further with one of my favourite external APIs, from LittleBits. The LittleBits cloud API can be used with cloud connected devices constructed by snapping together modules.

The following shows a Chatter Post I created with an Action Link button on it that, without any code, calls the LittleBits API to cause my device to perform an action. You can read more about my past exploits with LittleBits devices and Salesforce here.

ActionLinkPrimary

It appears, for now at least, that such Chatter Posts need to be programmatically created and as such lend themselves to integration use cases. While it is possible to create Chatter Posts with Action Links in code without using a template, that's more coding and doesn't encourage reuse of Action Link definitions (which can also be packaged, btw). So this blog focuses, as always, on the best practice of balancing the best of declarative with a minimum of coding to create the post itself. First of all, let's get some terminology out of the way…

  • Action Link, can actually be rendered as a button or a menu option that appears inline in the Chatter post or in the overflow menu on the post. The button can either call an API, redirect to another web site or offer to download a file for the user. You have to add an Action Link to an Action Link Group before you can add it to a Chatter post.
  • Action Link Group, is a collection of one or more Action Links. The idea is the group presents a collection of choices you want to give to the user, e.g. Accept, Decline. You can define a default choice, though the user can only pick one. Think of it like a group of radio controls or a choices type UI element. As mentioned above, you can create both of these 100% in code if you desire.
  • Action Link Group Template, is, as the name suggests, similar to the above, but allows the declarative definition and the programmatic application of the buttons to be separated. Once you start defining Action Links you'll see they require a bit of knowledge about the underlying API, so in addition to the reuse benefit, a template is a good way to have someone else or a package developer do that work for you. In order to make them generic, you can define placeholders in Action Links, called bindings, that allow you to vary the information passed to the underlying API being called.

To define an Action Link you first need to create an Action Link Group. Because we are using a template, this can be done using point and click. Under the Setup menu, under Create, you'll find Action Link Templates; click New.

ActionLinkGroup

The Category field allows you to determine where the Action Link appears: in the body of the feed by selecting Primary action (as shown in the screenshot above), or in the overflow menu by selecting Overflow action (as shown in the screenshot below). Note that my example only defines one Action Link; you can define more.

ActionLinkOverflow

Through the Executions Allowed field, you can also determine whether the Action Link can be invoked only once (first come first served) or once by each user who can see the Chatter post (for example a post to a group). You can read more about these and other fields here.

You're now ready to add an Action Link to the template. First study the documentation of your chosen web API; note that it can in theory be a SOAP based API, though REST is generally simpler. Hopefully, like the LittleBits API, there are some samples that you can copy and paste to get you started. The following extract is what the LittleBits API documentation has to say about the API to control (output to) a device.

This outputs 10% amplitude for 10 seconds:

curl -XPOST https://api-http.littlebitscloud.cc/devices/a84hf038ierj/output \
  -H 'Authorization: Bearer TOKEN' \
  -H 'Accept: application/vnd.littlebits.v2+json' \
  --data '{"percent":10,"duration_ms":10000}'
"OK"

REST API documentation often uses a command line program called curl as an easy way to try out an API without having to write program code. In the screenshot below you can see how the curl parameters used in the extract above have been mapped to the fields when defining an Action Link. Note also that I have used the {!Bindings.var} syntax to define the variable aspects, such as the deviceId, accessToken, percent and durationMs.

ActionLinkLittleBits
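
For example, based on the curl extract above, the Action URL becomes something like https://api-http.littlebitscloud.cc/devices/{!Bindings.deviceId}/output, the Authorization header value becomes Bearer {!Bindings.accessToken}, and the HTTP Request Body template becomes something along the lines of {"percent":{!Bindings.percent},"duration_ms":{!Bindings.durationMs}}. These values are illustrative; the screenshot above shows the exact fields I used.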

NOTE: The User Visibility setting is quite flexible and allows you to control who can actually press the button, as opposed to those who can merely see the Chatter Post.

Go back to your Action Link Group Template and check the Published checkbox. This makes it available for use when creating posts, but also has the effect of making certain aspects read only, such as the bindings; though you can thankfully continue to tweak the API header and body templates defined on the Action Links.

Execute the following from Developer Console and it will create the Chatter Post shown above. Currently neither Process Builder nor Visual Flow support Action Link Templates when creating Chatter posts, which gives me an idea for a part two to this blog actually! For now, please up vote this idea and review the following code.

// Specify values for Action Link bindings
Map<String, String> bindingMap = new Map<String, String>();
bindingMap.put('deviceId', 'yourdeviceid');
bindingMap.put('accessToken', 'youraccesstoken');
bindingMap.put('percent', '50');
bindingMap.put('durationMs', '10000');
List<ConnectApi.ActionLinkTemplateBindingInput> bindingInputs = new List<ConnectApi.ActionLinkTemplateBindingInput>();
for (String key : bindingMap.keySet()) {
    ConnectApi.ActionLinkTemplateBindingInput bindingInput = new ConnectApi.ActionLinkTemplateBindingInput();
    bindingInput.key = key;
    bindingInput.value = bindingMap.get(key);
    bindingInputs.add(bindingInput);
}

// Create an Action Link Group definition based on the template and bindings
ActionLinkGroupTemplate template = [SELECT Id FROM ActionLinkGroupTemplate WHERE DeveloperName='LittleBits'];
ConnectApi.ActionLinkGroupDefinitionInput actionLinkGroupDefinitionInput = new ConnectApi.ActionLinkGroupDefinitionInput();
actionLinkGroupDefinitionInput.templateId = template.id;
actionLinkGroupDefinitionInput.templateBindings = bindingInputs;
ConnectApi.ActionLinkGroupDefinition actionLinkGroupDefinition =
    ConnectApi.ActionLinks.createActionLinkGroupDefinition(Network.getNetworkId(), actionLinkGroupDefinitionInput);
System.debug('Action Link Id is ' + actionLinkGroupDefinition.actionLinks[0].Id);

// Create the post and utilise the Action Link Group created above
ConnectApi.TextSegmentInput textSegmentInput = new ConnectApi.TextSegmentInput();
textSegmentInput.text = 'Click to Send to the Device.';
ConnectApi.FeedItemInput feedItemInput = new ConnectApi.FeedItemInput();
feedItemInput.body = new ConnectApi.MessageBodyInput();
feedItemInput.subjectId = 'me';
feedItemInput.body.messageSegments = new List<ConnectApi.MessageSegmentInput> { textSegmentInput };
feedItemInput.capabilities = new ConnectApi.FeedElementCapabilitiesInput();
feedItemInput.capabilities.associatedActions = new ConnectApi.AssociatedActionsCapabilityInput();
feedItemInput.capabilities.associatedActions.actionLinkGroupIds = new List<String> { actionLinkGroupDefinition.id };

// Post the feed item.
ConnectApi.FeedElement feedElement =
    ConnectApi.ChatterFeeds.postFeedElement(
        Network.getNetworkId(), feedItemInput, null);

If you review the debug log produced, the above code outputs the Action Link Id. This can be used to retrieve response information from the web API called, which is especially useful if the web API callout failed, as only a generic failure message is shown to the end user. Once you have the Action Link Id, paste the following code into Developer Console and review the debug log for the web API response.

ConnectApi.ActionLinkDiagnosticInfo diagInfo =
    ConnectApi.ActionLinks.getActionLinkDiagnosticInfo(
        Network.getNetworkId(), '0AnG0000000Cd3NKAS');
System.debug('Diag output ' + diagInfo.diagnosticInfo);

Summary

It's true that Chatter Actions (formerly Publisher Actions) are another means to customise the user experience of Chatter Posts; however, those require development of Visualforce pages or Canvas applications. By using Action Links you can provide a simpler platform driven user experience with much less coding.

By using Action Link Group Templates you can separate the concerns of delivering an integration between those who know the external APIs and those who are driving the integration with Chatter via posts referencing them. The bindings form the contract between the two.

It's also worth noting that Apex REST APIs can be used from Action Links, as well as other Salesforce APIs; in this case the authentication is handled for you, nice!



Extending Lightning Process Builder and Visual Workflow with Apex

MyProcessBuilder

I love empowering as many people as possible to experience the power of Salesforce's hugely customisable and extensible platform. In fact this platform has taught me that creating a great solution is not just about designing how we think a solution will be used, but also about empowering how others subsequently use it in conjunction with the platform, to create totally new use cases we never dreamt of! If you're not thinking about how what you're building sits "within" the platform, you're not fully embracing the platform's true power.

So what is the next great platform feature and what has it got to do with writing Apex code? Well, for me it's actually two features: one has been around for a while, Visual Workflow; the other is new and shiny and is thus getting more attention, Lightning Process Builder. However, as we will see in this blog, there is a single way in which you can expose your finely crafted Apex functionality to users of both tools. Both of them have their merits and in fact can be used together for a super charged clicks not code experience!

IMPORTANT NOTE: Regardless of whether you're building a solution in a Sandbox or as part of a packaged AppExchange solution, you really need to understand this!

What is the difference between Visual Workflow and Lightning Process Builder?

At first sight these two tools look to achieve similar goals: letting the user turn a business process into steps to be executed one after another, applying conditional logic and branching as needed.

One of the biggest challenges I see with the power these tools bring is educating users on the use cases they can be applied to. While technically exposing Apex code to them is no different, it's important to know some of the differences in how they are used when talking to people about how to best leverage them with your extensions.

  • UI based Processes. Here the big difference is that Visual Workflow is mainly about building user interfaces that allow users to progress through your steps via a wizard style UI, created by the platform for you based on the steps you've defined. Such UIs can be started when the end user clicks on a tab, button or link to start the Visual Workflow (see my other blogs).
    VisualFlowUI
  • Record based Processes. In contrast, Process Builder is about steps you define that happen behind the scenes when users are manipulating records such as Accounts, Opportunities and in fact any Standard or Custom object you choose. This also includes users of Salesforce1 Mobile and Salesforce APIs. As such, Process Builder actually has more of an overlap with the capabilities of classic Workflow Rules.
    ProcessBuilderActions
  • Complexity. Process Builder is simpler than Flow; that's not to say Flow is harder to use, it has just historically had more features, for variables, branching and looping over information.
  • Similarities. In terms of the type steps you can perform within each there are some overlaps and some obvious differences, for example there are no UI related steps in Process Builder, yet both do have some support for steps that can be used to create or update records. They can both call out to Apex code, more on this later…
  • Power up with both! If you look closely at the screenshot above, you'll see that Flows (short for Visual Workflow) can be selected as an Action Type within Process Builder! In this case your Flow cannot contain any visual steps, only logic steps, since Process Builder is not a UI tool. Such Flows are known as Autolaunched Flows (previously known as 'Headless Flows'); you can read more here. This capability allows you to model more complex business processes in Process Builder.

You can read more details on further differences here from Salesforce.

What parts of your code should you expose?

Both tools have the ability to integrate at a record level with your existing custom objects, thus any logic you've placed in Apex Triggers, Validation Rules etc. is also applied. So as you read this, you're already extending these tools! However, such logic is of course related to changes in your solution's record data. What about other logic?

  • Custom Buttons and Visualforce Buttons. If you've developed a Custom Button or Visualforce page with Apex logic behind it, consider whether that functionality might also benefit from being available through these tools, allowing automation and/or alternative UIs to be built without the need for custom Visualforce or Apex development, or changes to your base solution.
  • Shared Calculations and Sub-Process Logic. You may have historically written a single piece of Apex code that orchestrates, in a fixed order, a larger process via a series of calculations and/or chains together other Apex sub-processes. Consider whether exposing this code in a more granular way would give users more flexibility to use your solution in different use cases. Normally this might require your code to respond to new configuration you build in, for example where users wish to determine the order your Apex code is called in, whether parts should be omitted, or even to apply the logic to record changes on their own Custom Objects.

Design considerations for Invocable Methods

Now that we understand a bit more about the tools and the parts of your solution you might want to consider exposing, let's consider some common design considerations in doing so. The key to exposing Apex code to these tools is leveraging a new Spring'15 feature known as Invocable Methods. Here is an example I wrote recently…

global with sharing class RollupActionCalculate
{
	/**
	 * Describes a specific rollup to process
	 **/
	global class RollupToCalculate {

		@InvocableVariable(label='Parent Record Id' required=true)
		global Id ParentId;

		@InvocableVariable(label='Rollup Summary Unique Name' required=true)
		global String RollupSummaryUniqueName;

		private RollupService.RollupToCalculate toServiceRollupToCalculate() {
			RollupService.RollupToCalculate rollupToCalculate = new RollupService.RollupToCalculate();
			rollupToCalculate.parentId = parentId;
			rollupToCalculate.rollupSummaryUniqueName = rollupSummaryUniqueName;
			return rollupToCalculate;
		}
	}

	@InvocableMethod(
		label='Calculates a rollup'
		description='Provide the Id of the parent record and the unique name of the rollup to calculate; you can specify the same Id multiple times to invoke multiple rollups')
	global static void calculate(List<RollupToCalculate> rollupsToCalculate) {

		List<RollupService.RollupToCalculate> rollupsToCalc = new List<RollupService.RollupToCalculate>();
		for(RollupToCalculate rollupToCalc : rollupsToCalculate)
			rollupsToCalc.add(rollupToCalc.toServiceRollupToCalculate());

		RollupService.rollup(rollupsToCalc);
	}
}

NOTE: The use of global is only important if you plan to expose the action from an AppExchange package; if you're developing a solution in a Sandbox for deployment to Production, you can use public.

As those of you following my blog may have already seen, I've been busy enabling my LittleBits Connector and Declarative Lookup Rollup Summary packages for these tools. In doing so, I've arrived at the following design considerations when thinking about exposing code via Invocable Methods.

  1. Design for admins not developers. Methods appear as 'actions' or 'elements' in the tools. Work with a few admins/consultants you know to sense check that what you're exposing makes sense to them; a method name or purpose from a developer's perspective might not make as much sense to an admin.
  2. Don't go crazy with the annotations! Salesforce has made it really easy to expose an Apex method via simple Apex annotations such as @InvocableMethod and @InvocableVariable. Just because it's that easy doesn't mean you don't have to think carefully about where you apply them. I view Invocable Methods as part of your solution's API and treat them as such, in terms of separation of concerns and best practices. So I would not apply them to methods on controller classes or even service classes; instead I would apply them to a dedicated class that delegates to my service or business layer classes.
  3. Apex class, parameter and member names matter. As with my thoughts on API best practices, establish a naming convention and use of common terms when naming these. Salesforce provides a REST API for describing and invoking Invocable Methods over HTTP; the names you use define that API. Currently both tools show the Apex class name and not the label, so I would derive some way to group like actions together. I've chosen to follow the pattern [Feature]Action[ActionName], e.g. LittleBitsActionSendToDevice, so all actions for a given feature in your application at least group together.
  4. Use the 'label' and 'description' annotation attributes. Both the @InvocableMethod and @InvocableVariable annotations support these; the tools will show your label instead of your Apex variable name in the UI. As far as I can see, Process Builder does not presently use the parameter description sadly (though Visual Workflow does), and neither tool uses the method description. I would still recommend you define both, in case future releases start to use them.
    • NOTE: As this text is end user facing, you may want to run it past a technical author or others to look for typos, spelling etc. Currently, there does not appear to be a way to translate these via Translation Workbench.

     

  5. Use the ‘required’ attribute. When a user adds your Invocable Method to a Process or Flow, it automatically adds prompts in the UI for parameters marked as required.
    global class SendParameters {
        @InvocableVariable(
            Label='Access Token'
            Description='Optional, if set via Custom Setting'
            Required=False)
        global String AccessToken;
        @InvocableVariable(
            Label='Device Id'
            Description='Optional, if set via Custom Setting'
            Required=False)
        global String DeviceId;
        @InvocableVariable(
            Label='Percent'
            Description='Percent of voltage sent to device'
            Required=True)
        global Decimal Percent;
        @InvocableVariable(
            Label='Duration in Milliseconds'
            Description='Duration of voltage sent to device'
            Required=True)
        global Integer DurationMs;
    }

    /**
     * Send percentages and durations to LittleBits cloud enabled devices
     **/
    @InvocableMethod(
        Label='Send to LittleBits Device'
        Description='Sends the given percentage for the given duration to a LittleBits Cloud Device.')
    global static void send(List<SendParameters> sendParameters) {
        System.enqueueJob(new SendAsync(sendParameters));
    }
    

    Visual Workflow Example
    FlowParams

    Lightning Process Builder Example
    ProcessBuilderParams

     

  6. Bulkification matters, really it does! Pay close attention to the restrictions around your method signature when applying these annotations, as described in Invocable Method Considerations and Invocable Variables Considerations. One of the restrictions that made me smile was the compiler's insistence that parameters be list based! If you recall, this is also one of my design guidelines for the Apex Enterprise Patterns Service layer, basically because it forces the developer to think about the bulk nature of the platform. Salesforce have thankfully enforced this, ensuring that when either of these tools calls your method with multiple parameters in bulk record scenarios, you're strongly reminded to make your code behave itself! Of course, if you're delegating to your Service layer, as I am doing in the examples above, marshalling the parameters across should be a fairly painless affair.
    • NOTE: While you should of course write your Apex tests to pass bulk test data and thus test your bulkification code, you can also cause Process Builder and Visual Workflow to call your method with multiple parameters by using List View bulk edits, the Salesforce API or Anonymous Apex to perform a bulk DML operation on the applicable object.
  7. Separation of Concerns and Apex Enterprise Patterns. As you can see in the examples above, I'm treating Invocable Actions as just another caller of the Service layer (described as part of the Apex Enterprise Patterns series), with its own concerns and requirements.
    • For example, the design of the Service method signatures is limited only by the native Apex language itself, so it can be quite expressive, whereas Invocable Actions have a much more restricted capability in terms of how they define themselves; we want to keep these two concerns separate.
    • You'll also see in my LittleBits action example above that I'm delegating to the Service layer only after wrapping the call in an Async context, since Process Builder calls the Invocable Method in the Trigger context. Remember the Service layer is 'caller agnostic'; it is not its responsibility to implement Async on the caller's behalf.
    • I have also taken to encapsulating the parameters and marshalling of these into the Service types as needed, thus allowing the Service layer and Invocable Method to evolve independently if needed.
    • Finally, as with any caller of the Service layer, I would expect only one invocation, creating a compound Service (see Service best practices) if more was needed, as opposed to calling multiple Services from the one Invocable Method.

     

  8. Do I need to expose a custom Apex or REST API as well? You might be wondering, now that we have a way to call Apex code declaratively, and in fact via the standard Salesforce REST API (see Carolina's exploits here), whether you would ever still consider exposing a formal Apex or REST API (as described here)? Here are some of my current thoughts on this; things are still evolving, so I stress these are still my current thoughts…
    • The platform provides a REST API generated from your Invocable Method annotations, so it may look aesthetically and functionally different from what you might consider a true RESTful API; in addition it will fall under a different URL path to those you defined via the explicit Apex REST annotations.
    • If the data structures of your API are such that they don't fall within those supported by Invocable Methods (such as Apex types referencing other Apex types, use of Maps, Enums etc.), then you'll have to expose the Service anyway as an Apex API.
    • If your parameters and data structures are no more complex than those required for an Invocable Method, and thus would be identical to those of the Apex Service, you might consider using the same class. However, keep in mind you can only have one Invocable Method per class, and Apex Services typically have many methods related to the feature or service itself. Also, while the data structures may match today, new functional requirements may stress this in the future, causing you to reinstate a Service layer or, worse, bend the parameters of your Invocable Methods.
    • In short my current recommendation is to continue to create a full expressive Service driven API for your solutions and treat Invocable Methods as another API variant.
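
For reference, the generated REST API mentioned in point 3 exposes each Invocable Method under a path of the form /services/data/vXX.X/actions/custom/apex/RollupActionCalculate, taking a JSON body containing an inputs array of your @InvocableVariable values. It is worth comparing what that generates against a hand-crafted Apex REST resource before deciding which to expose.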

 

Hopefully this has given you some useful things to think about when planning your own use of Invocable Methods in the future. As this is a very new area of the platform, I'm sure the above will evolve further over time as well.



Introducing the LittleBits Connector for Salesforce

As those of you who follow my brickinthecloud.com blog know, I love using APIs in the cloud to connect not only applications, but devices. Salesforce themselves also share this passion; just take a look at their Internet of Things page to see how it can improve your business, and the work Reid Carlberg is doing.

LittleBitsWithEV3

When Salesforce sent me a LittleBits Cloud Starter Kit as a Christmas present I once again set about connecting it to my favourite cloud platform! This blog introduces two new GitHub repos and a brand new installable package to allow you to take full advantage of the snap-not-solder approach of LittleBits electronics together with the Salesforce clicks-not-code design model! So if you're not an electronics whiz or a coder, you really don't have any excuses for not getting involved! LittleBits provides over 60 snap together components, to build anything from automated fish feeders to home security systems and practically anything else you can imagine!

cloud_diagram2

The heart of the kit is a small computer module, powered by a USB cable (I plugged mine into my external phone battery pack!). It boots from an SD card and uses an onboard USB WiFi adapter to connect itself to the internet (once you've connected it to your WiFi). After that you send commands to its connected outputs via a mobile site or the set of LittleBits Cloud APIs provided. So far I have focused on sending commands to the outputs (in my case I connected the servo motor); however, as I write this I'm teaming up with Cory Cowgill, who has also started working with his kit from an inputs perspective (e.g. pressing a button on the device).

Everyone in the Salesforce MVP community was lucky enough to get one of these kits, and I wanted to make sure everyone could experience it with the cloud platform we love so much! Sadly, right now the clicks-not-code solution IFTTT (If-This-Then-That) used for controlling LittleBits devices does not fully support Salesforce (there is only a Salesforce Chatter plugin). Borrowing an approach I've been using for my Declarative Rollup Summary Tool, I set about building a declarative tool that allows a Salesforce admin to connect updates to any standard or custom object record to a LittleBits device!

The result is the LittleBits Connector!

LittleBitsTrigger

Once you have assembled and connected your LittleBits device, go to the Settings page under LittleBits Cloud Control and take note of your Access Token and Device ID. As you can see in the screenshot above enter these in the LittleBits Device section or in the LittleBits API custom setting.

The Trigger section needs only the Record ID of the record you want to monitor and have your device respond to when changes are made. Simply list the field API names (separated by commas) of those you want the tool to monitor. Next fill in the LittleBits Output section with either literal values (on the left) and/or dynamic values driven by values from the record itself. This gives you quite a lot of flexibility; you could use Formula Fields, for example, to calculate the percentage.

Controlling a LittleBits cloud device is quite simple: you define the duration of the output (how long to apply a voltage) and the amount of voltage as a percentage. Depending on the output module you've fitted, light or motor, the effects differ but the principle is the same. In the motor's case, I set mine to Turn mode (see below); then by applying a duration of 100,000 and a percent, the motor turns to a specific point each time, making it ideal for building pointing devices!

With the help of my wife's crafting skills, we set about a joint Christmas project to build a pointing device that shows the Probability of a given Opportunity in realtime, though the tool I ended up building can effectively be used with any standard or custom object. I also wanted to use only the modules in the LittleBits Cloud Starter Kit. So with a Salesforce org and the tool, the Internet of Things is in your hands!

Here is a video of our creation in action…

If you want to have a go yourself follow these steps…

Building your own Opportunity Probability Indicator Device #clicksnotcode

If clicks are more your thing than coding, fear not and follow these simple steps!

  1. Purchase a LittleBits Cloud Connector and follow the onscreen instructions once you have created your LittleBits account here. Complete the tutorial to confirm it's connected.
  2. Build the device modules configuration as shown in the picture below. On the servo module, there is a tiny switch, use the small screw driver provided to push it to the down position, to put the servo in “turn” mode.
    LittleBitsDevice
  3. Next the fun bit: construct your pointing device! I'd love to see tweets of everyone's crafting skills!
    pointingdevice
  4. Install the latest LittleBits Connector, either via clicking the package install links or as code (see the GitHub README). The first time you go to the LittleBits Trigger tab, you may be asked to complete the post install step to configure the Metadata API needed by the tool to deploy the Apex Triggers; complete this step as instructed on screen.
  5. Click New on the LittleBits Trigger tab, complete the LittleBits Trigger record as described above (but of course using a record Id from an Opportunity record in your org), then click Save.
  6. Click the Manage Object Trigger button to automatically deploy a small Apex Trigger to pass on updates made to the records to the LittleBits Connector engine.
  7. In Salesforce, or Salesforce1 Mobile for that matter, update your Opportunity Stage (which updates the Probability). This results in an Apex Job which typically fires fairly promptly, and you should see your device respond! If you don't see anything change on your device, confirm it's working via the LittleBits Cloud Control test page, then go to the Setup menu and check the jobs are completing without error via the Apex Jobs page.

Using the LittleBits Cloud API from Apex

The above tool was built around an Apex wrapper I have started to build around the LittleBits Cloud API, which is a REST API. With a little more time, and help from fellow LittleBits fan Cory, we will update it to support not only controlling devices, but also allowing them to feed back into Salesforce. In the meantime, if you want to code your own solution directly, you can install the library here.

The code is quite simple for now; you can read more about it in the README file.

new LittleBits().getDevice().output(80, 10000);
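
The two arguments mirror the percent and duration_ms parameters of the underlying REST API shown earlier, so the line above asks the device to output 80% for 10 seconds.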

What's next?

Well, I'm quite addicted to this new device; my Lego EV3 robot might be justified in feeling a little left out, but fear not, I'll find a way to combine them I'm sure! Next up for the LittleBits Connector is subscribing to output from the device back into Salesforce, possibly calling out to a headless Flow, to keep that clicks-not-code feel going!



Calling Salesforce APIs from Ant Script – Querying Records

Back in June last year I wrote a blog entitled Look ma, no hands! Its main focus was how to leverage the then new ability to install and uninstall packages via the Metadata API. However there was another goal: I wanted to invoke Salesforce APIs using only native Ant script and 100% Java based Apache Ant tasks, so no Java coding or native curl executable invocations, making the resulting script platform neutral and easier to manage.

In this blog I'd like to talk a little bit more about how it was done and highlight the excellent <http> Ant task from Missing Link (so named since, surprisingly, Ant has yet to provide a core task for HTTP comms). In addition I wanted to share how I was recently able to extend this approach while working with Brad Slater, one of FinancialForce.com's up and coming DevOps team members (also see the Object Model Tool).

The goal once again was keeping it 100% Ant, this time invoking the Salesforce REST API to perform queries.

 		<!-- Query -->
 		<runQuery 
 			sessionId="${sessionId}" 
 			serverUrl="${serverUrl}" 
 			queryResult="accounts"
 			query="SELECT Id, Name FROM Account LIMIT 1"/>

As before, the new Ant tasks are defined in a single XML file, ant-salesforce.xml; you can download the updated version with the new <runQuery> task and easily <import> it into your own Ant scripts.

Ant provides an excellent way to encapsulate complex script in components it calls Tasks. You can implement these in Java or in Ant script itself, using the <macrodef> Ant task. The following shows how the Salesforce <login> task was built for last year's blog. You can see both the <http> and <xmltask> tasks in action.

	<!-- Login into Salesforce and return the session Id and serverUrl -->
	<macrodef name="login">
		<attribute name="username" description="Salesforce user name."/>
		<attribute name="password" description="Salesforce password."/>
		<attribute name="serverurl" description="Server Url property."/>
		<attribute name="sessionId" description="Session Id property."/>
		<sequential>
			<!-- Obtain Session Id via Login SOAP service -->
		    <http url="https://login.salesforce.com/services/Soap/c/29.0" method="POST" failonunexpected="false" entityProperty="loginResponse" statusProperty="loginResponseStatus">
		    	<headers>
		    		<header name="Content-Type" value="text/xml"/>
		    		<header name="SOAPAction" value="login"/>
		    	</headers>
		    	<entity>
		    		<![CDATA[
				    	<env:Envelope xmlns:xsd='http://www.w3.org/2001/XMLSchema' xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' xmlns:env='http://schemas.xmlsoap.org/soap/envelope/'>
				    	    <env:Body>
				    	        <sf:login xmlns:sf='urn:enterprise.soap.sforce.com'>
				    	            <sf:username>@{username}</sf:username>
				    	            <sf:password>@{password}</sf:password>
				    	        </sf:login>
				    	    </env:Body>
				    	</env:Envelope>
		    		]]>
		    	</entity>
		    </http>
			<!-- Parse response -->
			<xmltask destbuffer="loginResponseBuffer">
				<insert path="/">${loginResponse}</insert>
			</xmltask>
			<if>
				<!-- Success? -->
				<equals arg1="${loginResponseStatus}" arg2="200"/>
				<then>
					<!-- Parse sessionId and serverUrl -->
					<xmltask sourcebuffer="loginResponseBuffer" failWithoutMatch="true">
						<copy path="/*[local-name()='Envelope']/*[local-name()='Body']/:loginResponse/:result/:sessionId/text()" property="@{sessionId}"/>
						<copy path="/*[local-name()='Envelope']/*[local-name()='Body']/:loginResponse/:result/:serverUrl/text()" property="@{serverUrl}"/>
					</xmltask>
				</then>
				<else>
					<!-- Parse login error message and fail build -->
					<xmltask sourcebuffer="loginResponseBuffer" failWithoutMatch="true">
						<copy path="/*[local-name()='Envelope']/*[local-name()='Body']/*[local-name()='Fault']/*[local-name()='faultstring']/text()" property="faultString"/>
					</xmltask>
					<fail message="${faultString}"/>
				</else>
			</if>
		</sequential>
	</macrodef>

The <runQuery> task further leverages the <http> task to make a call to the Salesforce REST API query endpoint.

	<!-- Provides access to the Salesforce REST API for a SOQL query -->
	<macrodef name="runQuery" description="Run database query">
		<attribute name="sessionId" description="Salesforce user name."/>
		<attribute name="serverUrl" description="Salesforce url."/>
		<attribute name="query" description="Salesforce password."/>
		<attribute name="queryResult" description="Query result property name"/>
		<sequential>
			<!-- Extract host/instance name from the serverUrl returned from the login response -->
			<propertyregex property="host"
              input="${serverUrl}"
              regexp="^((http[s]?|ftp):\/)?\/?([^:\/\s]+)((\/\w+)*\/)([\w\-\.]+[^#?\s]+)(.*)?(#[\w\-]+)?$"
              select="\3"
              casesensitive="false" />			
			<!-- Execute Apex via REST API /query resource -->
		    <http url="https://${host}/services/data/v29.0/query" method="GET" entityProperty="queryResultResponse" statusProperty="loginResponseStatus" printrequestheaders="false" printresponseheaders="false">
		    	<headers>
		    		<header name="Authorization" value="Bearer ${sessionId}"/>
		    	</headers>
		    	<query>
		    		<parameter name="q" value="@{query}"/>
		    	</query>
		    </http>		
		    <property name="@{queryResult}" value="${queryResultResponse}"/>
		</sequential>
	</macrodef>

When put together, the two tasks work very well, allowing you to login and pass the resulting Session Id to the <runQuery> task, then parse the results according to your needs with a small piece of inline JavaScript to handle the resulting JSON. The user of these tasks is blissfully unaware of some of the more advanced Ant script approaches used to implement them, which is how things should be when providing good Ant tasks.

<project name="demo" basedir="." default="demo">

    <!-- Import login properties -->
    <property file="${basedir}/build.properties"/>    
    <!-- Import new Salesforce tasks -->
    <import file="${basedir}/lib/ant-salesforce.xml"/>
    
    <!-- Query task demo -->	
    <target name="demo">
    
    	<!-- Login -->
 		<login 
 			username="${sf.username}" 
 			password="${sf.password}" 
 			serverurl="serverUrl" 
 			sessionId="sessionId"/>
 			
 		<!-- Query -->
 		<runQuery 
 			sessionId="${sessionId}" 
 			serverUrl="${serverUrl}" 
 			queryResult="accounts"
 			query="SELECT Id, Name FROM Account LIMIT 1"/>
 		
 		<!-- Parse JSON result via JavaScript eval -->
 		<script language="javascript">
			var response = eval('('+project.getProperty('accounts')+')');
			project.setProperty('Name', response.records[0].Name);
			project.setProperty('Id', response.records[0].Id);
		</script>
		
		<!-- Dump results -->
		<echo message="Queried Account '${Name}' with Id ${Id}"/>
		
    </target>   
     
</project>

Here is a more complex example, processing more than one record via an Ant macro invoked for each record.

 		<!-- Query -->
 		<runQuery 
 			sessionId="${sessionId}" 
 			serverUrl="${serverUrl}" 
 			queryResult="accounts"
 			query="SELECT Id, Name FROM Account"/>

		<!-- Ant macro called for each Account retrieved -->
	    <macrodef name="echo.account">
	    	<attribute name="id"/>
	    	<attribute name="name"/>
	    	<sequential>
				<!-- Process for each account -->
		    	<echo message="Queried Account '@{name}' with Id @{id}"/>
	    	</sequential>		    	
	    </macrodef> 		
 		
 		<!-- Parse JSON result via JavaScript eval and call above Ant macro -->
 		<script language="javascript">
			var response = eval('('+project.getProperty('accounts')+')');
			for(var idx in response.records)
			{
				var processRecord = project.createTask("echo.account");
                processRecord.setDynamicAttribute("id", response.records[idx].Id);
                processRecord.setDynamicAttribute("name", response.records[idx].Name);
                processRecord.execute();
			}
		</script>

Ant is not just for build systems or developers; it can be used quite effectively for many automation tasks. You could create an Ant script that polls for certain activity in your Salesforce org and invokes some application or more complex process, for example. Ant has a huge array of tasks and massive community support; it's a good skill to learn for cross platform scripting and I've frankly found very little it can't do these days.

So you may be wondering: why ever use a Java based Ant task or process again to implement your complex Ant and Salesforce integrations? Well… you may still want to go down the Java coding route if your needs are more complex or if you're not comfortable with Ant scripting. Indeed in the case above, the project morphed into something much more complex and we ended up in Java after all. As always, choose your tools for the job according to time, resources, skills and complexity. Hopefully this blog has given you another option in your tool belt to consider!