Andy in the Cloud

From BBC Basic to Force.com and beyond…



Building an Amazon Echo Skill with the Flow API

The Amazon Echo device sits in your living room or office and listens to your verbal instructions, much like Siri. It performs various activities, such as fetching and relaying information and/or performing actions on your behalf. It also serves as a large Bluetooth speaker. Now, after a run in the US, it has finally been released in the UK!

Why am I writing about it here? Well, it has an API of course! So let's roll up our sleeves with an example I built recently with my FinancialForce colleague, and partner in crime for all things gadget and platform, Kevin Roberts.

Kevin reached out to me when he noticed that Amazon had built this device with a means to teach it to respond to new phrases. Developers can extend its phrases by creating new Skills. You can read and hear more about the results over on the FinancialForce blog site.

The sample code and instructions to reproduce this demo yourself are here. Also, don't worry if you do not have an Amazon Echo; you can test by speaking into your computer using EchoSim.io.

Custom Skill Architecture

To create a Skill you need to be a developer, capable of implementing a REST API endpoint that Amazon calls out to when the Echo recognizes a phrase you have trained it with. You can do this in practically any programming language you like, of course, provided you comply with the documented JSON definition and host it securely.

One thing that simplifies the process is hosting your skill code through the Amazon Lambda service. Lambda supports Java, Python and NodeJS, and sets up the security stack for you; all you have to do is provide the code! You can even type your code directly into the developer console provided by Amazon.

Training your Skill

You cannot just say anything to Amazon Echo and expect it to understand; it's clever, but not that clever (yet!). Every Skill developer has to provide a set of phrases, or sample utterances. From these, Amazon does some clever stuff behind the scenes to compile them into a form its speech recognition algorithms can match a user's spoken words to.

You are advised to provide as many utterances as you can, up to 50,000 of them in fact, to cover the many varied ways in which we say things differently but mean the same thing. The sample utterances must all start with an identifier, known as the Intent. You can see various sample utterances for the CreateLead and GetLatestLeads intents below.

CreateLead Lets create a new Lead
CreateLead Create me a new lead
CreateLead New lead
CreateLead Help me create a lead
GetLatestLeads Latest top leads?
GetLatestLeads What are our top leads?

Skills have names, which users can search for in the Skills Marketplace, much like an App on your phone. For a Skill called “Lead Helper”, users would speak the following phrases to invoke any of its intents.

  • “Lead Helper, Create me a new lead”
  • “Lead Helper, Lets create a new lead”
  • “Lead Helper, Help me create a lead”
  • “Lead Helper, What are our top leads?”

Your sample utterances can also include parameters / slots.

DueTasks What tasks are due for {Date}?
DueTasks Any tasks that are due for {Date}?

Slots are essentially parameters to your Intents; Amazon supports various slot types. The date slot type is quite flexible in terms of how it handles relative dates.

  • “Task Helper, What tasks are due next Thursday?”
  • “Task Helper, Any tasks that are due for today?”

Along with your sample utterances you need to provide an intent schema, which lists the names of your intents (as referenced in your sample utterances) and the slot names and types. Further information can be found in Defining the Voice Interface.

{
  "intents": [
    {
      "intent": "DueTasks",
      "slots": [
        {
          "name": "Date",
          "type": "AMAZON.DATE"
        }
      ]
    }
  ]
}

Mapping Skill Intents and Slots to Flows and Variables

As I mentioned above, Skill developers implement a REST API endpoint. Instead of receiving the spoken words as raw text, it receives the Intent name and the name/value pairs of the Slots. That code can then invoke the appropriate database query or action and generate a response (as a string) to speak back to the user.
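
For illustration, the relevant part of the JSON Amazon sends for a DueTasks request might look like the following (abridged, and my paraphrase; see the Alexa Skills Kit documentation for the full request format):

{
  "request": {
    "type": "IntentRequest",
    "intent": {
      "name": "DueTasks",
      "slots": {
        "Date": { "name": "Date", "value": "2016-09-30" }
      }
    }
  }
}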

To map this to Salesforce Flows, we can consider the Intent name as the Flow Name and the Slot name/values as Flow Input Parameters. Flow Output Parameters can be used to generate the spoken response to the user. For the example above you would define a Flow called DueTasks with the following named input and output Flow parameters.

  • Flow Name: DueTasks
  • Flow Input Parameter Name:  Alexa_Slot_Date
  • Flow Output Parameter Name:  Alexa_Tell
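
Behind the scenes this becomes a call to the Actions REST API. For illustration, a sketch of the request and response for the DueTasks example (assuming a version path such as v37.0; the exact version and full set of response attributes may differ):

POST /services/data/v37.0/actions/custom/flow/DueTasks

{
  "inputs": [
    { "Alexa_Slot_Date": "2016-09-30" }
  ]
}

The response carries the Flow output variables from which the spoken reply is built:

[
  {
    "outputValues": {
      "Alexa_Tell": "You have two tasks due.",
      "Flow__InterviewStatus": "Finished"
    }
  }
]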

You can then use the Flow Assignment element to adjust variable values, as well as other elements to query and update records accordingly. By assigning an output variable named Alexa_Tell before your Flow ends, you end the conversation with a single response contained within the text variable.

For another example see the Echo sample here; this one simply repeats (“echoes”) the name given by the user when they speak a phrase with their name in it.

[Screenshot: the Echo Flow]

The sample utterances and intent schema are shown below. These utterances also use a literal slot type, which is a kind of picklist with variable possibilities, meaning that Andrew, Sarah, Kevin and Bob are just sample values. Users can use other words in the Name slot; it is up to the developer to validate them if it's important.

Echo My name is {Andrew|Name}
Echo My name is {Sarah|Name}
Echo My name is {Kevin|Name}
Echo My name is {Bob|Name}
{
  "intents": [
    {
      "intent": "Echo",
      "slots": [
        {
          "name": "Name",
          "type": "LITERAL"
        }
      ]
    }
  ]
}

Alternatively, if you create and assign the Alexa_Ask variable in your Flow, this starts a conversation with your user. In this case any Input/Output Flow Parameters are retained between Flow calls. Finally, if you suffix any slot name with Number (for example, a slot named AmountNumber would be passed as Alexa_Slot_AmountNumber), this ensures the value gets converted correctly to pass to a Flow Variable of type Number.

The design for managing conversations with Flow Input/Output variables was inspired by an excellent article on defining conversations in Alexa Skills here.

The following phrases are for the Conversation Flow included in the samples repository.

Conversation About favourite things
Conversation My favourite color is {Red|Color}
Conversation My favourite color is {Green|Color}
Conversation My favourite color is {Blue|Color}
Conversation My favourite number is {Number}


NodeJS Custom Skill

To code my Skill I went with NodeJS, as I had not done a lot of coding in it and wanted to challenge myself. The other challenge I set myself was to integrate with Salesforce in a generic and extensible way. Thus I wanted to incorporate my old friend, Flow!

With its numerous elements for conditional logic and for reading and updating the database, Flow is the perfect solution to integrating with Salesforce in the only way we know how on the Salesforce platform: with clicks, not code! Of course, Amazon does not talk Flow natively, so we need some glue!

Amazon provides NodeJS developers with a useful base class to get things going. In NodeJS this is imported with the require function (see this interesting “how it works” article). In my case I also leveraged the most excellent nforce library from Kevin O’Hara.

var AlexaSkill = require('./AlexaSkill');
var nforce = require('nforce');

/**
* SalesforceFlowSkill is a child of AlexaSkill.
* To read more about inheritance in JavaScript, see the link below.
*
* @see https://developer.mozilla.org/en-US/docs/Web/JavaScript/Introduction_to_Object-Oriented_JavaScript#Inheritance
*/
var SalesforceFlowSkill = function () {
    AlexaSkill.call(this, APP_ID);
};

The AlexaSkill base class exposes four methods you can override: onSessionStarted, onLaunch, onSessionEnded and onIntent. As you can see from the method names, requests to your skill code can be scoped in a session. This allows you to manage conversations users can have with the device, asking questions and gathering answers within the session that build up to perform a specific action.

I implemented the onIntent method to call the Flow API.

SalesforceFlowSkill.prototype.eventHandlers.onIntent =
    function (intentRequest, session, response) {
        // Handle the spoken intent from the user
        // ...
    };

Calling the Salesforce Flow API from NodeJS

Within the onIntent method I used the nforce library to perform OAuth username and password authentication, for simplicity, though Alexa Skills do support the OAuth web flow by linking accounts. The following code performs the authentication with Salesforce.

SalesforceFlowSkill.prototype.eventHandlers.onIntent =
    function (intentRequest, session, response) {
        // Configure a connection
        var org = nforce.createConnection({
            clientId: 'yourclientid',
            clientSecret: 'yoursecret',
            redirectUri: 'http://localhost:3000/oauth/_callback',
            mode: 'single'
        });
        // Authenticate and call a Flow!
        org.authenticate({ username: USER_NAME, password: PASSWORD }).
            then(function() {
                // ... call the Flow API, as shown below ...
            });
    };
The following code calls the Flow API, again via nforce. It maps the slot names/values to Flow input parameters and returns any Flow output variables in the response. A session is kept open when the response.ask method is called; in this case any Input/Output Flow Parameters are retained in the Session and passed back into the Flow again.

// Build Flow input parameters
var params = {};
// From Session...
for(var sessionAttr in session.attributes) {
    params[sessionAttr] = session.attributes[sessionAttr];
}
// From Slots...
for(var slot in intent.slots) {
    if(intent.slots[slot].value != null) {
        if(slot.endsWith('Number')) {
            params['Alexa_Slot_' + slot] = Number(intent.slots[slot].value);
        } else {
            params['Alexa_Slot_' + slot] = intent.slots[slot].value;
        }
    }
}
// Call the Flow API
var opts = org._getOpts(null, null);
opts.resource = '/actions/custom/flow/'+intentName;
opts.method = 'POST';
var flowRunBody = {};
flowRunBody.inputs  = [];
flowRunBody.inputs[0] = params;
opts.body = JSON.stringify(flowRunBody);
org._apiRequest(opts).then(function(resp) {
    // Ask or Tell?
    var ask = resp[0].outputValues['Alexa_Ask'];
    var tell = resp[0].outputValues['Alexa_Tell'];
    if(tell!=null) {
        // Tell the user something (closes the session)
        response.tell(tell);
    } else if (ask!=null) {
        // Store output variables in Session
        for(var outputVarName in resp[0].outputValues) {
            if(outputVarName == 'Alexa_Ask')
                continue;
            if(outputVarName == 'Alexa_Tell')
                continue;
            if(outputVarName == 'Flow__InterviewStatus')
                continue;
            session.attributes[outputVarName] =
                resp[0].outputValues[outputVarName];
        }
        // Ask another question (keeps session open)
        response.ask(ask, ask);
    }
});

Summary

I had a lot of fun putting this together, even more so seeing what Kevin did with it with his Flow skills (pun intended). If you have someone like Kevin in your company, or want to have a go yourself, you can follow the setup and configuration instructions here.

I would also like to call out that past Salesforce MVP, now Trailhead Developer Advocate, Jeff Douglas started the ball rolling with his Salesforce CRM examples, which are also worth checking out if you prefer to build something more explicitly in NodeJS.

 



Using Action Link Templates to Declaratively Call APIs

Salesforce recently introduced a new platform feature, now GA, called Action Link Templates. Since then it's been staring me in the face and bugging me that I didn't quite understand it, until now…

While there is quite a lot of information in the Salesforce documentation, I was still a bit lost as to what an Action Link even was. It turns out that they are a means to define actions that can appear in a Chatter post to call external or Salesforce web-based APIs, thus allowing users to do more without leaving their feed.

After realising it's a means to link user actions with APIs, I could not resist exploring further with one of my favourite external APIs, from LittleBits. The LittleBits Cloud API can be used with cloud-connected devices constructed by snapping together modules.

The following shows a Chatter post I created with an Action Link button that, without any code, calls the LittleBits API to cause my device to perform an action. You can read more about my past exploits with LittleBits devices and Salesforce here.

[Screenshot: Chatter post with a primary Action Link button]

It appears, for now at least, that such Chatter posts need to be programmatically created, and as such they lend themselves to integration use cases. While it is possible to create Chatter posts with Action Links in code without using a template, that's more coding, and doesn't encourage reuse of Action Link definitions (which, by the way, can also be packaged). So this blog focuses, as always, on the best practice of balancing declarative tools with a minimum of code to create the post itself. First of all, let's get some terminology out of the way…

  • Action Link: rendered as a button or a menu option that appears inline in the Chatter post or in the post's overflow menu. The button can either call an API, redirect to another website, or offer a file download to the user. You have to add an Action Link to an Action Link Group before you can add it to a Chatter post.
  • Action Link Group: a collection of one or more Action Links. The idea is that the group presents a collection of choices you want to give to the user, e.g. Accept, Decline. You can define a default choice, though the user can only pick one. Think of it like a group of radio controls or a choices-type UI element. As mentioned above, you can create both of these 100% in code if you desire.
  • Action Link Group Template: as the name suggests, similar to the above, but allows the declarative definition and the programmatic application of the buttons to be separated out. Once you start defining Action Links you'll see they require a bit of knowledge about the underlying API. So in addition to the reuse benefit, a template is a good way to have someone else, or a package developer, do that work for you. To make them generic, you can define placeholders in Action Links, called bindings, that allow you to vary the information passed to the underlying API being called.

To define an Action Link you first need to create the Action Link Group. Because we are using a template, this can be done with point and click: under the Setup menu, under Create, you'll find Action Link Templates; click New.

[Screenshot: Action Link Group Template definition]

The Category field allows you to determine where the Action Link appears: in the body of the feed by selecting Primary action (as shown in the screenshot above), or in the overflow menu by selecting Overflow action, as shown in the screenshot below. Note that my example only defines one Action Link; you can define more.

[Screenshot: Action Link shown as an overflow menu action]

Through the Executions Allowed field, you can also determine if the Action Link can be invoked only once (first come, first served) or once by each user who can see the Chatter post (for example a post to a group). You can read more about these and other fields here.

You're now ready to add an Action Link to the template. First, study the documentation of your chosen web API; note that it can in theory be a SOAP-based API, though REST is generally simpler. Hopefully, like the LittleBits API, there are some samples you can copy and paste to get you started. The following extract is what the LittleBits API documentation has to say about the API to control (output to) a device.

This outputs 10% amplitude for 10 seconds:

curl -XPOST https://api-http.littlebitscloud.cc/devices/a84hf038ierj/output \
  -H 'Authorization: Bearer TOKEN' \
  -H 'Accept: application/vnd.littlebits.v2+json' \
  --data '{"percent":10,"duration_ms":10000}'
"OK"

REST API documentation often uses a command line program called curl as an easy way to try out an API without having to write program code. In the screenshot below you can see how the curl parameters used in the extract above have been mapped to the fields when defining an Action Link. Note also that I have used the {!Bindings.var} syntax to define variable aspects, such as the deviceId, accessToken, percent and durationMs.

[Screenshot: Action Link definition for the LittleBits API]
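
For reference, based on the curl example above, the Action Link fields map along these lines (a sketch only; the exact field labels are those of the Action Link Template setup UI):

Action URL:        https://api-http.littlebitscloud.cc/devices/{!Bindings.deviceId}/output
HTTP Method:       HttpPost
HTTP Headers:      Authorization: Bearer {!Bindings.accessToken}
                   Accept: application/vnd.littlebits.v2+json
HTTP Request Body: {"percent":{!Bindings.percent},"duration_ms":{!Bindings.durationMs}}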

NOTE: The User Visibility setting is quite flexible and allows you to control who can actually press the button, as opposed to who can see the Chatter post.

Go back to your Action Link Group Template and check the Published checkbox. This makes it available for use when creating posts, but also has the effect of making certain aspects read-only, such as the bindings, though you can thankfully continue to tweak the API header and body templates defined on the Action Links.

Execute the following from the Developer Console and it will create the Chatter post shown above. Currently neither Process Builder nor Visual Flow support Action Link Templates when creating Chatter posts, which gives me an idea for a part two to this blog! For now, please up-vote this idea and review the following code.

// Specify values for Action Link bindings
Map<String, String> bindingMap = new Map<String, String>();
bindingMap.put('deviceId', 'yourdeviceid');
bindingMap.put('accessToken', 'youraccesstoken');
bindingMap.put('percent', '50');
bindingMap.put('durationMs', '10000');
List<ConnectApi.ActionLinkTemplateBindingInput> bindingInputs = new List<ConnectApi.ActionLinkTemplateBindingInput>();
for (String key : bindingMap.keySet()) {
    ConnectApi.ActionLinkTemplateBindingInput bindingInput = new ConnectApi.ActionLinkTemplateBindingInput();
    bindingInput.key = key;
    bindingInput.value = bindingMap.get(key);
    bindingInputs.add(bindingInput);
}

// Create an Action Link Group definition based on the template and bindings
ActionLinkGroupTemplate template = [SELECT Id FROM ActionLinkGroupTemplate WHERE DeveloperName='LittleBits'];
ConnectApi.ActionLinkGroupDefinitionInput actionLinkGroupDefinitionInput = new ConnectApi.ActionLinkGroupDefinitionInput();
actionLinkGroupDefinitionInput.templateId = template.id;
actionLinkGroupDefinitionInput.templateBindings = bindingInputs;
ConnectApi.ActionLinkGroupDefinition actionLinkGroupDefinition =
    ConnectApi.ActionLinks.createActionLinkGroupDefinition(Network.getNetworkId(), actionLinkGroupDefinitionInput);
System.debug('Action Link Id is ' + actionLinkGroupDefinition.actionLinks[0].Id);

// Create the post and utilise the Action Link Group created above
ConnectApi.TextSegmentInput textSegmentInput = new ConnectApi.TextSegmentInput();
textSegmentInput.text = 'Click to Send to the Device.';
ConnectApi.FeedItemInput feedItemInput = new ConnectApi.FeedItemInput();
feedItemInput.body = new ConnectApi.MessageBodyInput();
feedItemInput.subjectId = 'me';
feedItemInput.body.messageSegments = new List<ConnectApi.MessageSegmentInput> { textSegmentInput };
feedItemInput.capabilities = new ConnectApi.FeedElementCapabilitiesInput();
feedItemInput.capabilities.associatedActions = new ConnectApi.AssociatedActionsCapabilityInput();
feedItemInput.capabilities.associatedActions.actionLinkGroupIds = new List<String> { actionLinkGroupDefinition.id };

// Post the feed item.
ConnectApi.FeedElement feedElement =
    ConnectApi.ChatterFeeds.postFeedElement(
        Network.getNetworkId(), feedItemInput, null);

If you review the debug log produced, you will see the above code outputs the Action Link Id. This can be used to retrieve response information from the web API called, which is especially useful if the callout failed, as only a generic failure message is shown to the end user. Once you have the Action Link Id, paste the following code into the Developer Console and review the debug log for the web API response.

ConnectApi.ActionLinkDiagnosticInfo diagInfo =
    ConnectApi.ActionLinks.getActionLinkDiagnosticInfo(
        Network.getNetworkId(), '0AnG0000000Cd3NKAS');
System.debug('Diag output ' + diagInfo.diagnosticInfo);

Summary

It's true that Chatter Actions (formerly Publisher Actions) are another means to customise the user experience of Chatter posts; however, these require development of Visualforce pages or Canvas applications. By using Action Links you can provide a simpler, platform-driven user experience with much less coding.

By using Action Link Group Templates you can separate the concerns of delivering an integration between those who know the external APIs and those driving the integration with Chatter via posts referencing them. The bindings form the contract between the two.

It's also worth noting that Apex REST APIs can be used from Action Links, as well as other Salesforce APIs; in this case the authentication is handled for you, nice!



Controlling Internet Devices via Lightning Process Builder

Lightning Process Builder will soon become GA once the Spring'15 rollout completes in early February, just a few short weeks away as I write this. I don't actually know where to start in terms of how huge and significant this new platform feature is! In my recent blog Salesforce evolves customization to a new level! over on the FinancialForce blog, I describe Salesforce as ‘the most powerful and productive cloud platform on the planet’. The more I get into Process Builder, and how as a developer I can empower its users, the more that statement already starts to sound like an understatement!

There are many things getting me excited (as usual) about Salesforce these days; in addition to Process Builder and Invocable Actions (more on this later), it's the Internet of Things. I just love the notion of inspecting and controlling devices no matter where I am on the planet. If you've been following my blog from earlier this year, you'll hopefully have seen my exploits with the LittleBits cloud-enabled devices and the Salesforce LittleBits Connector.

I have just spent a very enjoyable Saturday morning in my Spring'15 Preview org with a special build of the LittleBits Connector that leverages the ability of Process Builder to call out to specially annotated Apex code, which in turn calls out to the LittleBits Cloud API.

The result: a fully declarative way to connect to LittleBits devices from Process Builder! If you watched the demo from my past blog you'll have seen my Opportunity Probability Pointer in action; the following implements the same process, but using only Process Builder!

[Screenshot: LittleBits action configured in Process Builder]

Once Spring'15 has completely rolled out I'll release an update to the Salesforce LittleBits Connector managed package that supports Process Builder, so you can try the above out. In the meantime, if you have a Spring'15 Preview org, you can deploy directly from GitHub and try it out now!

UPDATE August 2015: It seems Process Builder still has some open issues binding Percent fields to Actions. Salesforce have documented a workaround via a formula field: if you have a Percent field, create a Formula field as follows and bind that to the Percent variable in Process Builder or Flow.

[Screenshot: formula field workaround for Percent fields]

How can developers enhance Process Builder?

There are some excellent out-of-the-box actions from which Process Builder or Flow Designer users can choose, as I have covered in past blogs. What is really exciting is how developers can effectively extend these actions.

Salesforce has yet to provide a declarative means to make web API callouts without code, so a developer needs to provide a bit of Apex code to make the above work. Thankfully, Salesforce has made it insanely easy to expose code to tools like Process Builder and Visual Flow. Such tools dynamically inspect Apex code in the org (including that from AppExchange packages) and render a user interface for the Process Builder user to provide the necessary inputs (and map outputs if defined). All the developer has to do is use some Apex annotations.

global with sharing class LittleBitsActionSendToDevice {

    global class SendParameters {
        @InvocableVariable
        global String AccessToken;
        @InvocableVariable
        global String DeviceId;
        @InvocableVariable
        global Decimal Percent;
        @InvocableVariable
        global Integer DurationMs;
    }

    /**
     * Send percentages and durations to LittleBits cloud enabled devices
     **/
    @InvocableMethod(
        Label='Send to LittleBits Device'
        Description='Sends the given percentage for the given duration to a LittleBits Cloud Device.')
    global static void send(List<SendParameters> sendParameters) {
        System.enqueueJob(new SendAsync(sendParameters));
    }
}
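
The SendAsync class referenced above is not shown in this extract; the work is enqueued because a callout cannot be made directly within the transaction Process Builder invokes the action from. As a rough sketch of what such a Queueable might look like (my assumption, modeled on the LittleBits curl example shown earlier on this page; the packaged connector contains the real implementation):

global class SendAsync implements Queueable, Database.AllowsCallouts {

    private List<LittleBitsActionSendToDevice.SendParameters> sendParameters;

    global SendAsync(List<LittleBitsActionSendToDevice.SendParameters> sendParameters) {
        this.sendParameters = sendParameters;
    }

    global void execute(QueueableContext context) {
        for(LittleBitsActionSendToDevice.SendParameters params : sendParameters) {
            // Call the LittleBits Cloud API output endpoint (see curl example above)
            HttpRequest req = new HttpRequest();
            req.setEndpoint('https://api-http.littlebitscloud.cc/devices/' + params.DeviceId + '/output');
            req.setMethod('POST');
            req.setHeader('Authorization', 'Bearer ' + params.AccessToken);
            req.setHeader('Accept', 'application/vnd.littlebits.v2+json');
            req.setHeader('Content-Type', 'application/json');
            req.setBody(JSON.serialize(new Map<String, Object> {
                'percent' => params.Percent,
                'duration_ms' => params.DurationMs }));
            new Http().send(req);
        }
    }
}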

I learnt quite a lot about writing Invocable Actions today and will be following up with some guidelines and thoughts on how I have integrated them with the Apex Enterprise Patterns Service layer.



Introducing the LittleBits Connector for Salesforce

As those of you who follow my brickinthecloud.com blog will know, I love using APIs in the cloud to connect not only applications, but devices. Salesforce themselves share this passion; just take a look at their Internet of Things page to see how it can improve your business and the work Reid Carlberg is doing.

When Salesforce sent me a LittleBits Cloud Starter Kit as a Christmas present, I once again set about connecting it to my favourite cloud platform! This blog introduces two new GitHub repos and a brand new installable package to allow you to take full advantage of the snap-not-solder approach LittleBits electronics brings, with the Salesforce clicks-not-code design model! So if you're not an electronics whiz or coder, you really don't have any excuse for not getting involved! LittleBits provides over 60 snap-together components to build automated fish feeders, home security systems and practically anything else you can imagine!

The heart of the kit is a small computer module, powered by a USB cable (I plugged mine into my external phone battery pack!). It boots from an SD card and uses an onboard USB WiFi adapter to connect itself to the internet (once you've connected it to your WiFi). After that you send commands to its connected outputs via a mobile site or the set of LittleBits Cloud APIs provided. So far I have focused on sending commands to the outputs (in my case I connected the servo motor); however, as I write this I'm teaming up with Cory Cowgill, who has also started working with his kit from an inputs perspective (e.g. pressing a button on the device).

Everyone in the Salesforce MVP community was lucky enough to get one of these kits, and I wanted to make sure everyone could experience it with the cloud platform we love so much! Sadly, right now the clicks-not-code solution IFTTT (If-This-Then-That) used for controlling LittleBits devices does not fully support Salesforce (there is only a Salesforce Chatter plugin). Borrowing an approach I've been using for my Declarative Rollup Summary Tool, I set about building a declarative tool that would allow a Salesforce admin to connect updates to any standard or custom object record to a LittleBits device!

The result is the LittleBits Connector!

[Screenshot: LittleBits Trigger record]

Once you have assembled and connected your LittleBits device, go to the Settings page under LittleBits Cloud Control and take note of your Access Token and Device ID. As you can see in the screenshot above, enter these in the LittleBits Device section or in the LittleBits API custom setting.

The Trigger section needs only the Record ID of the record you want to monitor and have your device respond to when changes are made. Simply list the field API names (separated by commas) of those you want the tool to monitor. Next, fill in the LittleBits Output section with either literal values (on the left) and/or dynamic values driven by values from the record itself. This gives you quite a lot of flexibility, for example using Formula Fields to calculate the percentage.

Controlling a LittleBits cloud device is quite simple: define the duration of the output (how long to apply a voltage) and the amount of voltage as a percentage. Depending on the output module you've fitted, light or motor, the effects differ but the principle is the same. In the motor case, I set mine to Turn mode (see below); then by applying a duration of 100,000 and a percentage, the motor turns to a specific point each time, making it ideal for building pointing devices!

With the help of my wife's crafting skills, we set about a joint Christmas project to build a pointing device that would show the Probability of a given Opportunity in real time, though the tool I ended up building can effectively be used with any standard or custom object. I also wanted to use only the modules in the LittleBits Cloud Starter Kit. So with a Salesforce org and the tool, the Internet of Things is in your hands!

Here is a video of our creation in action…

If you want to have a go yourself, follow these steps…

Building your own Opportunity Probability Indicator Device #clicksnotcode

If clicks are more your thing than coding, fear not and follow these simple steps!

  1. Purchase a LittleBits Cloud Connector and follow the onscreen instructions once you have created your LittleBits account here. Complete the tutorial to confirm it's connected.
  2. Build the device module configuration as shown in the picture below. On the servo module there is a tiny switch; use the small screwdriver provided to push it to the down position, putting the servo in “turn” mode.
    [Photo: LittleBits module configuration]
  3. Next the fun bit: construct your pointing device! I'd love to see tweets of everyone's crafting skills!
    [Photo: the finished pointing device]
  4. Install the latest LittleBits Connector either via clicking the package install links or as code; see the GitHub README. The first time you go to the LittleBits Trigger tab, you may be asked to complete the post-install step to configure the Metadata API needed by the tool to deploy the Apex Triggers; complete this step as instructed on screen.
  5. Click New on the LittleBits Trigger tab and complete the LittleBits Trigger record as described above, but of course using a record Id from an Opportunity record in your org, then click Save.
  6. Click the Manage Object Trigger button to automatically deploy a small Apex Trigger to pass on updates made to the records to the LittleBits Connector engine.
  7. In Salesforce, or Salesforce1 Mobile for that matter, update your Opportunity Stage (which updates the Probability). This results in an Apex job which typically fires fairly promptly, and you should see your device respond! If you don't see anything change on your device, confirm it's working via the LittleBits Cloud Control test page, then go to the Setup menu and check the jobs are completing without error via the Apex Jobs page.

Using the LittleBits Cloud API from Apex

The above tool was built around an Apex wrapper I have started to build around the LittleBits Cloud API, which is a REST API. With a little more time and help from fellow LittleBits fan Cory, we will update it to support not only controlling devices, but also allowing them to feed back to Salesforce. In the meantime, if you want to code your own solution directly, you can install the library here.

The code is quite simple for now; you can read more about it in the README file.

// Send 80% output to the device for 10,000ms (10 seconds)
new LittleBits().getDevice().output(80, 10000);

What's next?

Well, I'm quite addicted to this new device; my Lego EV3 robot might be justified in feeling a little left out, but fear not, I'll find a way to combine them I'm sure! Next up for the LittleBits Connector is subscribing to output from the device back to Salesforce, possibly calling out to a headless Flow, to keep that clicks-not-code feel going!