Andy in the Cloud

From BBC Basic to Force.com and beyond…



Simplified API Integrations with External Services

Salesforce are on a mission to make accessing off-platform data and web services as easy as possible. This helps keep the user experience optimal and consistent, and also allows admins to continue to leverage the platform's tools, such as Process Builder and Flow, even if the data or logic is not on the platform.

Starting with External Objects, they added the ability to see and also update data stored in external databases. Once set up, users can manipulate external records without leaving Salesforce, staying within the familiar UI. With External Services, currently in Beta, they have extended this concept to external API services.

In this blog let's first focus on the clicks-not-code steps you can repeat in your own org to consume a live ASCII Art web service API I have exposed publicly. The API is simple: it takes a message and returns it in ASCII art format. The following steps result in a working UI to call the API and update a record.

After the clicks-not-code bit I will share how the API was built, what's required for compatibility with this feature, and how insanely easy it is to develop Web Services on Heroku using Node.js. So let's dive into External Services!

Building an ASCII Art Converter in Lightning Experience and Flow

The above solution was built with the following configurations / components, all of which are accessible under the LEX Setup menu (required for External Services). It takes around five minutes maximum to get up and running.

  1. Named Credential for the URL of the Web Service
  2. External Service for the URL, referencing the Named Credential
  3. Visual Flow to present a UI, call the External Service and update a record
  4. Lightning Record Page customisation to embed the Flow in the UI

I created a Custom Object called Message, but you can easily adapt the following to any object you want; you just need a Rich Text field to store the result in. The only other thing you need to know, of course, is the web service URL.

https://createasciiart.herokuapp.com
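
If you want to sanity-check the service outside Salesforce first, here is a minimal Node.js sketch (assuming Node 18+ for the built-in fetch; any HTTP client would do) that calls the same endpoint the External Service will use. The /asciiart path and the message parameter come from the service schema shown later in this post.

// Quick test of the ASCII Art service (Node 18+ for the global fetch API).
// POSTs a message to the /asciiart resource and prints the ASCII art that comes back.
const endpoint = 'https://createasciiart.herokuapp.com/asciiart';

async function asciiArt(message) {
  const response = await fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message: message })
  });
  if (!response.ok) {
    throw new Error('Service returned HTTP ' + response.status);
  }
  return response.text(); // the service replies with the ASCII art as plain text
}

asciiArt('Hello World').then(console.log).catch(console.error);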

Can I use External Services with any Web Service then?

In order to build technologies that simplify what developers normally have to interpret and code manually, Web Service APIs must be documented in a way that External Services can understand. In this Beta release that means the Interagent schema standard (created by Heroku, as it happens). Support for the more broadly adopted Swagger / OpenAPI will be added in the Winter release (Safe Harbour).

For my ASCII Art service above, I authored the Interagent schema based on a sample the Salesforce PM for this feature kindly shared (more on this later). When creating the External Service in a moment, we will provide the URL to this schema.

https://createasciiart.herokuapp.com/schema

Creating a Named Credential

From the Setup menu, search for Named Credentials and click New. This is a simple Web Service that requires no authentication, so you basically provide only the part of the above URL that points to the Web Service endpoint.

Creating the External Service

Now for the magic! Under the Setup menu (only in Lightning Experience) search for Integrations and start the wizard. It's a pretty straightforward process: select the above Named Credential, then tell it the URL for the schema. If that's not exposed by the service you want to use, you can paste a schema in directly (which lets a developer define a schema if one does not already exist).

Once you have created the External Service you can review the operations it has discovered. Salesforce uses the documentation embedded in the given schema to display a rather pleasing summary.

So what just happened? Well… internally the wizard wrote some Apex code on your behalf and applied the Invocable Method annotations to enable that Apex code to appear in tools like Process Builder (not supported in Beta) and Flow. Pretty cool!

What's more interesting, for those wondering, is that you cannot actually see this Apex code; it's there but somehow magically managed by the platform. Though I've not confirmed it, I would assume it does not require code coverage.

Update: According to the PM, in Winter'18 it will be possible to "see" the generated class from other Apex classes and thus reuse the generated code from Apex as well. Kind of like an API stub generator.

Creating a UI to call the External Service via Flow

This simple Flow prompts the user for a message to convert, calls the External Service and updates a Rich Text field on the record with the response. You will see that the Apex class generated by the External Service appears in the Flow sidebar.

The following are the key steps involved in setting up the Flow and its three elements, including creating a Flow variable for the record Id. This variable is later used when embedding the Flow in Lightning Experience in the next step.

  • RecordId used by the Flow Lightning Component
  • Assign the message service parameter
  • Assign the response to a variable
  • Update the Rich Text field

TIP: When setting the ASCII Art service response into the field, I wrapped the value in the HTML pre and code elements to ensure a monospaced font is used when the Rich Text field displays the value.

Embedding the Flow UI in Lightning Experience

Navigate to your desired object's record detail page and select Edit Page from the cog in the top right of the page to open the Lightning App Builder. Here you can drag the Flow component onto the page and configure it to call the above Flow. Make sure to map the Flow variable for the record Id, to ensure the current record is passed.

That's it, you're done! Enjoy your ASCII Art messages!

Creating your own API for use with External Services

Belinda, the PM for this feature, was also kind enough to share the sample code for the example shown at TrailheaDX, on which the service in this blog is based. However, I wanted to build my own version to do something different from the credit example, and also to extend my personal experience with Heroku and Node.js.

The Node.js code for this solution is only 41 lines long. It stands up a web server (using the very easy to use hapi library) and registers a couple of handlers. One handler returns the statically defined schema.json file; the other implements the service itself. As a side note, the joi library is an easy way to add validation to the service parameters.

var Hapi = require('hapi');
var joi = require('joi');
var figlet = require('figlet');

// initialize http listener on a default port
var server = new Hapi.Server();
server.connection({ port: process.env.PORT || 3000 });

// establish route for serving up schema.json
server.route({
  method: 'GET',
  path: '/schema',
  handler: function(request, reply) {
    reply(require('./schema'));
  }
});

// establish route for the /asciiart resource, including some light validation
server.route({
  method: 'POST',
  path: '/asciiart',
  config: {
    validate: {
      payload: {
        message: joi.string().required()
      }
    }
  },
  handler: function(request, reply) {
    // Call figlet to generate the ASCII Art and return it!
    const msg = request.payload.message;
    figlet(msg, function(err, data) {
        reply(data);
    });
  }
});

// start the server
server.start(function() {
  console.log('Server started on ' + server.info.uri);
});

I decided I wanted to explore the diversity of what's available in the Node.js space through npm. To keep things light I chose to have a bit of fun and quickly found an ASCII Art library called figlet. I soon discovered that npm had a library for pretty much every other use case I came up with!

Finally, the hand-written Interagent schema is shown below; it is reasonably short and easy to understand for this example. The standard itself is not all that well documented in layman's terms as far as I can see. See my thoughts on this and the upcoming Swagger support below.

{
  "$schema": "http://interagent.github.io/interagent-hyper-schema",
  "title": "ASCII Art Service",
  "description": "External service example from AndyInTheCloud",
  "properties": {
    "asciiart": {
      "$ref": "#/definitions/asciiart"
    }
  },
  "definitions": {
    "asciiart": {
      "title": "ASCII Art Service",
      "description": "Returns the ASCII Art for the given message.",
      "type": [ "object" ],
      "properties": {
        "message": {
          "$ref": "#/definitions/asciiart/definitions/message"
        },
        "art": {
          "$ref": "#/definitions/asciiart/definitions/art"
        }
      },
      "definitions": {
        "message": {
          "description": "The message.",
          "example": "Hello World",
          "type": [ "string" ]
        },
        "art": {
          "description": "The ASCII Art.",
          "example": "",
          "type": [ "string" ]
        }
      },
      "links": [
        {
          "title": "AsciiArt",
          "description": "Converts the given message to ASCII Art.",
          "href": "/asciiart",
          "method": "POST",
          "schema": {
            "type": [ "object" ],
            "description": "Specifies input parameters to calculate payment term",
            "properties": {
              "message": {
                "$ref": "#/definitions/asciiart/definitions/message"
              }
            },
            "required": [ "message" ]
          },
          "targetSchema": {
            "$ref": "#/definitions/asciiart/definitions/art"
          }
        }
      ]
    }
  }
}

Finally here is the package.json file that brings the whole node app together!

{
  "name": "asciiartservice",
  "version": "1.0.0",
  "main": "server.js",
  "dependencies": {
    "figlet": "^1.2.0",
    "hapi": "~8.4.0",
    "joi": "^6.1.1"
  }
}

Other Observations and Thoughts…

  • Error Handling.
    You can handle errors from the service in the usual way by using the Fault path from the element. The error shown is not all that pretty, but in fairness there is not really much of a standard to follow here.
  • Can a Web Service called this way talk back to Salesforce?
    Flow provides various system variables, one of which is the Session Id, so you could pass this as an argument to your Web Service (see the sketch after this list). Be careful though: the running user may not have Salesforce API access, and this will be a UI session and thus short lived. You may therefore want to explore another means to obtain a renewable OAuth token for more advanced uses.
  • Web Service Callbacks.
    Currently in the Beta the Flow is blocked until the Web Service returns, so it's good practice to keep your service short and sweet. Salesforce are planning async support as part of the roadmap, however.
  • Complex Parameters.
    It's unclear at this stage how complex a web service can be supported, given Flow's limitations around Invocable Methods, which this feature depends on.
  • The future is bright with Swagger support!
    I am really glad Salesforce are adding support for Swagger / OpenAPI, as I really struggled to find good examples and tutorials around Interagent. Really what is needed here is for the schema and code to be tied more closely together, like this!
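
To make the session Id idea from the second bullet above concrete, below is a minimal sketch of an extra route the Node.js service could expose, where Flow passes a sessionId and instanceUrl as parameters and the service uses them to call back into the Salesforce REST API. The route name, parameter names and the Account query are purely illustrative assumptions and are not part of the ASCII Art service; it also assumes a Node runtime with a global fetch (any HTTP client would do).

// Hypothetical extra hapi route: the caller (e.g. Flow) passes a sessionId and
// instanceUrl, and the service calls back into Salesforce via the REST API.
// The parameter names and the Account query below are illustrative assumptions only.
server.route({
  method: 'POST',
  path: '/callback-demo',
  handler: function(request, reply) {
    const sessionId = request.payload.sessionId;     // short-lived UI session Id passed from Flow
    const instanceUrl = request.payload.instanceUrl; // e.g. https://yourinstance.my.salesforce.com
    const soql = encodeURIComponent('SELECT Id, Name FROM Account LIMIT 1');
    fetch(instanceUrl + '/services/data/v41.0/query?q=' + soql, {
      headers: { 'Authorization': 'Bearer ' + sessionId }
    })
    .then(function(res) { return res.json(); })
    .then(function(json) { reply(json); })
    .catch(function(err) { reply(err); });
  }
});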

Summary

Both External Objects and External Services reflect the reality of the continued need for integration tools, and of making this process simpler and thus cheaper. Separate services and data repositories are, for now, here to stay. I'm really pleased to see Salesforce doing what it does best, making complex things easier for the masses. Or as Einstein would say… "Everything should be made as simple as possible, but no simpler."

Finally, you can read more about External Objects here and here through Agustina's and latterly Alba's excellent blogs.



Highlights from TrailheaDX 2017

This was my first TrailheaDX and what an event it was! With my Field Guide in hand I set out into the wilderness! In this blog I'll share some of my highlights, thoughts and links to the latest resources. Many of the newly announced things you can actually get your hands on now, which is amazing!

Overall the event felt well organized, if a little frantic at times. With smaller sessions of 30 minutes each (often 20 minutes after intros and questions), each was bite-sized but quite well tuned, with demos and code samples being shown.

SalesforceDX. Salesforce announced the public beta of this new technology aimed at improving the developer experience on the platform. SalesforceDX consists of several modules that will be extended further over time. Salesforce has done a great job of providing a wealth of Trailhead resources to get you started.

Einstein. Since its announcement, other developers and I have been waiting to get access to more custom tools and APIs; now that wait is well and truly over. As per my previous blogs, we have had the Einstein Vision API for a little while now. Announced at the event were several other new Einstein-related tools and APIs.

  • Einstein Discovery. Salesforce demonstrated a very slick-looking tool that allows you to point and click your way through analyzing various data sets, including those obtained from your custom objects! They have provided a Trailhead module on it here and I plan on digging in! Pricing and further info is here.
  • Einstein Sentiment API. Allows you to interpret text strings for terms that indicate whether it is a positive, neutral or negative statement / sentiment. This can be used to scan case comments, forum posts, feedback, Twitter posts etc. in an automated way and respond or be alerted accordingly to what is being said.
  • Einstein Intent API. Allows you to interpret text strings for meanings, such as instructions or requests, routing case comments or even implementing bots that can help automate or propose actions to be taken without human interpretation.
  • Einstein Object Detection API. An extension of the Einstein Vision API that allows individual items in a picture to be identified. For example, a pile of items on a coffee table, such as a mug, magazine, laptop or pot plant! Each can then be recognized and classified to build up more intel on what's in the picture for further processing and analysis.
  • PredictionIO on Heroku. Finally, if you want to go below the declarative tools or the intentionally simplified Einstein APIs, you can build your own machine learning models using Heroku and the PredictionIO buildpack!

Platform Events. These allow messages to be published and subscribed to using a new type of object known as an Event object, suffixed with __e. Once created, you can use new Apex APIs or REST APIs to send messages, and use either Apex Triggers or the Streaming API to receive them. There is also a new Process Builder action and Flow element to send messages. You can keep your messages within Force.com or use them to integrate with other cloud platforms or the browser. The possibilities are quite endless here: async processing, inter-app comms, logging, UI notifications… I'm sure other bloggers and I will be exploring them in the coming months!
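
As a rough illustration of the REST route mentioned above, the sketch below publishes a hypothetical Order_Update__e platform event from Node.js. The event object and its Status__c field are made-up examples you would define yourself, the API version is illustrative, and the access token is assumed to have been obtained separately (for example via OAuth).

// Publish a platform event via the Salesforce REST API (Node 18+ for the global fetch API).
// Order_Update__e and Status__c are hypothetical; define your own event object first.
async function publishOrderUpdate(instanceUrl, accessToken) {
  const response = await fetch(
    instanceUrl + '/services/data/v41.0/sobjects/Order_Update__e/', {
      method: 'POST',
      headers: {
        'Authorization': 'Bearer ' + accessToken,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({ Status__c: 'Shipped' })
    });
  return response.json(); // standard create response: id, success and errors
}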

External Services. If you find a cool API you want to consume in Force.com, you currently have to write some code. No longer! If you have a schema that describes that API, you can use the External Services wizard to generate some Apex code that will call out to the API. What's special about this is that the Apex code is accessible via Process Builder and Flow, making clicks-not-code API integration possible. It's an excellent way to integrate with complementary services you or others might develop on platforms such as Heroku. I have just submitted a session to Dreamforce 2017 to explore this further, fingers crossed it gets selected! You can read more about it here in the meantime.

Sadly I did have to make a key decision to focus on the above topics and not so much on Lightning. I heard from other attendees that these sessions were excellent, and I did catch a brief sight of dynamic component rendering in Lightning App Builder, very cool!

Thanks Salesforce for filling my blog backlog for the next year or so! 😉

 

 



Image Recognition with the Salesforce Einstein API and an Amazon Echo

AI services are becoming more accessible to developers than ever before. Salesforce acquired MetaMind last year and made some big announcements at Dreamforce 2016. Like many developers, I was keen to find out about its API. The answer at the time was "check back with us next year!".

With Spring'17 that question has been answered, at least as far as image recognition is concerned, with the availability of the Salesforce Einstein Predictive Vision Service (Pilot). The pilot is open to the public and is free to sign up for.

True AI consists of recognition, be that visual or spoken, performing actions, and the final and most critical piece, learning. This blog explores the spoken and visual recognition pieces further, with the added help of Flow for performing practically any action you can envision!

You may recall a blog from last year about integrating Salesforce with Amazon Echo. To explore the new Einstein API, I decided to leverage that work further in order to trigger recognition of my pictures from Alexa. The use of Salesforce Flow also enabled easy extensibility via custom Apex Actions, and thus the Einstein Apex Action was born! After a small bit of code and some configuration, I had a working voice-activated image recognition demo up and running.

The following steps break down what just happened in the video above, followed by a deeper walkthrough of the Predictive Vision Service and how to call it.

  1. Using the Salesforce1 Mobile app, I uploaded an image using the Files feature.
  2. Salesforce stores this in the ContentVersion object for later querying (step 6).
  3. Using the Alexa skill, called Einstein, I was able to "Ask Einstein about my photo".
  4. This Node.js skill runs on Amazon and simply routes requests to Salesforce Flow.
  5. Spoken terms are passed through to a named Flow via the Flow API.
  6. The Flow is simple in this case; it queries ContentVersion for the latest upload.
  7. The Flow then calls the Einstein Apex Action, which in turn calls the Einstein REST API via Apex (more on this later). Finally, a Flow assignment takes the resulting prediction of what the image is actually of and uses it to build a spoken response.

Standard Example: The above example exposes the Einstein API in an Apex Action purely to integrate with the Amazon Echo use case. The pilot documentation walks you through a standalone Apex and Visualforce example to get you started.

How does the Einstein Predictive Vision Service API work?

The service introduces a few new terms to get your head around. Firstly, a dataset is a named container for the types of images (labels) you want to recognise. The demo above uses a predefined dataset and model. A model is the output from the process of taking examples of each of your dataset's labels and processing them (training). Initiating this process is pretty easy: you just make a REST API call with your dataset ID. All the recognition magic is behind the scenes; you just poll for when it's done. All you have to do then is test the model with other images. The service returns ranked predictions (using the dataset's labels) on what it thinks your picture is of. When I ran the pictures above of my family dogs for the first time, I was pretty impressed that it detected the breeds.

While quite fiddly at times, it is well worth walking through how to set up your own image datasets and training to get a hands-on example of the above.

How do I call the Einstein API from Apex?

Salesforce saved me the trouble of wrapping the REST API in Apex and have started an Apex wrapper here in this GitHub repo. When you sign up you get a private key file which you have to upload into Salesforce to authenticate the calls. Currently the private key file the pilot gives you seems to be scoped by your org user's associated email address.

public with sharing class EinsteinAction {

    public class Prediction {
        @InvocableVariable
        public String label;
        @InvocableVariable
        public Double probability;
    }

    @InvocableMethod(label='Classify the given files' description='Calls the Einsten API to classify the given ContentVersion files.')
    public static List<EinsteinAction.Prediction> classifyFiles(List<ID> contentVersionIds) {
        String access_token = new VisionController().getAccessToken();
        ContentVersion content = [SELECT Title,VersionData FROM ContentVersion where Id in :contentVersionIds LIMIT 1];
        List<EinsteinAction.Prediction> predictions = new List<EinsteinAction.Prediction>();
        for(Vision.Prediction vp : Vision.predictBlob(content.VersionData, access_token, 'GeneralImageClassifier')) {
            EinsteinAction.Prediction p = new EinsteinAction.Prediction();
            p.label = vp.label;
            p.probability = vp.probability;
            predictions.add(p);
            break; // Just take the most probable
        }
        return predictions;
    }
}

NOTE: The above method is only handling the first file passed in the parameter list, the minimum needed for this demo. To bulkify you can remove the limit in the SOQL and ideally put the file ID back in the response. It might also be useful to expose the other predictions and not just the first one.

The VisionController and Vision Apex classes from the GitHub repo are used in the above code. It looks like the repo is still very much a work in progress, so I would expect the API to change a bit. The classes also assume that you have followed the standalone example tutorial here.

Summary

This initial API has made it pretty easy to access a key part of AI with what is essentially only a handful of simple REST API calls. I’m looking forward to seeing where this goes and where Salesforce goes next with future AI services.



Building an Amazon Echo Skill with the Flow API

The Amazon Echo device sits in your living room or office and listens to your verbal instructions, much like Siri. It performs various activities, such as fetching and relaying information and/or performing actions on your behalf. It also serves as a large Bluetooth speaker. Now, after a run in the US, it has finally been released in the UK!

Why am I writing about it here? Well, it has an API of course! So let's roll up our sleeves with an example I built recently with my FinancialForce colleague and partner in crime for all things gadget and platform, Kevin Roberts.

Kevin reached out to me when he noticed that Amazon had built this device with a means to teach it to respond to new phrases. Developers can extend its phrases by creating new Skills. You can read and hear more about the results over on the FinancialForce blog site.

The sample code and instructions to reproduce this demo yourself are here. Also, don't worry if you do not have an Amazon Echo; you can test by speaking into your computer using EchoSim.io.

Custom Skill Architecture

To create a Skill you need to be a developer, capable of implementing a REST API endpoint that Amazon calls out to when the Echo recognizes a phrase you have trained it with. You can do this in practically any programming language you like of course, providing you comply with the documented JSON definition and host it securely.

One thing that simplifies the process is hosting your skill code through the Amazon Lambda service. Lambda supports Java, Python and Node.js, as well as setting up the security stack for you, leaving you to just provide the code! You can even type your code in directly to the developer console provided by Amazon.

Training your Skill

You cannot just say anything to Amazon Echo and expect it to understand; it's clever, but not that clever (yet!). Every Skill developer has to provide a set of phrases / sample utterances. From these, Amazon does some clever stuff behind the scenes to compile them into a form its speech recognition algorithms can match a user's spoken words to.

You are advised to provide as many utterances as you can, up to 50,000 of them in fact, to cover the many varied ways in which we can say things differently but mean the same thing. The sample utterances must all start with an identifier, known as the Intent. You can see various sample utterances for the CreateLead and GetLatestLeads intents below.

CreateLead Lets create a new Lead
CreateLead Create me a new lead
CreateLead New lead
CreateLead Help me create a lead
GetLatestLeads Latest top leads?
GetLatestLeads What are our top leads?

Skills have names, which users can search for in the Skills Marketplace, much like an App on your phone. For a Skill called "Lead Helper", users would speak the following phrases to invoke any of its intents.

  • “Lead Helper, Create me a new lead”
  • “Lead Helper, Lets create a new lead”
  • “Lead Helper, Help me create a lead”
  • “Lead Helper, What are our top leads?”

Your sample utterances can also include parameters / slots.

DueTasks What tasks are due for {Date}?
DueTasks Any tasks that are due for {Date}?

Slots are essentially parameters to your Intents; Amazon supports various slot types. The date slot type is quite flexible in terms of how it handles relative dates.

  • “Task Helper, What tasks are due next thursday?”
  • “Task Helper, Any tasks that are due for today?”

Along with your sample utterances you need to provide an intent schema; this lists the names of your intents (as referenced in your sample utterances) and the slot names and types. Further information can be found in Defining the Voice Interface.

{
  "intents": [
    {
      "intent": "DueTasks",
      "slots": [
        {
          "name": "Date",
          "type": "AMAZON.DATE"
        }
      ]
    }
  ]
}

Mapping Skill Intents and Slots to Flows and Variables

As I mentioned above, Skill developers implement a REST API endpoint. Instead of receiving the spoken words as raw text, it receives the Intent name and name/value pairs for the Slots. The handler can then invoke the appropriate database query or action and generate a response (as a string) to speak back to the user.

To map this to Salesforce Flows, we can consider the Intent name as the Flow Name and the Slot name/values as Flow Input Parameters. Flow Output Parameters can be used to generate the spoken response to the user. For the example above you would define a Flow called DueTasks with the following named input and output Flow parameters.

  • Flow Name: DueTasks
  • Flow Input Parameter Name:  Alexa_Slot_Date
  • Flow Output Parameter Name:  Alexa_Tell

You can then basically use the Flow Assignment element to adjust the variable values, as well as other elements to query and update records accordingly. By assigning an output variable named Alexa_Tell before your Flow ends, you end the conversation with a single response contained within that text variable.
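
To make the mapping concrete, here is a rough sketch of what the Skill ends up sending to the Flow REST API for the DueTasks example above. The API version and the resolved date value are illustrative; the Alexa_Slot_Date input and Alexa_Tell output match the Flow parameters listed above, and the same pattern appears in the nforce-based skill code later in this post.

// Sketch: invoking the DueTasks Flow via the Actions REST API (Node 18+ for the global fetch API).
// The Date slot value Alexa resolves (e.g. '2017-06-29') is passed as Alexa_Slot_Date and
// the spoken response is read back from the Alexa_Tell output variable.
async function runDueTasksFlow(instanceUrl, accessToken, dateValue) {
  const response = await fetch(
    instanceUrl + '/services/data/v41.0/actions/custom/flow/DueTasks', {
      method: 'POST',
      headers: {
        'Authorization': 'Bearer ' + accessToken,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({ inputs: [ { Alexa_Slot_Date: dateValue } ] })
    });
  const results = await response.json();
  return results[0].outputValues['Alexa_Tell']; // the text the Echo should speak back
}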

For another example see the Echo sample here; this one simply repeats ("echoes") the name given by the user when they speak a phrase with their name in it.

The sample utterances and intent schema are shown below. These utterances also use a literal slot type, which is a kind of picklist with variable possibilities. That means Andrew, Sarah, Kevin and Bob are just sample values; users can use other words in the Name slot, and it is up to the developer to validate them if that is important.

Echo My name is {Andrew|Name}
Echo My name is {Sarah|Name}
Echo My name is {Kevin|Name}
Echo My name is {Bob|Name}

{
  "intents": [
    {
      "intent": "Echo",
      "slots": [
        {
          "name": "Name",
          "type": "LITERAL"
        }
      ]
    }
  ]
}

Alternatively, if you create and assign the Alexa_Ask variable in your Flow, this starts a conversation with your user. In this case any Input/Output Flow Parameters are retained between Flow calls. Finally, if you suffix any slot name with Number (for example, a slot named AmountNumber would become Alexa_Slot_AmountNumber), this ensures that the value gets converted correctly to pass to a Flow Variable of type Number.

The design for managing conversations with Flow Input/Output variables was inspired by an excellent article on defining conversations in Alexa Skills here.

The following phrases are for the Conversation Flow included in the samples repository.

Conversation About favourite things
Conversation My favourite color is {Red|Color}
Conversation My favourite color is {Green|Color}
Conversation My favourite color is {Blue|Color}
Conversation My favourite number is {Number}

NodeJS Custom Skill

To code my Skill I went with Node.js, as I had not done a lot of coding in it and wanted to challenge myself. The other challenge I set myself was to integrate with Salesforce in a generic and extensible way. Thus I wanted to incorporate my old friend Flow!

With its numerous elements for conditional logic, reading and updating the database, Flow is the perfect solution for integrating with Salesforce in the only way we know how on the Salesforce platform, with clicks not code! Now of course Amazon does not talk Flow natively, so we need some glue!

Amazon provides Node.js developers with a useful base class to get things going. In Node.js this is imported with the require function (interesting "how it works" article). In my case I also leveraged the most excellent nforce library from Kevin O'Hara.

var AlexaSkill = require('./AlexaSkill');
var nforce = require('nforce');

/**
* SalesforceFlowSkill is a child of AlexaSkill.
* To read more about inheritance in JavaScript, see the link below.
*
* @see https://developer.mozilla.org/en-US/docs/Web/JavaScript/Introduction_to_Object-Oriented_JavaScript#Inheritance
*/
var SalesforceFlowSkill = function () {
    AlexaSkill.call(this, APP_ID); // APP_ID is your skill's Application Id, defined elsewhere in the sample
};

The AlexaSkill base class exposes four methods you can override: onSessionStarted, onLaunch, onSessionEnded and onIntent. As you can see from the method names, requests to your skill code can be scoped in a session. This allows you to manage conversations users can have with the device, asking questions and gathering answers within the session that build up to perform a specific action.

I implemented the onIntent method to call the Flow API.

SalesforceFlowSkill.prototype.eventHandlers.onIntent =
   function (intentRequest, session, response) {
       // Handle the spoken intent from the user
       // ...
   }

Calling the Salesforce Flow API from NodeJS

Within the onIntent method I used the nforce library to perform OAuth username and password authentication for simplicity, though Alexa Skills do support the OAuth web flow by linking accounts. The following code performs the authentication with Salesforce.

SalesforceFlowSkill.prototype.eventHandlers.onIntent =
    function (intentRequest, session, response) {
        // Configure a connection
        var org = nforce.createConnection({
            clientId: 'yourclientid',
            clientSecret: 'yoursecret',
            redirectUri: 'http://localhost:3000/oauth/_callback',
            mode: 'single'
        });
        // Authenticate, then call the Flow API (continued in the next snippet)
        org.authenticate({ username: USER_NAME, password: PASSWORD }).
            then(function() {
                // ... build parameters and call the Flow, as shown below
            });
    };

The following code calls the Flow API, again via nforce. It maps the slot name/values to parameters and returns any Flow output variables back in the response. A session will be kept open when the response.ask method is called; in this case any Input/Output Flow Parameters are retained in the Session and passed back into the Flow again.

// Build Flow input parameters (intentRequest is passed into onIntent above)
var intent = intentRequest.intent;
var intentName = intent.name;
var params = {};
// From Session...
for(var sessionAttr in session.attributes) {
    params[sessionAttr] = session.attributes[sessionAttr];
}
// From Slots...
for(var slot in intent.slots) {
    if(intent.slots[slot].value != null) {
        if(slot.endsWith('Number')) {
            params['Alexa_Slot_' + slot] = Number(intent.slots[slot].value);
        } else {
            params['Alexa_Slot_' + slot] = intent.slots[slot].value;
        }
    }
}
// Call the Flow API
var opts = org._getOpts(null, null);
opts.resource = '/actions/custom/flow/'+intentName;
opts.method = 'POST';
var flowRunBody = {};
flowRunBody.inputs  = [];
flowRunBody.inputs[0] = params;
opts.body = JSON.stringify(flowRunBody);
org._apiRequest(opts).then(function(resp) {
    // Ask or Tell?
    var ask = resp[0].outputValues['Alexa_Ask'];
    var tell = resp[0].outputValues['Alexa_Tell'];
    if(tell!=null) {
        // Tell the user something (closes the session)
        response.tell(tell);
    } else if (ask!=null) {
       // Store output variables in Session
       for(var outputVarName in resp[0].outputValues) {
           if(outputVarName == 'Alexa_Ask')
               continue;
           if(outputVarName == 'Alexa_Tell')
               continue;
           if(outputVarName == 'Flow__InterviewStatus')
               continue;
            session.attributes[outputVarName] =
               resp[0].outputValues[outputVarName];
       }
       // Ask another question (keeps session open)
       response.ask(ask, ask);
    }
});

Summary

I had a lot of fun putting this together, even more so seeing what Kevin did with it with his Flow skills (pun intended). If you have someone like Kevin in your company or want to have a go yourself, you can follow the setup and configuration instructions here.

I would also like to call out that past Salesforce MVP, now Trailhead Developer Advocate, Jeff Douglas started the ball rolling with his Salesforce CRM examples, which are also worth checking out if you prefer to build something more explicitly in Node.js.

 



Flow in Winter’17 Lightning Experience

A short blog charting an evening with Flow and Lightning Experience in Winter'17.

Flow on Record Detail Pages

Flow makes its presence known in Lightning App Builder this release (in Beta), and with it come some new possibilities for customising the Lightning Experience user experience, as well as Salesforce1 Mobile. I decided to focus on the Record Detail Pages, as I wanted to see how it passes the recordId. As you can also see, Flow gets an automatic facelift in this context, making it look properly at home!

This is the Screen element showing the passed information from Lightning App Builder…

By creating an Input variable in your Flow called recordId of type Text (see docs), Lightning App Builder will automatically pass in the record Id. You can also expose other input parameters, e.g. CustomMessage, so long as they are Input or Input/Output.

These will display in the properties pane in Lightning App Builder. Sadly you cannot bind other field values, but this does give some nice options for making the same Flow configurable for different uses on different pages!

Flow Custom Buttons with Selection List Views

Winter'17 brings with it the ability to select records in List Views. As with the Salesforce Classic UI, it will show checkboxes next to records in the List View if a Custom Button that requires multi-selection has been added to the List View layout.

In my past blog, Visual Flow with List View and Related List Buttons, written prior to Winter'17, I was not able to replicate the very useful ability to pass user-selected records to a Flow in Lightning Experience. I am now pleased to report that this works!

This results in the flow from my previous blog showing the selected records. As you can see, sadly, because we are using a Visualforce page, the lovely new Flow styling we see when using the Flow (Beta) support in Lightning App Builder does not apply. But hey, being able to select the records is a good step forward for now! The setup of the Visualforce page and Custom Button is identical to that in my previous blog.

Summary

Flow continues to get a good level of love and investment in Salesforce releases, which pleases me a lot. It's a great tool; the only downside is that with more features comes more complexity, and thus a greater need to stay on top of its capabilities. A nice problem to have!

 

 

 



Introducing the Flow Factory

Flow is a great technology for providing a means for non-coders to build functionality, more so than any other point-and-click facility on the platform, even Process Builder. Why? Because it offers a rich set of Elements (operations) that provide conditional branching, looping and storage of variables, along with the ability to read or update any (API accessible) object you like. It's almost like a programming language…

Ironically, like Apex, it is missing one increasingly asked-for feature: being able to call another Flow that is not known at the time you are writing your calling code, such as one configured via the amazing Custom Metadata. Basically, a kind of Apex reflection for Flow. Often the workaround for this type of problem is to use a factory pattern.

As I highlighted in this prior blog on calling Flow from Apex, the platform does not yet provide this capability (to change this, please up-vote this idea). Well, at least not in Apex, as there exists a REST API for this. Meanwhile, back in the land of Apex, it occurred to me when building the LittleBits Connector last year that, as a workaround, I could generate a factory class to get around this.

I have created a Flow Toolbelt library (GitHub repo here) and package (if you want to install it that way) which takes last year's solution and lifts it into its own smaller package. The Flow Factory tab discovers the Flows configured in your org and generates the required factory Apex class. If you add or remove flows you need to repeat the process.

Once this has been deployed you can use code like the following, passing in the name of your Flow. Note this is a WIP version of the library and needs more error handling, so be sure to pass in a valid Flow name and at least an empty params Map.

Flow.Interview flow =
  flowtb.FlowFactory.newInstance('TestA', new Map<String, Object>());
flow.start();
System.debug(flow.getVariableValue('Var'));

I think this concept can be extended to allow Flow to run from other Apex entry points, such as the recently added Sandbox Apex callback, allowing you to run Flow when your Sandbox spins up. Let me know your thoughts on whether this is something useful or not.