Andy in the Cloud

From BBC Basic to Force.com and beyond…



The Third Edition

I’m proud to announce the third edition of my book has now been released. Back in March this year I took the plunge to start updating many key areas and add two brand new chapters. In the 2 years and 8 months since the last edition there have been several platform releases and an increasing number of new features and innovations that made this the biggest update ever! This edition also embraces the platform’s rebranding to Lightning, hence the book is now entitled Salesforce Lightning Platform Enterprise Architecture.

You can purchase this book direct from Packt or of course from Amazon among other sellers. As is the case every year, at Salesforce events such as Dreamforce and TrailheaDX this book and many other awesome publications will be on sale. Here are some of the key update highlights:

  • Automation and Tooling Updates
    Throughout the book the SFDX CLI, Visual Studio Code and 2nd Generation Packaging are leveraged. While the whole book is certainly larger, certain chapters actually reduced in size as steps previously describing clicks were replaced with CLI commands! At one point in time I was quite a master of Ant scripts and macros; they too have given way to built-in SFDX commands.
  • User Interface Updates
    Lightning Web Components is a relatively new kid on the block, but benefits greatly from its standards compliance, meaning there is plenty of fun to go around exploring industry tools like Jest in the Unit Testing chapter. All of the book’s components have been re-written to the Web Component standard.
  • Big Data and Async Programming
    Big data was once a future concern for new products; these days it is very much a concern from the very start. The book covers Big Objects and Platform Events more extensively with worked examples, including ingest and calculations driven by Platform Events and Async Apex Triggers. Event Driven Architecture is something every Lightning developer should be embracing as the platform continues to evolve around more and more standard platform features that leverage it.
  • Integration and Extensibility
    I particularly enjoyed exploring the use of Platform Events as another means by which you can expose APIs from your packages to support more scalable invocation of your logic and asynchronous plugins.
  • External Integrations and AI
    External integrations with other cloud services are a key part of application development and also the implementation of your solution, thus one of the two brand new chapters focuses on Connected Apps, Named Credentials, External Services and External Objects, with worked examples of existing services or sample Heroku-based services. Einstein has an ever-growing surface area across Salesforce products and the platform. While this topic alone is worth an entire book, I took the time in the second new chapter to enumerate Einstein from the perspective of developer and customer configurations. The Formula1 motor racing theme continues with the ingest of historic race data that you can run AI over.
  • Other Updates
    Among other updates is a fairly extensive update to the CI/CD chapter, which still covers Jenkins but now leverages the new Jenkins Pipeline feature to integrate the SFDX CLI. The Unit Testing chapter has also been extended with further thoughts on unit vs integration testing and a focus on Lightning Web Component testing.

The above is just a set of highlights for this third edition; you can see the full table of contents here. A massive thanks to everyone involved for providing the inspiration and support for making this third edition happen! Enjoy!



Salesforce DX Integration Strategies

This blog will cover three ways by which you can interact programmatically with Salesforce DX. DX provides a number of time-saving utilities and commands; sometimes though you want to either combine those together or write your own that fit better with your way of working. Fortunately, DX is very open and in fact goes beyond just interacting with the CLI.

If you are familiar with DX you will likely already be writing or have used shell scripts around the CLI; those scripts are code, and the CLI commands and their outputs (especially in JSON mode) are the API in this case. The goal of this blog is to highlight this approach further, along with other programming options via the REST API or Node.js.

Broadly speaking DX is composed of layers, from client-side services to those at the backend. Each of these layers is actually supported and available for you as a developer to consume as well. The diagram here shows these layers, and the following sections highlight some examples and further use cases for each.

DX CLI

Programming via shell scripts is very common and there is a huge wealth of content and help on the internet regardless of your platform. You can perform conditional operations, use variables and even perform loops. The one downside is that they are platform specific. So if supporting users on multiple platforms is important to you, and you have skills in other more platform-neutral languages, you may want to consider automating the CLI that way.

Regardless of how you invoke the CLI, parsing human-readable text from CLI commands is not a great experience and leads to fragility (as it can, and should be allowed to, change between releases). Thus all Salesforce DX commands support the --json parameter. First, let’s consider the default output of the following command.

sfdx force:org:display
=== Org Description
KEY              VALUE
───────────────  ──────────────────────────────────────────────────────────────────────
Access Token     00DR00.....O1012
Alias            demo
Client Id        SalesforceDevelopmentExperience
Created By       admin@sf-fx.org
Created Date     2019-02-09T23:38:10.000+0000
Dev Hub Id       admin@sf-fx.org
Edition          Developer
Expiration Date  2019-02-16
Id               00DR000000093TsMAI
Instance Url     https://customization-java-9422-dev-ed....salesforce.com/
Org Name         afawcett Company
Status           Active
Username         test....a@example.com

Now let’s contrast the output of this command with the --json parameter.

sfdx force:org:display --json
{"status":0,"result":{"username":"test...a@example.com","devHubId":"admin@sf-fx.org","id":"00DR000000093TsMAI","createdBy":"admin@sf-fx.org","createdDate":"2019-02-09T23:38:10.000+0000","expirationDate":"2019-02-16","status":"Active","edition":"Developer","orgName":"afawcett Company","accessToken":"00DR000...yijdqPlO1012","instanceUrl":"https://customization-java-9422-dev-ed.mobile02.blitz.salesforce.com/","clientId":"SalesforceDevelopmentExperience","alias":"demo"}}}

If you are using a programming language with support for interpreting JSON you can now start to parse the response to obtain the information you need. However, if you are using shell scripts you need a little extra assistance. Thankfully there is an awesome open source utility called jq to the rescue. Simply piping the JSON output through the jq command allows you to get a better look at things…

sfdx force:org:display --json | jq
{
  "status": 0,
  "result": {
    "username": "test-hm83yjxhunoa@example.com",
    "devHubId": "admin@sf-fx.org",
    "id": "00DR000000093TsMAI",
    "createdBy": "admin@sf-fx.org",
    "createdDate": "2019-02-09T23:38:10.000+0000",
    "expirationDate": "2019-02-16",
    "status": "Active",
    "edition": "Developer",
    "orgName": "afawcett Company",
    "accessToken": "00DR000....O1012",
    "instanceUrl": "https://customization-java-9422-dev-ed.....salesforce.com/",
    "clientId": "SalesforceDevelopmentExperience",
    "alias": "demo"
  }
}

You can then get a bit more specific in terms of the information you want.

sfdx force:org:display --json | jq .result.id -r
00DR000000093TsMAI

You can combine this into a shell script to set variables as follows.

ORG_INFO=$(sfdx force:org:display --json)
ORG_ID=$(echo $ORG_INFO | jq .result.id -r)
ORG_DOMAIN=$(echo $ORG_INFO | jq .result.instanceUrl -r)
ORG_SESSION=$(echo $ORG_INFO | jq .result.accessToken -r)
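If you prefer the platform-neutral route mentioned earlier, the same idea translates directly to Node.js. The following is just a minimal sketch assuming the sfdx CLI is installed and an org is already authorized; the small sfdx helper function here is my own, not part of DX.

const { execSync } = require('child_process');

// Run an sfdx command with --json and return the parsed "result" property
function sfdx(command) {
    const output = execSync(`sfdx ${command} --json`, { encoding: 'utf8' });
    return JSON.parse(output).result;
}

const org = sfdx('force:org:display');
console.log(org.id, org.instanceUrl, org.accessToken);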

All the DX commands support JSON output, including the query commands…

sfdx force:data:soql:query -q "select Name from Account" --json | jq .result.records[0].Name -r
GenePoint

The Sample Script for Installing Packages with Dependencies has a great example of using JSON output from the query commands to auto-discover package dependencies. This approach can, however, be adapted to any object; it also shows another useful technique of combining Python within a shell script.

DX Core Library and DX Plugins

This is a Node.js library that contains core DX functionality such as authentication, org management, project management and the ability to invoke REST APIs against scratch orgs via JSForce. This library is most commonly used when you are authoring a DX plugin; however, it can also be used standalone if you have an existing Node.js-based tool or CLI library you want to embed DX in.
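As a purely illustrative sketch of standalone usage (class names and create signatures have varied between versions of the library, so treat this as a guide rather than a reference), connecting to an already-authorized org and running a query might look something like this.

// Hypothetical standalone use of the DX Core library (assumes the @salesforce/core package);
// the username must already have been authorized via the CLI.
const { AuthInfo, Connection } = require('@salesforce/core');

async function queryAccounts(username) {
    const authInfo = await AuthInfo.create({ username });
    const connection = await Connection.create({ authInfo });
    const result = await connection.query('SELECT Name FROM Account LIMIT 5');
    result.records.forEach(record => console.log(record.Name));
}

queryAccounts('test@example.com').catch(console.error);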

The samples folder here contains some great examples. This example shows how to use the library to access the alias information and provide a means for the user to edit the alias names.

  // Enter a new alias
  const { newAlias } = await inquirer.prompt([
    { name: 'newAlias', message: 'Enter a new alias (empty to remove):' }
  ]);

  if (alias !== 'N/A') {
    // Remove the old one
    aliases.unset(alias);
    console.log(`Unset alias ${chalk.red(alias)}`);
  }

  if (newAlias) {
    aliases.set(newAlias, username);
    console.log(
      `Set alias ${chalk.green(newAlias)} to username ${chalk.green(username)}`
    );
  }

Tooling API Objects

Finally, there is a host of Tooling API objects that support the above features and some extra features besides. These are fully documented and accessible via the Salesforce Tooling API for use in your own plugins or applications capable of making REST API calls. Keep in mind you can do more than just query these objects; some also represent processes, meaning when you insert into them they do stuff! Here is a brief summary of the most interesting objects.

  • PackageUploadRequest, MetadataPackage, MetadataPackageVersion represent objects you can use as a developer to automate the uploading of first generation packages.
  • Package2, Package2Version, Package2VersionCreateRequest and Package2VersionCreateRequestError represent objects you can use as a developer to automate the uploading of second generation packages.
  • PackageInstallRequest, SubscriberPackage, SubscriberPackageVersion and Package2Member (second generation only) represent objects that allow you to automate the installation of a package and also allow you to discover the contents of packages installed within an org.
  • SandboxProcess and SandboxInfo represent objects that allow you to automate the creation and refresh of Sandboxes, as well as query for existing ones. For more information see the summary at the bottom of this help topic.
  • SourceMember represents changes you make when using the Setup menu within a scratch org. It is used by the push and pull commands to track changes. The documentation claims you can create and update records in this object; however, I would recommend that you only use it for informational purposes. For example, you could write your own poller tool to drive code generation based on custom object changes.

IMPORTANT NOTE: Be sure to consider what CLI commands exist to accomplish your needs. As you’ve read above, it’s easy to automate those commands and they manage a lot of the complexity of interacting with these objects directly. This is especially true for the packaging objects.
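To give a feel for calling these objects directly, here is a minimal Node.js sketch that queries SourceMember via the Tooling API. It assumes the jsforce library and reuses the instance URL and access token captured from force:org:display earlier; adjust the field list to suit your needs.

// Minimal sketch: query the SourceMember Tooling API object via jsforce (assumed library).
// ORG_DOMAIN and ORG_SESSION could be the values captured from 'sfdx force:org:display --json'.
const jsforce = require('jsforce');

const conn = new jsforce.Connection({
    instanceUrl: process.env.ORG_DOMAIN,
    accessToken: process.env.ORG_SESSION
});

conn.tooling.query('SELECT MemberType, MemberName FROM SourceMember')
    .then(result => result.records.forEach(r => console.log(r.MemberType, r.MemberName)))
    .catch(err => console.error(err));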

Summary

The above options represent a rich set of abilities to integrate and extend DX. Keep in mind the deeper you go the more flexibility you get, but you are also taking on more complexity. So choose wisely and/or use a mix of approaches. Finally, worthy of mention is the future of the SFDX CLI and Oclif. Salesforce is busy updating the internals of the DX CLI to use this library and, once complete, it will open up cool new possibilities such as CLI hooks, which will allow you to extend the existing commands.



Adding Clicks not Code Extensibility to your Apex with Lightning Flow

Building solutions on the Lightning Platform is a highly collaborative process, due to its unique ability to allow Trailblazers in a team to operate in no code, low code and/or code environments. Lightning Flow is a Salesforce native tool for no code automation and Apex is the native programming language of the platform — the code!

A flow author is able to create no-code solutions using the Cloud Flow Designer tool that can query and manipulate records, post to Chatter, manage approvals, and even make external callouts. Conversely, using Salesforce DX, the Apex developer can of course do all these things and more! This blog post presents a way in which two Trailblazers (meaning a flow author and an Apex developer) can consider options that allow them to share the work of both building and maintaining a solution.

Often a flow is considered the start of a process — typically and traditionally a UI wizard or, more latterly, something that is triggered when a record is updated (via Process Builder). We also know that, via invocable methods, flows and processes can call Apex. What you might not know is that the reverse is also true! Even though you have decided to build a process via Apex, you can still leverage flows within that Apex code. Such flows are known as autolaunched flows, as they have no UI.


I am honored to have this blog hosted on the Salesforce Blog site. To continue reading the rest of this blog head on over to the Salesforce.com blog post here.

 



Swagger / Open API + Salesforce = LIKE

In my previous blog I covered an exciting new integration tool from Salesforce, which consumes APIs that have a descriptor (or schema) associated with them. External Services allows point-and-click integration with APIs. The ability for Salesforce to consume APIs complying with API schema standards is a pretty huge step forward, extending its ability to integrate with ease in a way that is in keeping with its low-barrier-to-entry development and clicks-not-code mantra.


At the time of writing my previous blog, only the Interagent schema was supported by External Services. However, as of the Winter ’18 release this is no longer the case. In this blog I will explore the more widely adopted Swagger / Open API 2.0 standard, using Node.js, Heroku and External Services. As a bonus topic, I will also touch on using the Swagger Code Generator with Apex!

One of the many benefits of supporting the Swagger / Open API standard is the ability to generate documentation for it. The following screenshot shows the API schema on the left and the generated documentation on the right. What is also very cool about this is the Try this operation button. Give it a try for yourself now!

[Screenshot: API schema (left) and generated documentation (right)]


What’s the difference between Swagger and Open API 2.0? This was a question I asked myself and thought I would cover the answer here. Basically, as of Swagger v2.0, there is no difference; the Open API Initiative is a rebranding, born out of the huge adoption Swagger has seen since its creation. This move means its future is more formalised and it has a more meaningful name. You can read more about this amazing story here.

Choosing your methodology for API development

The schema shown above might look a bit scary and you might well want to just get writing code and think about the schema when you’re ready to share your API. This is certainly supported, and there are some tools that support generation of the schema via JSDoc comments in your code or via your joi schema here (useful for existing APIs).

However, to really embrace an API-first strategy in your development team, I feel you should start with the requirements and thus the schema first. This allows others in your team, or the intended recipients, to review the API before it has been developed and even test it out with stub implementations. In my research I was thus drawn to Swagger Node, a tool set donated by Apigee that embraces API-design-first. Read more pros and cons here. It is also the formal Node.js implementation associated with Swagger.

The following describes the development process of API-design-first.

[Diagram: the API-design-first development process]

(ref: Swagger Node README)

Developing Open APIs with “Swagger Node”

Swagger Node is very easy to get started with and is well documented here. It supports the full API-design-first development process shown in the diagram above. The editor (also shown above) is really useful for getting used to writing schemas, and the UI is dynamically refreshed, including errors.

The overall Node.js project is still pretty simple (GitHub repo here), now consisting of three files. The schema is edited in YAML file format (translated to JSON when served up to tools). The schema for the ASCIIArt service now looks like the following and is pretty self-describing. For further documentation on Swagger / Open API 2.0 see here.

https://createasciiart.herokuapp.com/schema/
swagger: "2.0"
info:
  version: "1.0.0"
  title: AsciiArt Service
# during dev, should point to your local machine
host: localhost:3000
# basePath prefixes all resource paths 
basePath: /
# 
schemes:
  # tip: remove http to make production-grade
  - http
  - https
# format of bodies a client can send (Content-Type)
consumes:
  - application/json
# format of the responses to the client (Accepts)
produces:
  - application/json
paths:
  /asciiart:
    # binds a127 app logic to a route
    x-swagger-router-controller: asciiart
    post:
      description: Returns ASCIIArt to the caller
      # used as the method name of the controller
      operationId: asciiart
      consumes:
        - application/json
      parameters:
        - in: body
          name: body
          description: Message to convert to ASCIIArt
          schema:
            type: object
            required: 
              - message
            properties:
              message:
                type: string
      responses:
        "200":
          description: Success
          schema:
            # a pointer to a definition
            $ref: "#/definitions/ASCIIArtResponse"
  /schema:
    x-swagger-pipe: swagger_raw
# complex objects have schema definitions
definitions:
  ASCIIArtResponse:
    required:
      - art
    properties:
      art:
        type: string

The entry point of the Node.js app, the server.js file now looks like this…

'use strict';

var SwaggerExpress = require('swagger-express-mw');
var app = require('express')();
module.exports = app; // for testing
var config = {
  appRoot: __dirname // required config
};

SwaggerExpress.create(config, function(err, swaggerExpress) {
  if (err) { throw err; }
  // install middleware for swagger ui
  app.use(swaggerExpress.runner.swaggerTools.swaggerUi());
  // install middleware for swagger routing
  swaggerExpress.register(app);
  var port = process.env.PORT || 3000;
  app.listen(port);
});

Note: I changed the Node.js web server framework from hapi (used in my previous blog) to express, as I could not get the Swagger UI to integrate with hapi.

The code implementing the API has been moved to its own asciiart.js file.

var figlet = require('figlet');

function asciiart(request, response) {
    // Call figlet to generate the ASCII Art and return it!
    const msg = request.body.message;
    figlet(msg, function(err, data) {
        response.json({ art: data});
    });
}

module.exports = {
    asciiart: asciiart
};

Note: There is no parameter validation code written here; the Swagger Node module dynamically implements parameter validation for you (based on what you define in the schema) before the request reaches your code! It also validates your responses.

To access the documentation simply use the path /docs. The documentation is generated automatically; there is no need to manage static HTML files. I have hosted my sample AsciiArt service on Heroku so you can try it by clicking the link below.

https://createasciiart.herokuapp.com/docs/

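If you would rather call the service from code than from the generated documentation page, a minimal client sketch follows. The request and response shapes come straight from the schema above (message in, art out); the node-fetch dependency and the assumption that the deployed service exposes /asciiart at the root are mine.

// Minimal sketch of calling the AsciiArt service directly (node-fetch is assumed).
const fetch = require('node-fetch');

fetch('https://createasciiart.herokuapp.com/asciiart', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message: 'Hello Swagger!' })
})
    .then(response => response.json())
    .then(result => console.log(result.art))
    .catch(err => console.error(err));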

Consuming Swagger APIs with External Services

The process described in my earlier blog for using the above API via External Services has not changed. External Services automatically recognises Swagger APIs.


NOTE: There is a small bug that prevents the callout if the basePath is specified as root in the schema. Thus this has been commented out in the deployed version of the schema for now. Salesforce will likely have fixed this by the time you read this.

Swagger Tools

  • Swagger Editor, the interactive editor shown in the first screenshot of this blog.
  • Swagger Code Generator, creates server stubs and clients for implementing and calling Swagger-enabled APIs.
  • Swagger UI, the browser-based UI for generating documentation. You can call this from the command line and upload the static HTML files, or use frameworks like the one used in this blog to generate it on the fly.

Can we use Swagger to call or implement APIs authored in Apex?

Swagger Tools are available on a number of platforms, including recently added support for Apex clients. This gives you another option to consume APIs directly in Apex. It’s not clear if this is going to be a better route than consuming the classes generated by External Services; I suspect it has some pros and cons, to be honest. Time will tell!

Meanwhile, I did run the Swagger Code Generator for Apex and got this…

public class SwagDefaultApi {
    SwagClient client;

    public SwagDefaultApi(SwagClient client) {
        this.client = client;
    }

    public SwagDefaultApi() {
        this.client = new SwagClient();
    }

    public SwagClient getClient() {
        return this.client;
    }

    /**
     *
     * Returns ASCIIArt to the caller
     * @param body Message to convert to ASCIIArt (optional)
     * @return SwagASCIIArtResponse
     * @throws Swagger.ApiException if fails to make API call
     */
    public SwagASCIIArtResponse asciiart(Map<String, Object> params) {
        List<Swagger.Param> query = new List<Swagger.Param>();
        List<Swagger.Param> form = new List<Swagger.Param>();

        return (SwagASCIIArtResponse) client.invoke(
            'POST', '/asciiart',
            (SwagBody) params.get('body'),
            query, form,
            new Map<String, Object>(),
            new Map<String, Object>(),
            new List<String>{ 'application/json' },
            new List<String>{ 'application/json' },
            new List<String>(),
            SwagASCIIArtResponse.class
        );
    }
}

The code is also generated in a Salesforce DX compliant format, very cool!



Image Recognition with the Salesforce Einstein API and an Amazon Echo

AI services are becoming more and more accessible to developers. Salesforce acquired Metamind last year and made some big announcements at Dreamforce 2016. Like many developers, I was keen to find out about its API. The answer at the time was “check back with us next year!”.

With Spring ’17 that question has been answered, at least as far as image recognition is concerned, with the availability of the Salesforce Einstein Predictive Vision Service (Pilot). The pilot is open to the public and is free to sign up for.

True AI consists of recognition, be that visual or spoken, performing actions and, the final and most critical piece, learning. This blog explores the spoken and visual recognition pieces further, with the added help of Flow for performing practically any action you can envision!

You may recall a blog from last year relating to integrating Salesforce with Amazon Echo. To explore the new Einstein API, I decided to leverage that work further in order to trigger recognition of my pictures from Alexa. The use of Salesforce Flow also enabled easy extensibility via custom Apex Actions. Thus the Einstein Apex Action was born! After a small bit of code and some configuration I had a working voice-activated image recognition demo up and running.

The following diagram breaks down what just happened in the video above, followed by a deeper walkthrough of the Predictive Vision Service and how to call it.

[Diagram: Amazon Echo and Einstein image recognition flow]

  1. Using the Salesforce1 Mobile app I uploaded an image using the Files feature.
  2. Salesforce stores this in the ContentVersion object for later querying (step 6).
  3. Using the Alexa skill, called Einstein, I was able to “Ask Einstein about my photo”
  4. This NodeJS skill runs on Amazon and simply routes requests to Salesforce Flow
  5. Spoken terms are passed through to a named Flow via the Flow API.
  6. The Flow is simple in this case; it queries ContentVersion for the latest upload.
  7. The Flow then calls the Einstein Apex Action which in turn calls the Einstein REST API via Apex (more on this later). Finally a Flow assignment takes the resulting prediction of what the image is actually of, and uses it to build a spoken response.

Standard Example: The above example exposes the Einstein API as an Apex Action purely to integrate with the Amazon Echo use case. The pilot documentation walks you through a standalone Apex and Visualforce example to get you started.

How does the Einstein Predictive Vision Service API work?

The service introduces a few new terms to get your head around. Firstly, a dataset is a named container for the types of images (labels) you want to recognise. The demo above uses a predefined dataset and model. A model is the output from the process of taking examples of each of your dataset’s labels and processing them (training). Initiating this process is pretty easy; you just make a REST API call with your dataset ID. All the recognition magic happens behind the scenes; you just poll for when it’s done. All you have to do then is test the model with other images. The service returns ranked predictions (using the dataset’s labels) on what it thinks your picture is of. When I ran the pictures above of my family dogs through it for the first time, I was pretty impressed that it detected the breeds.


While quite fiddly at times, it is well worth walking through how to set up your own image datasets and training to get a hands-on example of the above.

How do I call the Einstein API from Apex?

Salesforce saved me the trouble of wrapping the REST API in Apex and has started an Apex wrapper in this GitHub repo. When you sign up you get a private key file that you have to upload into Salesforce to authenticate the calls. Currently the private key file the pilot gives you seems to be scoped to your org user’s associated email address.

public with sharing class EinsteinAction {

    public class Prediction {
        @InvocableVariable
        public String label;
        @InvocableVariable
        public Double probability;
    }

    @InvocableMethod(label='Classify the given files' description='Calls the Einsten API to classify the given ContentVersion files.')
    public static List<EinsteinAction.Prediction> classifyFiles(List<ID> contentVersionIds) {
        String access_token = new VisionController().getAccessToken();
        ContentVersion content = [SELECT Title,VersionData FROM ContentVersion where Id in :contentVersionIds LIMIT 1];
        List<EinsteinAction.Prediction> predictions = new List<EinsteinAction.Prediction>();
        for(Vision.Prediction vp : Vision.predictBlob(content.VersionData, access_token, 'GeneralImageClassifier')) {
            EinsteinAction.Prediction p = new EinsteinAction.Prediction();
            p.label = vp.label;
            p.probability = vp.probability;
            predictions.add(p);
            break; // Just take the most probable
        }
        return predictions;
    }
}

NOTE: The above method only handles the first file passed in the parameter list, the minimum needed for this demo. To bulkify it you can remove the LIMIT in the SOQL and ideally put the file ID back in the response. It might also be useful to expose the other predictions and not just the first one.

The VisionController and Vision Apex classes from the GitHub repo are used in the above code. It looks like the repo is still very much a work in progress, so I would expect the API to change a bit. The classes also assume that you have followed the standalone example tutorial here.

Summary

This initial API has made it pretty easy to access a key part of AI with what is essentially only a handful of simple REST API calls. I’m looking forward to seeing where this goes and where Salesforce goes next with future AI services.



Building an Amazon Echo Skill with the Flow API

The Amazon Echo device sits in your living room or office and listens to your verbal instructions, much like Siri. It performs various activities, such as fetching and relaying information and/or performing actions on your behalf. It also serves as a large bluetooth speaker. Now, after a run in the US, it has finally been released in the UK!

Why am I writing about it here? Well, it has an API of course! So let’s roll up our sleeves with an example I built recently with my FinancialForce colleague and partner in crime for all things gadget and platform, Kevin Roberts.

Kevin reached out to me when he noticed that Amazon had built this device with a means to teach it to respond to new phrases. Developers can extend its phrases by creating new Skills. You can read and hear more about the results over on the FinancialForce blog site.

The sample code and instructions to reproduce this demo yourself are here. Also, don’t worry if you do not have an Amazon Echo; you can test by speaking into your computer using EchoSim.io.

Custom Skill Architecture

To create a Skill you need to be a developer, capable of implementing a REST API endpoint that Amazon calls out to when the Echo recognizes a phrase you have trained it with. You can do this in practically any programming language you like of course, providing you comply with the documented JSON definition and host it securely.

One thing that simplifies the process is hosting your skill code on the Amazon Lambda service. Lambda supports Java, Python and NodeJS, as well as setting up the security stack for you, leaving you with just the code to provide! You can even type your code directly into the developer console provided by Amazon.
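For orientation, the Lambda entry point for a custom skill is typically just a small handler function that delegates to your skill class. The sketch below is illustrative only; it assumes the SalesforceFlowSkill class shown later in this post and the execute method provided by the AlexaSkill base class.

// Illustrative Lambda entry point; SalesforceFlowSkill (shown later) extends AlexaSkill,
// whose execute method routes the incoming Alexa request to the right handler.
var SalesforceFlowSkill = require('./SalesforceFlowSkill');

exports.handler = function (event, context) {
    var skill = new SalesforceFlowSkill();
    skill.execute(event, context);
};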

Training your Skill

You cannot just say anything to Amazon Echo and expect it to understand; it’s clever, but not that clever (yet!). Every Skill developer has to provide a set of phrases / sample utterances. From these, Amazon does some clever stuff behind the scenes to compile them into a form its speech recognition algorithms can match a user’s spoken words against.

You are advised to provide as many utterances as you can, up to 50,000 of them in fact, to cover the many varied ways in which we can say things differently but mean the same thing. The sample utterances must all start with an identifier, known as the Intent. You can see various sample utterances for the CreateLead and GetLatestLeads intents below.

CreateLead Lets create a new Lead
CreateLead Create me a new lead
CreateLead New lead
CreateLead Help me create a lead
GetLatestLeads Latest top leads?
GetLatestLeads What are our top leads?

Skills have names, which users can search for in the Skills Marketplace, much like an App on your phone. For a Skill called “Lead Helper”, users would speak the following phrases to invoke any of its intents.

  • “Lead Helper, Create me a new lead”
  • “Lead Helper, Lets create a new lead”
  • “Lead Helper, Help me create a lead”
  • “Lead Helper, What are our top leads?”

Your sample utterances can also include parameters / slots.

DueTasks What tasks are due for {Date}?
DueTasks Any tasks that are due for {Date}?

Slots are essentially parameters to your Intents, and Amazon supports various slot types. The date slot type is quite flexible in terms of how it handles relative dates.

  • “Task Helper, What tasks are due next thursday?”
  • “Task Helper, Any tasks that are due for today?”

Along with your sample utterances you need to provide an intent schema; this lists the names of your intents (as referenced in your sample utterances) and the slot names and types. Further information can be found in Defining the Voice Interface.

{
  "intents": [
    {
      "intent": "DueTasks",
      "slots": [
        {
          "name": "Date",
          "type": "AMAZON.DATE"
        }
      ]
    }
  ]
}

Mapping Skill Intents and Slots to Flows and Variables

As I mentioned above, Skill developers implement a REST API endpoint. Instead of receiving the spoken words as raw text, it receives the Intent name and name/value pairs of the Slot names and values. That endpoint can then invoke the appropriate database query or action and generate a response (as a string) to speak back to the user.

To map this to Salesforce Flows, we can consider the Intent name as the Flow Name and the Slot name/values as Flow Input Parameters. Flow Output Parameters can be used to generate the spoken response to the user. For the example above you would define a Flow called DueTasks with the following named input and output Flow parameters.

  • Flow Name: DueTasks
  • Flow Input Parameter Name:  Alexa_Slot_Date
  • Flow Output Parameter Name:  Alexa_Tell

You can then use the Flow Assignment element to adjust the variable values, as well as other elements to query and update records accordingly. By assigning an output variable named Alexa_Tell before your Flow ends, you end the conversation with a single response contained within that text variable.

For another example see the Echo sample here; this one simply repeats (“echoes”) the name given by the user when they speak a phrase with their name in it.


The sample utterances and intent schema are shown below. These utterances also use a literal slot type, which is a kind of picklist with variable possibilities, meaning that Andrew, Sarah, Kevin and Bob are just sample values; users can use other words in the Name slot, and it is up to the developer to validate them if that is important.

Echo My name is {Andrew|Name}
Echo My name is {Sarah|Name}
Echo My name is {Kevin|Name}
Echo My name is {Bob|Name}
{
  "intents": [
    {
      "intent": "Echo",
      "slots": [
        {
          "name": "Name",
          "type": "LITERAL"
        }
      ]
    }
  ]
}

Alternatively, if you create and assign the Alexa_Ask variable in your Flow, this starts a conversation with your user. In this case any Input/Output Flow Parameters are retained between Flow calls. Finally, if you suffix any slot name with Number (for example a slot named AmountNumber would be Alexa_Slot_AmountNumber), this will ensure that the value gets converted correctly to pass to a Flow Variable of type Number.

The design for managing conversations with Flow Input/Output variables was inspired by an excellent article on defining conversations in Alexa Skills here.

The following phrases are for the Conversation Flow included in the samples repository.

Conversation About favourite things
Conversation My favourite color is {Red|Color}
Conversation My favourite color is {Green|Color}
Conversation My favourite color is {Blue|Color}
Conversation My favourite number is {Number}


NodeJS Custom Skill

To code my Skill I went with NodeJS, as I had not done a lot of coding in it and wanted to challenge myself. The other challenge I set myself was to integrate in a generic and extensible way with Salesforce. Thus I wanted to incorporate my old friend Flow!

With its numerous elements for conditional logic and for reading and updating the database, Flow is the perfect solution for integrating with Salesforce in the only way we know how on the Salesforce platform: with clicks not code! Now of course Amazon does not talk Flow natively, so we need some glue!

Amazon provides NodeJS developers with a useful base class to get things going. In NodeJS this is imported with the require function (interesting “how it works” article). In my case I also leveraged the most excellent nforce library from Kevin O’Hara.

var AlexaSkill = require('./AlexaSkill');
var nforce = require('nforce');

/**
* SalesforceFlowSkill is a child of AlexaSkill.
* To read more about inheritance in JavaScript, see the link below.
*
* @see https://developer.mozilla.org/en-US/docs/Web/JavaScript/Introduction_to_Object-Oriented_JavaScript#Inheritance
*/
var SalesforceFlowSkill = function () {
    AlexaSkill.call(this, APP_ID);
};

The AlexaSkill base class exposes four methods you can override: onSessionStarted, onLaunch, onSessionEnded and onIntent. As you can see from the method names, requests to your skill code can be scoped in a session. This allows you to manage conversations users can have with the device, asking questions and gathering answers within the session that build up to performing a specific action.

I implemented the onIntent method to call the Flow API.

SalesforceFlowSkill.prototype.eventHandlers.onIntent =
   function (intentRequest, session, response) {
       // Handle the spoken intent from the user
       // ...
   }

Calling the Salesforce Flow API from NodeJS

Within the onIntent method I used the nforce library to perform OAuth username and password authentication for simplicity, though Alexa Skills do support the OAuth web flow by linking accounts. The following code performs the authentication with Salesforce.

SalesforceFlowSkill.prototype.eventHandlers.onIntent =
    function (intentRequest, session, response) {
        // Configure a connection
        var org = nforce.createConnection({
            clientId: 'yourclientid',
            clientSecret: 'yoursecret',
            redirectUri: 'http://localhost:3000/oauth/_callback',
            mode: 'single'
        });
        // Authenticate and then call a Flow!
        org.authenticate({ username: USER_NAME, password: PASSWORD }).
            then(function() {
                // ... call the Flow API (see the next listing) ...
            });
    };

The following code calls the Flow API, again via nforce. It maps the slot name/values to parameters and returns any Flow output variables back in the response. A session will be kept open when the response.ask method is called. In this case any Input/Output Flow Parameters are retained in the Session and passed back into the Flow again.

// Build Flow input parameters
var params = {};
// From Session...
for(var sessionAttr in session.attributes) {
    params[sessionAttr] = session.attributes[sessionAttr];
}
// From Slots...
for(var slot in intent.slots) {
    if(intent.slots[slot].value != null) {
        if(slot.endsWith('Number')) {
            params['Alexa_Slot_' + slot] = Number(intent.slots[slot].value);
        } else {
            params['Alexa_Slot_' + slot] = intent.slots[slot].value;
        }
    }
}
// Call the Flow API
var opts = org._getOpts(null, null);
opts.resource = '/actions/custom/flow/'+intentName;
opts.method = 'POST';
var flowRunBody = {};
flowRunBody.inputs  = [];
flowRunBody.inputs[0] = params;
opts.body = JSON.stringify(flowRunBody);
org._apiRequest(opts).then(function(resp) {
    // Ask or Tell?
    var ask = resp[0].outputValues['Alexa_Ask'];
    var tell = resp[0].outputValues['Alexa_Tell'];
    if(tell!=null) {
        // Tell the user something (closes the session)
        response.tell(tell);
    } else if (ask!=null) {
       // Store output variables in Session
       for(var outputVarName in resp[0].outputValues) {
           if(outputVarName == 'Alexa_Ask')
               continue;
           if(outputVarName == 'Alexa_Tell')
               continue;
           if(outputVarName == 'Flow__InterviewStatus')
               continue;
            session.attributes[outputVarName] =
               resp[0].outputValues[outputVarName];
       }
       // Ask another question (keeps session open)
       response.ask(ask, ask);
    }
});

Summary

I had a lot of fun putting this together, even more so seeing what Kevin did with it with his Flow skills (pun intended). If you have someone like Kevin in your company or want to have a go yourself, you can follow the setup and configuration instructions here.

I would also like to call out that past Salesforce MVP, now Trailhead Developer Advocate, Jeff Douglass started the ball rolling with his Salesforce CRM examples, which are also worth checking out if you prefer to build something more explicitly in NodeJS.

 



Lightning Out: Components on Any Platform

This blog is my first video blog! Since Salesforce does not record the Developer Theatre sessions at the Salesforce World Tour events, I thought I would do a re-run at home of my session last week and publish it here. As you know I have a love for all things APIs, and while I typically focus in this blog on backend APIs, there is one I’ve been keen to explore for a while…

The Lightning Out API, as any good API should, brings great promise, and reality I’m pleased to say, to further integrating and extending the power of the platform and generally simplifying our users’ lives. In this case boldly going where no Lightning Component has gone before…

You can access the slides and thus the links within via this Slideshare upload.



Introduction to the Platform Action API


Actions are Salesforce’s general term for tasks users can perform either through buttons in the various UIs on desktop, mobile, tablet etc, or in fact via non-UI processes such as those built via Process Builder or Automation Flows.

Actions are about “getting things done” in Salesforce. They encapsulate a piece of logic that allows a user to perform some work, such as sending email. When an action runs, it saves changes in your organization by updating the database. More here.

Over the years we’ve had many terms and ways to define these. Custom Button and Custom Link are perhaps the most obvious ones, which I’ve covered here in the past. Then came Quick Actions (previously Publisher Actions) and, more recently, Action Links, which I covered in a past blog. Then of course there are the Standard Buttons provided by the platform: Edit, Delete, Follow, Submit for Approval etc. Such actions appear in various places: Record layouts, List Views, Related Lists, Chatter and, more recently, Flexi Pages (aka Lightning Pages).

You might wonder then, if you had the task as a developer to build your own UI or tool that wanted to expose some or all of the above actions, it would be quite a challenge to find them all. Indeed, in some cases you may have had to resort to URL hacking to invoke some of them. Well, worry no longer, Salesforce’s clever architects now have you covered! Enter a new virtual SObject known as PlatformAction! Before we get onto what exactly virtual means, let’s review some Actions and some SOQL queries…

Consider this Account Record detail page in the Classic (or Aloha) UI

[Screenshot: Account record detail page in the Classic UI]

Note down your record ID and use it in a query like the one below…

SELECT DeviceFormat, Label, Type, Section,
       ActionTarget, ActionTargetType, ActionListContext
  FROM PlatformAction
  WHERE ActionListContext = 'Record' AND
        SourceEntity = '001B000000D2V0n' AND
        Section = 'Page' AND
        DeviceFormat = 'Aloha'

In Developer Console you should see something like this…

[Screenshot: PlatformAction query results in Developer Console]

Pretty cool huh!? Check out the ActionTarget field: for the Standard Button records, that’s the URL you can place in your UIs to invoke that action, simple as that! Better still, this is a supported way to get it, no more URL hacking! Now let’s add a couple of Custom Buttons and re-run the query…

[Screenshot: Account record detail page with Custom Buttons added]

We now see CustomButton records appear…

[Screenshot: query results now including CustomButton records]

This next query reveals actions shown on a List View. I did note that Custom List View buttons that require record selection did not appear, however. I suspect this is due to them requiring more than a simple HTTP GET URL to invoke.

SELECT DeviceFormat, Label, Type, Section,
       ActionTarget, ActionTargetType, ActionListContext
  FROM PlatformAction
  WHERE ActionListContext = 'ListView' AND
        SourceEntity = 'Account' AND
        Section = 'Page' AND
        DeviceFormat = 'Aloha'

Other observations…

  • Prior to Summer ’16 (out in preview as I write this), Apex SOQL was not supported, only REST API SOQL (see the sketch after this list). This is due to a limitation with the internally applied LIMIT keyword. This has now been resolved in Summer ’16, so Apex SOQL now works!
  • SourceEntity can also be given an SObject API name, e.g. SourceEntity = ‘Account’; the results here are object-level buttons, like New or those you add to the MRU page.
  • The DeviceFormat field value matters; if you leave it off, it defaults to Phone. Thus some actions will be missing compared to those for Desktop (Lightning Experience) or Aloha (Classic). I eventually found that Custom Buttons using Visualforce pages without the Lightning Supported checkbox set did not appear when querying with the Phone device type, for example.
  • User context matters; the actions returned are user and configuration sensitive, meaning the record itself, its record type and the associated layout all contribute to the actions returned. Custom Buttons, for example, need to be on the relevant layout.
  • Label and Icon information: there are also fields that allow you to render appropriate labels and icons for the actions.
  • Related List actions: you can also retrieve actions shown on related lists; search for RelatedList in the help topic here.
  • Describe actions: you will notice some actions have an ActionTargetType of Describe. These are invoked via an API, something I will cover in a later blog.
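As a quick illustration of the REST API route mentioned in the first bullet above, the following Node.js sketch runs the earlier record query through the REST query endpoint; jsforce is assumed here, but any REST-capable client will do, and the instance URL, session Id and record Id are placeholders for your own values.

// Minimal sketch: querying PlatformAction via the REST API using jsforce (assumed library).
const jsforce = require('jsforce');

const conn = new jsforce.Connection({
    instanceUrl: 'https://yourinstance.salesforce.com',  // placeholder
    accessToken: 'yoursessionid'                         // placeholder
});

const soql =
    "SELECT DeviceFormat, Label, Type, Section, " +
    "ActionTarget, ActionTargetType, ActionListContext " +
    "FROM PlatformAction " +
    "WHERE ActionListContext = 'Record' " +
    "AND SourceEntity = '001B000000D2V0n' " +
    "AND Section = 'Page' " +
    "AND DeviceFormat = 'Aloha'";

conn.query(soql)
    .then(result => result.records.forEach(a => console.log(a.Label, a.ActionTarget)))
    .catch(err => console.error(err));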

So let’s discuss the “virtual SObject” bit!?!

You’re probably wondering what a virtual SObject is?

Well, my best guess is that it’s an SObject that is not backed by physical data in the Salesforce database. If you check the documentation you’ll see fields just like any other object, and it supports SOQL (with some limitations). My thinking is that the records for this object are dynamically generated on demand, doing all the heavy lifting internally to scan all the various historic places where actions have been defined.

Thank you Salesforce architects, this is now my #1 coolest Salesforce API!

What’s next?

  • For starters, no more URL hacking of those standard pages, no excuses now!
  • Helper class or Visualforce and/or Lightning component for actions?
  • Explore the Force.com Actions Developer Guide