Andy in the Cloud

From BBC Basic to Force.com and beyond…



The Third Edition

I’m proud to announce the third edition of my book has now been released. Back in March this year I took the plunge and started updating many key areas and adding two brand new chapters. In the 2 years and 8 months since the last edition there have been several platform releases and an increasing number of new features and innovations that made this the biggest update ever! This edition also embraces the platform’s rebranding to Lightning, hence the book is now entitled Salesforce Lightning Platform Enterprise Architecture.

You can purchase this book direct from Packt or of course from Amazon among other sellers. As is the case every year, at Salesforce events such as Dreamforce and TrailheaDX this book and many other awesome publications will be on sale. Here are some of the key update highlights:

  • Automation and Tooling Updates
    Throughout the book the SFDX CLI, Visual Studio Code and 2nd Generation Packaging are leveraged. While the whole book is certainly larger, certain chapters actually reduced in size as steps previously reflecting clicks were replaced with CLI commands! At one point in time I was quite a master of Ant scripts and macros; they too have given way to built-in SFDX commands.
  • User Interface Updates
    Lightning Web Components is a relatively new kid on the block, but benefits greatly from its standards compliance, meaning there is plenty of fun to go around exploring industry tools like Jest in the Unit Testing chapter. All of the book’s components have been re-written to the Web Component standard.
  • Big Data and Async Programming
    Big data was once a future concern for new products; these days it is very much a concern from the very start. The book covers Big Objects and Platform Events more extensively with worked examples, including ingest and calculations driven by Platform Events and Async Apex Triggers. Event Driven Architecture is something every Lightning developer should be embracing as the platform continues to evolve around more and more standard features that leverage it.
  • Integration and Extensibility
    I particularly enjoyed exploring the use of Platform Events as another means by which you can expose APIs from your packages to support more scalable invocation of your logic and asynchronous plugins.
  • External Integrations and AI
    External integrations with other cloud services are a key part of application development and also the implementation of your solution, thus one of the two brand new chapters focuses on Connected Apps, Named Credentials, External Services and External Objects, with worked examples against existing services or sample Heroku-based services. Einstein has an ever-growing surface area across Salesforce products and the platform. While this topic alone is worth an entire book, I took the time in the second new chapter to enumerate Einstein from the perspective of developer and customer configurations. The Formula1 motor racing theme continues with the ingest of historic race data that you can run AI over.
  • Other Updates
    Among other updates is a fairly extensive update to the CI/CD chapter, which still covers Jenkins but now leverages the new Jenkins Pipeline feature to integrate the SFDX CLI. The Unit Testing chapter has also been extended with further thoughts on unit vs integration testing and a focus on Lightning Web Component testing.

The above are just the highlights of this third edition; you can see a full table of contents here. A massive thanks to everyone involved for providing the inspiration and support that made this third edition happen! Enjoy!



Salesforce DX Integration Strategies

This blog will cover three ways by which you can interact programmatically with Salesforce DX. DX provides a number of time-saving utilities and commands; sometimes, though, you want to either combine those together or write your own that fit better with your way of working. Fortunately, DX is very open and, in fact, goes beyond just interacting with the CLI.

If you are familiar with DX you will likely already be writing or have used shell scripts around the CLI. Those scripts are code, and the CLI commands and their outputs (especially in JSON mode) are the API in this case. The goal of this blog is to highlight this approach further, along with other programming options via the REST API or Node.js.

Broadly speaking, DX is composed of layers, from client-side services to those at the backend. Each of these layers is actually supported and available for you as a developer to consume as well. The diagram here shows these layers, and the following sections highlight some examples and further use cases for each.

DX CLI

Programming via shell scripts is very common and there is a huge wealth of content and help on the internet regardless of your platform. You can perform conditional operations, use variables and even perform loops. The one downside is that shell scripts are platform specific. So if supporting users on multiple platforms is important to you, and you have skills in other more platform-neutral languages, you may want to consider automating the CLI that way.

Regardless of how you invoke the CLI, parsing human-readable text from CLI commands is not a great experience and leads to fragility (as it can and should be allowed to change between releases). Thus all Salesforce DX commands support the --json parameter. First, let’s consider the default output of the following command.

sfdx force:org:display
=== Org Description
KEY              VALUE
───────────────  ──────────────────────────────────────────────────────────────────────
Access Token     00DR00.....O1012
Alias            demo
Client Id        SalesforceDevelopmentExperience
Created By       admin@sf-fx.org
Created Date     2019-02-09T23:38:10.000+0000
Dev Hub Id       admin@sf-fx.org
Edition          Developer
Expiration Date  2019-02-16
Id               00DR000000093TsMAI
Instance Url     https://customization-java-9422-dev-ed....salesforce.com/
Org Name         afawcett Company
Status           Active
Username         test....a@example.com

Now let’s contrast the output of this command with the --json parameter.

sfdx force:org:display --json
{"status":0,"result":{"username":"test...a@example.com","devHubId":"admin@sf-fx.org","id":"00DR000000093TsMAI","createdBy":"admin@sf-fx.org","createdDate":"2019-02-09T23:38:10.000+0000","expirationDate":"2019-02-16","status":"Active","edition":"Developer","orgName":"afawcett Company","accessToken":"00DR000...yijdqPlO1012","instanceUrl":"https://customization-java-9422-dev-ed.mobile02.blitz.salesforce.com/","clientId":"SalesforceDevelopmentExperience","alias":"demo"}}

If you are using a programming language with support for interpreting JSON you can now start to parse the response to obtain the information you need. However, if you are using shell scripts you need a little extra assistance. Thankfully there is an awesome open source utility called jq to the rescue. Simply piping the JSON output through the jq command allows you to get a better look at things…

sfdx force:org:display --json | jq
{
  "status": 0,
  "result": {
    "username": "test-hm83yjxhunoa@example.com",
    "devHubId": "admin@sf-fx.org",
    "id": "00DR000000093TsMAI",
    "createdBy": "admin@sf-fx.org",
    "createdDate": "2019-02-09T23:38:10.000+0000",
    "expirationDate": "2019-02-16",
    "status": "Active",
    "edition": "Developer",
    "orgName": "afawcett Company",
    "accessToken": "00DR000....O1012",
    "instanceUrl": "https://customization-java-9422-dev-ed.....salesforce.com/",
    "clientId": "SalesforceDevelopmentExperience",
    "alias": "demo"
  }
}

You can then get a bit more specific in terms of the information you want.

sfdx force:org:display --json | jq .result.id -r
00DR000000093TsMAI

You can combine this into a shell script to set variables as follows.

ORG_INFO=$(sfdx force:org:display --json)
ORG_ID=$(echo "$ORG_INFO" | jq .result.id -r)
ORG_DOMAIN=$(echo "$ORG_INFO" | jq .result.instanceUrl -r)
ORG_SESSION=$(echo "$ORG_INFO" | jq .result.accessToken -r)
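
With the domain and session captured, you can even step outside the CLI and call the Salesforce REST API directly. Here is a minimal sketch (assuming curl and jq are installed; the limits resource is just a convenient read-only endpoint to test with).

# Sketch: call the standard REST API with the values captured above
# (strip any trailing slash from the instance URL first)
curl -s "${ORG_DOMAIN%/}/services/data/v45.0/limits" \
  -H "Authorization: Bearer $ORG_SESSION" | jq .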

All the DX commands support JSON output, including the query commands…

sfdx force:data:soql:query -q "select Name from Account" --json | jq .result.records[0].Name -r
GenePoint

The Sample Script for Installing Packages with Dependencies has a great example of using JSON output from the query commands to auto-discover package dependencies. This approach can, however, be adapted to any object. It also shows another useful approach: combining Python within a shell script.
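
That pattern is handy whenever jq alone gets unwieldy. Here is a small sketch of it (the SOQL query is just an example):

# Sketch: embed Python in a shell script to post-process CLI JSON output
RECORDS=$(sfdx force:data:soql:query -q "SELECT Id, Name FROM Account" --json)
echo "$RECORDS" | python -c "
import json, sys
data = json.load(sys.stdin)
for record in data['result']['records']:
    print(record['Name'])
"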

DX Core Library and DX Plugins

This is a Node.js library that contains core DX functionality such as authentication, org management, project management and the ability to invoke REST APIs against scratch orgs via JSForce. This library is most commonly used when you are authoring a DX plugin; however, it can also be used standalone if you have an existing Node.js based tool or CLI library you want to embed DX in.

The samples folder here contains some great examples. This example (excerpted) shows how to use the library to access the alias information and provide a means for the user to edit the alias names.

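  // Context (my annotation, not part of the original sample): this excerpt
  // assumes the inquirer and chalk npm modules are required above, and that
  // `aliases` is an Aliases instance from @salesforce/core, with `alias` and
  // `username` already resolved earlier in the sample.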
  // Enter a new alias
  const { newAlias } = await inquirer.prompt([
    { name: 'newAlias', message: 'Enter a new alias (empty to remove):' }
  ]);

  if (alias !== 'N/A') {
    // Remove the old one
    aliases.unset(alias);
    console.log(`Unset alias ${chalk.red(alias)}`);
  }

  if (newAlias) {
    aliases.set(newAlias, username);
    console.log(
      `Set alias ${chalk.green(newAlias)} to username ${chalk.green(username)}`
    );
  }

Tooling API Objects

Finally, there is a host of Tooling API objects that support the above features plus some extra capabilities. These are fully documented and accessible via the Salesforce Tooling API for use in your own plugins or applications capable of making REST API calls. Keep in mind you can do more than just query these objects; some also represent processes, meaning when you insert into them they do stuff! Here is a brief summary of the most interesting objects.

  • PackageUploadRequest, MetadataPackage and MetadataPackageVersion represent objects you can use as a developer to automate the uploading of first generation packages.
  • Package2, Package2Version, Package2VersionCreateRequest and Package2VersionCreateRequestError represent objects you can use as a developer to automate the uploading of second generation packages.
  • PackageInstallRequest, SubscriberPackage, SubscriberPackageVersion and Package2Member (second generation only) represent objects that allow you to automate the installation of a package and also allow you to discover the contents of packages installed within an org.
  • SandboxProcess and SandboxInfo represent objects that allow you to automate the creation and refresh of Sandboxes, as well as query for existing ones. For more information see the summary at the bottom of this help topic.
  • SourceMember represents changes you make when using the Setup menu within a scratch org. It is used by the push and pull commands to track changes. The documentation claims you can create and update records in this object; however, I would recommend that you only use it for informational purposes. For example, you could write your own poller tool to drive code generation based on custom object changes (see the sketch below).
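
As a taste of how approachable these objects are, here is a sketch that uses the CLI’s Tooling API support (-t) to list the Setup changes SourceMember is tracking in a scratch org. The field names follow the Tooling API documentation; adjust them for your API version.

# Sketch: list Setup changes tracked by SourceMember (assumes the demo
# alias from earlier and jq installed)
sfdx force:data:soql:query -u demo -t --json \
  -q "SELECT MemberType, MemberName FROM SourceMember" \
  | jq -r '.result.records[] | "\(.MemberType): \(.MemberName)"'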

IMPORTANT NOTE: Be sure to consider what CLI commands exist to accomplish your need. As you’ve read above, it’s easy to automate those commands and they manage a lot of the complexity of interacting with these objects directly. This is especially true for the packaging objects.

Summary

The above options represent a rich set of abilities to integrate and extend DX. Keep in mind that the deeper you go the more flexibility you get, but you are also taking on more complexity. So choose wisely and/or use a mix of approaches. Finally, worthy of mention is the future of the SFDX CLI and Oclif. Salesforce is busy updating the internals of the DX CLI to use this library, and once complete it will open up cool new possibilities such as CLI hooks, which will allow you to extend the existing commands.



Adding Clicks not Code Extensibility to your Apex with Lightning Flow

Building solutions on the Lightning Platform is a highly collaborative process, due to its unique ability to allow Trailblazers in a team to operate in no code, low code and/or code environments. Lightning Flow is a Salesforce native tool for no code automation and Apex is the native programming language of the platform — the code!

A flow author is able to create no-code solutions using the Cloud Flow Designer tool that can query and manipulate records, post to Chatter, manage approvals, and even make external callouts. Conversely, using Salesforce DX, the Apex developer can, of course, do all these things and more! This blog post presents a way in which two Trailblazers (meaning a flow author and an Apex developer) can consider options that allow them to share the work of both building and maintaining a solution.

Often a flow is considered the start of a process — typically and traditionally a UI wizard or, more latterly, something that is triggered when a record is updated (via Process Builder). We also know that via invocable methods, flows and processes can call Apex. What you might not know is that the reverse is also true! Even if you have decided to build a process via Apex, you can still leverage flows within that Apex code. Such flows are known as autolaunched flows, as they have no UI.
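
To make this concrete, here is a minimal sketch of invoking an autolaunched flow from Apex. The flow name Calculate_Discount and its AmountIn/DiscountOut variables are purely illustrative, not from a real example.

// Sketch: invoking a hypothetical autolaunched flow from Apex. The flow
// name and its variables are illustrative only.
Map<String, Object> inputs = new Map<String, Object>{ 'AmountIn' => 5000 };
Flow.Interview.Calculate_Discount interview =
    new Flow.Interview.Calculate_Discount(inputs);
interview.start();
// Read an output variable assigned by the flow
Decimal discount = (Decimal) interview.getVariableValue('DiscountOut');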


I am honored to have this blog hosted on the Salesforce Blog site. To continue reading the rest of this blog, head on over to the Salesforce.com blog post here.

 



Swagger / Open API + Salesforce = LIKE

In my previous blog I covered an exciting new integration tool from Salesforce, which consumes APIs that have a descriptor (or schema) associated with them. External Services allows point-and-click integration with APIs. The ability for Salesforce to consume APIs complying with API schema standards is a pretty huge step forward, extending its ability to integrate with ease in a way that is in keeping with its low-barrier-to-entry development and clicks-not-code mantra.


At the time of writing my previous blog, only the Interagent schema was supported by External Services. However, as of the Winter’18 release this is no longer the case. In this blog I will explore the more widely adopted Swagger / Open API 2.0 standard, using Node.js, Heroku and External Services. As a bonus topic, I will also touch on using Swagger Code Generator with Apex!

One of the many benefits of supporting the Swagger / Open API standard is the ability to generate documentation for it. The following screenshot shows the API schema on the left and generated documentation on the right. What is also very cool about this is the Try this operation button. Give it a try for yourself now!



What’s the difference between Swagger and Open API 2.0? This was a question I asked myself and thought I would cover the answer here. Basically, as of Swagger v2.0 there is no difference; the Open API Initiative is a rebranding, born out of the huge adoption Swagger has seen since its creation. This move means its future is more formalised and it has a more meaningful name. You can read more about this amazing story here.

Choosing your methodology for API development

The schema shown above might look a bit scary, and you might well want to just get writing code and think about the schema when you’re ready to share your API. This is certainly supported, and there are tools that support generation of the schema via JSDoc comments in your code or via your joi schema here (useful for existing APIs).

However, to really embrace an API-first strategy in your development team, I feel you should start with the requirements and thus the schema first. This allows others in your team, or the intended recipients, to review the API before it has been developed and even test it out with stub implementations. In my research I was thus drawn to Swagger Node, a tool set donated by Apigee that embraces API-design-first. Read more pros and cons here. It is also the formal Node.js implementation associated with Swagger.

The following describes the development process of API-design-first.


(ref: Swagger Node README)

Developing Open APIs with “Swagger Node”

Swagger Node is very easy to get started with and is well documented here. It supports the full API-design-first development process shown in the diagram above. The editor (also shown above) is really useful for getting used to writing schemas, and the UI is dynamically refreshed, including errors.
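
For reference, the basic tooling workflow looks like this (command names as per the Swagger Node README at the time of writing; install details may have changed since):

npm install -g swagger         # install the Swagger Node tooling
swagger project create my-api  # scaffold a new API-design-first project
swagger project edit           # open the interactive schema editor in a browser
swagger project start          # run the API locally, restarting on changes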

The overall Node.js project is still pretty simple (GitHub repo here), now consisting of three files. The schema is edited in YAML file format (translated to JSON when served up to tools). The schema for the ASCIIArt service now looks like the following and is pretty self-describing. For further documentation on Swagger / Open API 2.0 see here.

https://createasciiart.herokuapp.com/schema/
swagger: "2.0"
info:
  version: "1.0.0"
  title: AsciiArt Service
# during dev, should point to your local machine
host: localhost:3000
# basePath prefixes all resource paths 
basePath: /
# 
schemes:
  # tip: remove http to make production-grade
  - http
  - https
# format of bodies a client can send (Content-Type)
consumes:
  - application/json
# format of the responses to the client (Accepts)
produces:
  - application/json
paths:
  /asciiart:
    # binds a127 app logic to a route
    x-swagger-router-controller: asciiart
    post:
      description: Returns ASCIIArt to the caller
      # used as the method name of the controller
      operationId: asciiart
      consumes:
        - application/json
      parameters:
        - in: body
          name: body
          description: Message to convert to ASCIIArt
          schema:
            type: object
            required: 
              - message
            properties:
              message:
                type: string
      responses:
        "200":
          description: Success
          schema:
            # a pointer to a definition
            $ref: "#/definitions/ASCIIArtResponse"
  /schema:
    x-swagger-pipe: swagger_raw
# complex objects have schema definitions
definitions:
  ASCIIArtResponse:
    required:
      - art
    properties:
      art:
        type: string

The entry point of the Node.js app, the server.js file now looks like this…

'use strict';

var SwaggerExpress = require('swagger-express-mw');
var app = require('express')();
module.exports = app; // for testing
var config = {
  appRoot: __dirname // required config
};

SwaggerExpress.create(config, function(err, swaggerExpress) {
  if (err) { throw err; }
  // install middleware for swagger ui
  app.use(swaggerExpress.runner.swaggerTools.swaggerUi());
  // install middleware for swagger routing
  swaggerExpress.register(app);
  var port = process.env.PORT || 3000;
  app.listen(port);
});

Note: I changed the Node.js web server framework from hapi (used in my previous blog) to express, as I could not get the Swagger UI to integrate with hapi.

The code implementing the API has been moved to its own asciiart.js file.

var figlet = require('figlet');

function asciiart(request, response) {
    // Call figlet to generate the ASCII Art and return it!
    const msg = request.body.message;
    figlet(msg, function(err, data) {
        if (err) {
            // Surface figlet failures rather than returning undefined art
            response.status(500).json({ error: err.message });
            return;
        }
        response.json({ art: data });
    });
}

module.exports = {
    asciiart: asciiart
};

Note: There is no parameter validation code written here; the Swagger Node module dynamically implements parameter validation for you (based on what you define in the schema) before the request reaches your code! It also validates your responses.

To access the documentation simply use the path /docs. The documentation is generated automatically; there is no need to manage static HTML files. I have hosted my sample AsciiArt service on Heroku so you can try it by clicking the link below.

https://createasciiart.herokuapp.com/docs/
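
You can also exercise the API itself from the command line. Here is a quick sketch (request and response shapes per the schema above):

# Call the deployed service; the body must contain the required "message"
curl -s -X POST https://createasciiart.herokuapp.com/asciiart \
  -H "Content-Type: application/json" \
  -d '{"message":"Hi"}'
# An invalid body, e.g. '{}', gets a 400 from the schema-driven validation
# described in the note above.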


Consuming Swagger APIs with External Services

The process described in my earlier blog for using the above API via External Services has not changed. External Services automatically recognises Swagger APIs.


NOTE: There is a small bug that prevents the callout if the basePath is specified as root in the schema. Thus this has been commented out in the deployed version of the schema for now. Salesforce will likely have fixed this by the time you read this.

Swagger Tools

  • Swagger Editor, the interactive editor shown in the first screenshot of this blog.
  • Swagger Code Generator, creates server stubs and clients for implementing and calling Swagger-enabled APIs.
  • Swagger UI, the browser-based UI for generating documentation. You can call this from the command line and upload the static HTML files, or use frameworks like the one used in this blog to generate it on the fly.

Can we use Swagger to call or implement APIs authored in Apex?

Swagger Tools are available on a number of platforms, including recently added support for Apex clients. This gives you another option to consume APIs directly in Apex. It’s not clear if this is going to be a better route than consuming the classes generated by External Services; I suspect it might have some pros and cons, to be honest. Time will tell!

Meanwhile I did run the Swagger Code Generator for Apex and got this…

public class SwagDefaultApi {
    SwagClient client;

    public SwagDefaultApi(SwagClient client) {
        this.client = client;
    }

    public SwagDefaultApi() {
        this.client = new SwagClient();
    }

    public SwagClient getClient() {
        return this.client;
    }

    /**
     *
     * Returns ASCIIArt to the caller
     * @param body Message to convert to ASCIIArt (optional)
     * @return SwagASCIIArtResponse
     * @throws Swagger.ApiException if fails to make API call
     */
    public SwagASCIIArtResponse asciiart(Map<String, Object> params) {
        List<Swagger.Param> query = new List<Swagger.Param>();
        List<Swagger.Param> form = new List<Swagger.Param>();

        return (SwagASCIIArtResponse) client.invoke(
            'POST', '/asciiart',
            (SwagBody) params.get('body'),
            query, form,
            new Map<String, Object>(),
            new Map<String, Object>(),
            new List<String>{ 'application/json' },
            new List<String>{ 'application/json' },
            new List<String>(),
            SwagASCIIArtResponse.class
        );
    }
}

The code is also generated in a Salesforce DX compliant format, very cool!



Image Recognition with the Salesforce Einstein API and an Amazon Echo

AI services are becoming more accessible to developers than ever before. Salesforce acquired MetaMind last year and made some big announcements at Dreamforce 2016. Like many developers, I was keen to find out about its API. The answer at the time was “check back with us next year!”.

With Spring’17 that question has been answered, at least as far as image recognition is concerned, with the availability of the Salesforce Einstein Predictive Vision Service (Pilot). The pilot is open to the public and is free to sign up for.

True AI consists of recognition, be that visual or spoken, performing actions and the final, most critical piece: learning. This blog explores the spoken and visual recognition pieces further, with the added help of Flow for performing practically any action you can envision!

You may recall a blog from last year relating to integrating Salesforce with Amazon Echo. To explore the new Einstein API, I decided to leverage that work further in order to trigger recognition of my pictures from Alexa. The use of Salesforce Flow also enabled easy extensibility via custom Apex Actions. Thus the Einstein Apex Action was born! After a small bit of code and some configuration I had a working voice-activated image recognition demo up and running.

The following diagram breaks down what just happened in the video above, followed by a deeper walkthrough of the Predictive Vision Service and how to call it.


  1. Using the Salesforce1 Mobile app, I uploaded an image using the Files feature.
  2. Salesforce stores this in the ContentVersion object for later querying (step 6).
  3. Using the Alexa skill, called Einstein, I was able to “Ask Einstein about my photo”.
  4. This NodeJS skill runs on Amazon and simply routes requests to Salesforce Flow.
  5. Spoken terms are passed through to a named Flow via the Flow API (see the sketch after this list).
  6. The Flow is simple in this case; it queries the ContentVersion for the latest upload.
  7. The Flow then calls the Einstein Apex Action, which in turn calls the Einstein REST API via Apex (more on this later). Finally, a Flow assignment takes the resulting prediction of what the image is actually of and uses it to build a spoken response.
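
Step 5 uses the standard Actions REST API to launch the named flow. A sketch of that call follows; the flow name Einstein_Photo_Query and the SpokenText input variable are illustrative, not the actual skill’s names.

# Sketch: launching a named flow via the Actions REST API (step 5 above)
curl -s -X POST \
  "https://yourInstance.salesforce.com/services/data/v39.0/actions/custom/flow/Einstein_Photo_Query" \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{ "inputs": [ { "SpokenText": "my photo" } ] }'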

Standard Example: The above example exposes the Einstein API in an Apex Action purely to integrate with the Amazon Echo use case. The pilot documentation walks you through a standalone Apex and Visualforce example to get you started.

How does the Einstein Predictive Vision Service API work?

The service introduces a few new terms to get your head around. Firstly, a dataset is a named container for the types of images (labels) you want to recognise. The demo above uses a predefined dataset and model. A model is the output from the process of taking examples of each of your dataset’s labels and processing them (training). Initiating this process is pretty easy; you just make a REST API call with your dataset ID. All the recognition magic is behind the scenes; you just poll for when it’s done. All you have to do then is test the model with other images. The service returns ranked predictions (using the dataset’s labels) on what it thinks your picture is of. When I ran the pictures above of my family dogs for the first time, I was pretty impressed that it detected the breeds.
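
To give a feel for the shape of these calls, here is a sketch using curl. The host and paths follow the service’s later public documentation; during the pilot they differed, so treat these as illustrative.

# Sketch: kick off training for a dataset, then request predictions.
# $TOKEN, the dataset id, model id and image URL are all placeholders.
curl -X POST https://api.einstein.ai/v2/vision/train \
  -H "Authorization: Bearer $TOKEN" \
  -F "name=Dog Breeds" -F "datasetId=1234"
# ...poll the training status until it completes, then:
curl -X POST https://api.einstein.ai/v2/vision/predict \
  -H "Authorization: Bearer $TOKEN" \
  -F "modelId=YOUR_MODEL_ID" \
  -F "sampleLocation=https://example.com/dog.jpg"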


While quite fiddly at times, it is also well worth walking through how to set up your own image datasets and training, to get a hands-on example of the above.

How do I call the Einstein API from Apex?

Salesforce saved me the trouble of wrapping the REST API in Apex and have started an Apex wrapper here in this GitHub repo. When you sign up you get a private key file that you have to upload into Salesforce to authenticate the calls. Currently the private key file the pilot gives you seems to be scoped by your org user’s associated email address.

public with sharing class EinsteinAction {

    public class Prediction {
        @InvocableVariable
        public String label;
        @InvocableVariable
        public Double probability;
    }

    @InvocableMethod(label='Classify the given files' description='Calls the Einsten API to classify the given ContentVersion files.')
    public static List<EinsteinAction.Prediction> classifyFiles(List<ID> contentVersionIds) {
        String access_token = new VisionController().getAccessToken();
        ContentVersion content = [SELECT Title,VersionData FROM ContentVersion where Id in :contentVersionIds LIMIT 1];
        List<EinsteinAction.Prediction> predictions = new List<EinsteinAction.Prediction>();
        for(Vision.Prediction vp : Vision.predictBlob(content.VersionData, access_token, 'GeneralImageClassifier')) {
            EinsteinAction.Prediction p = new EinsteinAction.Prediction();
            p.label = vp.label;
            p.probability = vp.probability;
            predictions.add(p);
            break; // Just take the most probable
        }
        return predictions;
    }
}

NOTE: The above method only handles the first file passed in the parameter list, the minimum needed for this demo. To bulkify it you can remove the limit in the SOQL and ideally put the file ID back in the response. It might also be useful to expose the other predictions and not just the first one. A sketch of that follows below.
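
For example, a bulkified variant might look like the following sketch. It would replace the method above (a class can only contain one @InvocableMethod), and the contentVersionId value assumes an extra @InvocableVariable ID field added to the Prediction class so callers can match predictions back to files.

    @InvocableMethod(label='Classify the given files' description='Calls the Einstein API to classify the given ContentVersion files.')
    public static List<EinsteinAction.Prediction> classifyFiles(List<ID> contentVersionIds) {
        String accessToken = new VisionController().getAccessToken();
        List<EinsteinAction.Prediction> predictions = new List<EinsteinAction.Prediction>();
        // NOTE: one callout per file, so Apex callout limits apply to large batches
        for (ContentVersion content : [SELECT Id, Title, VersionData FROM ContentVersion
                                       WHERE Id IN :contentVersionIds]) {
            for (Vision.Prediction vp : Vision.predictBlob(
                    content.VersionData, accessToken, 'GeneralImageClassifier')) {
                EinsteinAction.Prediction p = new EinsteinAction.Prediction();
                p.contentVersionId = content.Id; // assumed extra field (see above)
                p.label = vp.label;
                p.probability = vp.probability;
                predictions.add(p);
                break; // Just take the most probable per file
            }
        }
        return predictions;
    }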

The VisionController and Vision Apex classes from the GitHub repo are used in the above code. It looks like the repo is still very much WIP, so I would expect the API to change a bit. They also assume that you have followed the standalone example tutorial here.

Summary

This initial API has made it pretty easy to access a key part of AI with what is essentially only a handful of simple REST API calls. I’m looking forward to seeing where this goes and where Salesforce goes next with future AI services.