I’m proud to announce the third edition of my book has now been released. Back in March this year I took the plunge to start updating many key areas and add two brand new chapters. In the 2 years and 8 months since the last edition there have been several platform releases and an increasing number of new features and innovations that made this the biggest update ever! This edition also embraces the platform’s rebranding to Lightning, hence the book is now entitled Salesforce Lightning Platform Enterprise Architecture.
You can purchase this book directly from Packt or of course from Amazon, among other sellers. As is the case every year, at Salesforce events such as Dreamforce and TrailheaDX this book and many other awesome publications will be on sale. Here are some of the key update highlights:
Automation and Tooling Updates
Throughout the book the SFDX CLI, Visual Studio Code and 2nd Generation Packaging are leveraged. While the whole book is certainly larger, certain chapters actually reduced in size as steps previously reflecting clicks were replaced with CLI commands! At one point in time I was quite a master of Ant scripts and macros; they too have given way to built-in SFDX commands.
User Interface Updates
Lightning Web Components is a relatively new kid on the block, but benefits greatly from its standards compliance, meaning there is plenty of fun to go around exploring industry tools like Jest in the Unit Testing chapter. All of the book’s components have been rewritten to the Web Component standard.
Big Data and Async Programming
Big data was once a future concern for new products; these days it is very much a concern from the very start. The book covers Big Objects and Platform Events more extensively with worked examples, including ingest and calculations driven by Platform Events and Async Apex Triggers. Event Driven Architecture is something every Lightning developer should be embracing as the platform continues to deliver more and more standard features that leverage Platform Events.
Integration and Extensibility
I particularly enjoyed exploring the use of Platform Events as another means by which you can expose APIs from your packages to support more scalable invocation of your logic and asynchronous plugins.
External Integrations and AI
External integrations with other cloud services are a key part of application development and also the implementation of your solution, thus one of the two brand new chapters focuses on Connected Apps, Named Credentials, External Services and External Objects, with worked examples using existing services or sample Heroku-based services. Einstein has an ever-growing surface area across Salesforce products and the platform. While this topic alone is worth an entire book, I took the time in the second new chapter to enumerate Einstein from the perspective of the developer and customer configurations. The Formula 1 motor racing theme continued with the ingest of historic race data that you can run AI over.
Other Updates
Among other updates is a fairly extensive update to the CI/CD chapter, which still covers Jenkins but leverages the new Jenkins Pipeline feature to integrate the SFDX CLI. The Unit Testing chapter has also been extended with further thoughts on unit vs integration testing and a focus on Lightning Web Component testing.
The above are just the highlights for this third edition; you can see a full table of contents here. A massive thanks to everyone involved for providing the inspiration and support for making this third edition happen! Enjoy!
This blog will cover three ways by which you can interact programmatically with Salesforce DX. DX provides a number of time-saving utilities and commands; sometimes though you want to either combine those together or write your own that fit better with your way of working. Fortunately, DX is very open and in fact goes beyond just interacting with the CLI.
If you are familiar with DX you will likely already be writing or have used shell scripts around the CLI; those scripts are code, and the CLI commands and their outputs (especially in JSON mode) are the API in this case. The goal of this blog is to highlight this approach further and also other programming options via the REST API or Node.js.
Broadly speaking DX is composed of layers, from client-side services to those at the backend. Each of these layers is actually supported and available to you as a developer to consume as well. The diagram here shows these layers, and the following sections highlight some examples and further use cases for each.
DX CLI
Programming via shell scripts is very common and there is a huge wealth of content and help on the internet regardless of your platform. You can perform conditional operations, use variables and even perform loops. The one downside is that they are platform-specific. So if supporting users on multiple platforms is important to you, and you have skills in other more platform-neutral languages, you may want to consider automating the CLI that way.
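As a minimal sketch of this kind of automation (the scratch org definition path and aliases here are illustrative), the following shell script creates several scratch orgs in a loop and pushes the project source to each:
#!/bin/bash
# Create three scratch orgs and push the project source to each one
for i in 1 2 3; do
  sfdx force:org:create -f config/project-scratch-def.json -a "demo-org-$i"
  sfdx force:source:push -u "demo-org-$i"
done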
Regardless of how you invoke the CLI, parsing human-readable text from CLI commands is not a great experience and leads to fragility (as it can and should be allowed to change between releases). Thus all Salesforce DX commands support the --json parameter. First, let’s consider the default output of the following command.
sfdx force:org:display
=== Org Description
KEY VALUE
─────────────── ──────────────────────────────────────────────────────────────────────
Access Token 00DR00.....O1012
Alias demo
Client Id SalesforceDevelopmentExperience
Created By admin@sf-fx.org
Created Date 2019-02-09T23:38:10.000+0000
Dev Hub Id admin@sf-fx.org
Edition Developer
Expiration Date 2019-02-16
Id 00DR000000093TsMAI
Instance Url https://customization-java-9422-dev-ed....salesforce.com/
Org Name afawcett Company
Status Active
Username test....a@example.com
Now let’s contrast the output of this command with the --json parameter.
If you are using a programming language with support for interpreting JSON you can now start to parse the response to obtain the information you need. However, if you are using shell scripts you need a little extra assistance. Thankfully there is an awesome open source utility called jq to the rescue. Simply piping the JSON output through the jq command allows you to get a better look at things…
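For example (assuming the fields in the table above surface as camel-cased keys under result in the JSON response):
# Pretty print the full JSON response
sfdx force:org:display --json | jq .
# Extract a single value, e.g. the instance URL (-r strips the quotes)
sfdx force:org:display --json | jq -r .result.instanceUrl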
All the DX commands support JSON output, including the query commands…
sfdx force:data:soql:query -q "select Name from Account" --json | jq .result.records[0].Name -r
GenePoint
The Sample Script for Installing Packages with Dependencies has a great example of using JSON output from the query commands to auto-discover package dependencies. This approach can, however, be adapted to any object. It also shows another useful approach: combining Python within a shell script.
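As a hedged sketch of that combination, the JSON output from the CLI can be handed straight to an inline Python snippet for richer parsing:
#!/bin/bash
# Query records with the CLI and post-process the JSON with inline Python
sfdx force:data:soql:query -q "select Id, Name from Account" --json | \
python -c '
import sys, json
records = json.load(sys.stdin)["result"]["records"]
for record in records:
    print(record["Name"])
'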
DX Core Library and DX Plugins
This is a Node.js library that contains core DX functionality such as authentication, org management, project management and the ability to invoke REST APIs against scratch orgs via JSForce. This library is most commonly used when you are authoring a DX plugin; however, it can also be used standalone, for example if you have an existing Node.js-based tool or CLI library you want to embed DX in.
// A fragment based on the CLI alias commands, shown with the imports it
// depends on added for context (inquirer prompts for input, chalk colours output)
const inquirer = require('inquirer');
const chalk = require('chalk');
// 'aliases' is assumed to be an Aliases config instance from @salesforce/core,
// with 'alias' and 'username' already resolved for the org in question

// Enter a new alias
const { newAlias } = await inquirer.prompt([
  { name: 'newAlias', message: 'Enter a new alias (empty to remove):' }
]);
if (alias !== 'N/A') {
  // Remove the old one
  aliases.unset(alias);
  console.log(`Unset alias ${chalk.red(alias)}`);
}
if (newAlias) {
  aliases.set(newAlias, username);
  console.log(
    `Set alias ${chalk.green(newAlias)} to username ${chalk.green(username)}`
  );
}
Tooling API Objects
Finally, there is a host of Tooling API objects that support the above features, plus some extras. These are fully documented and accessible via the Salesforce Tooling API for use in your own plugins or applications capable of making REST API calls. Keep in mind you can do more than just query these objects; some also represent processes, meaning when you insert into them they do stuff! Here is a brief summary of the most interesting objects.
PackageUploadRequest, MetadataPackage, MetadataPackageVersion represent objects you can use as a developer to automate the uploading of first generation packages.
Package2, Package2Version, Package2VersionCreateRequest and Package2VersionCreateRequestError represent objects you can use as a developer to automate the uploading of second generation packages.
PackageInstallRequest, SubscriberPackage, SubscriberPackageVersion and Package2Member (second generation only) represent objects that allow you to automate the installation of a package and also allow you to discover the contents of packages installed within an org.
SandboxProcess and SandboxInfo represent objects that allow you to automate the creation and refresh of Sandboxes, as well as query for existing ones. For more information see the summary at the bottom of this help topic.
SourceMember represents changes you make when using the Setup menu within a Scratch org. It is used by the push and pull commands to track changes. The documentation claims you can create and update records in this object; however, I would recommend that you only use it for informational purposes. For example, you could write your own poller tool to drive code generation based on custom object changes.
IMPORTANT NOTE: Be sure to consider what CLI commands exist to accomplish your need. As you’ve read above it’s easy to automate those commands and they manage a lot of the complexity in interacting with these objects directly. This is especially true for packaging objects.
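That said, where no CLI command exists, the objects above can still be queried from the command line via the Tooling API. A small sketch (run each query against the appropriate org via -u):
# The -t flag targets the Tooling API rather than the regular data API
sfdx force:data:soql:query -t -q "SELECT MemberName, MemberType FROM SourceMember" --json
sfdx force:data:soql:query -t -q "SELECT SandboxName, Status FROM SandboxProcess" --json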
Summary
The above options represent a rich set of abilities to integrate and extend DX. Keep in mind the deeper you go the more flexibility you get, but you are also taking on more complexity. So choose wisely and/or use a mix of approaches. Finally, worthy of mention is the future of the SFDX CLI and Oclif. Salesforce is busy updating the internals of the DX CLI to use this library, and once complete this will open up cool new possibilities such as CLI hooks, which will allow you to extend the existing commands.
Building solutions on the Lightning Platform is a highly collaborative process, due to its unique ability to allow Trailblazers in a team to operate in no code, low code and/or code environments. Lightning Flow is a Salesforce native tool for no code automation and Apex is the native programming language of the platform — the code!
A flow author is able to create no-code solutions using the Cloud Flow Designer tool that can query and manipulate records, post Chatter posts, manage approvals, and even make external callouts. Conversely, using Salesforce DX, the Apex developer can of course do all these things and more! This blog post presents a way in which two Trailblazers (meaning a flow author and an Apex developer) can consider options that allow them to share the work in both building and maintaining a solution.
Often a flow is considered the start of a process: typically and traditionally a UI wizard or, more latterly, something that is triggered when a record is updated (via Process Builder). We also know that via invocable methods, flows and processes can call Apex. What you might not know is that the reverse is also true! Even if you have decided to build a process via Apex, you can still leverage flows within that Apex code. Such flows are known as autolaunched flows, as they have no UI.
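As a minimal sketch (the flow name, variables and logic here are hypothetical), an autolaunched flow can be constructed and started from Apex via the Flow.Interview API:
public with sharing class DiscountService {
    // Calls a hypothetical autolaunched flow named CalculateDiscount,
    // passing an input variable and reading back an output variable
    public static Decimal calculateDiscount(Id opportunityId) {
        Map<String, Object> inputs = new Map<String, Object>{
            'OpportunityId' => opportunityId };
        Flow.Interview.CalculateDiscount discountFlow =
            new Flow.Interview.CalculateDiscount(inputs);
        discountFlow.start();
        // Output variables assigned within the flow are available after start()
        return (Decimal) discountFlow.getVariableValue('Discount');
    }
}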
It’s been nearly 9 years since I created my first Salesforce developer account. Back then I was leading a group of architects building on-premise enterprise applications with Java/J2EE and Microsoft .NET. It is fair to say my decision to refocus my career not only on building the first Accounting application in the cloud, but to do so on an emerging and comparatively prescriptive platform, was a risk. Although it has not been an easy ride (leading and innovating rarely is), it is a journey that has inspired me and my perspective on successfully delivering enterprise applications.
Clearly since 2008 things have changed a lot! For me though, it was in 2014 when the platform really started to evolve in a significant way, when Lightning struck! It has continued to evolve at an increasingly rapid pace. Not just for the front end architecture, but the backend and latterly the developer tooling as well.
Component and Container Driven UI Development
Decomposing code into smaller reusable units is not exactly new, but has arguably taken time to find its feet in the browser. By making Lightning Components the heart of their next generation framework, Salesforce made decomposition and reuse the primary consideration and moved us away from monolithic page-centric thinking. Components need a place to live! With the increase of usability features in the various Lightning containers, namely Experience, Mobile and Community, we are further encouraged to build once, run everywhere. Lightning Design System has not only taken the legwork out of creating great UIs, but also brings with it often forgotten aspects such as keyboard navigation and support for accessibility.
Metadata Driven Solutions
Metadata has been at the heart of Salesforce since the beginning, driving it forward as a low-code or zero-code, high-productivity platform for creating solutions and applying customisations. When Salesforce created Custom Metadata, it enabled developers to also deliver solutions that harness the same strengths that have made the platform so successful, driving productivity up and implementation effort and timescales down.
Event Driven Architecture
Decomposition of processing is key to scalability and resiliency. While we often get frustrated with the governors, especially in an interactive/synchronous context, the reality is that safeguarding the server resources responsible for delivering a responsive UI to the user is critical. Batch Apex, Future and Queueables have long been ways to manage async processing. With Platform Events, Salesforce has delivered a much more open and extensible approach to orchestrating processing within the platform as well as off platform. With a wealth of APIs for developers on and off platform, and tooling integration, EDA is now firmly integrated into the platform. Retry semantics with Platform Events is also a welcome addition to what has previously been left to the developer when utilising the aforementioned technologies.
Industry Standards and Integrations
Salesforce has always been strong in terms of its own APIs, the Enterprise and Partner APIs being the classic go-to APIs, now available in REST form. With External Objects and External Services supporting the OData and Swagger industry standards, off-platform data sources and external APIs can be consumed at a much reduced implementation overhead, and without the user having to leave behind the value of various platform tools or the latest Lightning user experience.
Open Tools and Source Driven Development
The tooling ecosystem has been a rich tapestry of storytelling and is still emerging. The main focus and desire has been to leverage other industry standard approaches such as Continuous Integration and Deployment, with varying degrees of success. With Salesforce DX going GA, the first wave of change is now with us, with the ability to define, create, manage and destroy development environments at will, and with more APIs and services allowing for richer IDE experiences to be built in a more open and IDE-agnostic way. I am very much looking forward to the future of DX, especially upcoming improvements around packaging.
Hybrid Architectures
Last but not least, many of the above advancements provide more secure, responsive and integrated options for leveraging services and capabilities of other cloud application platforms. Heroku is the natural choice for those of us wanting to stay within the Salesforce ecosystem. With both Heroku Connect and Salesforce Connect (aka External Objects), integrating and synchronising data is now possible with much greater ease and reliability. Platform Events and External Services also both provide additional means for developers to connect the two platforms and take advantage of broader languages, libraries and additional compute resources. FinancialForce has open sourced an exciting new library, Orizuru, to assist in integrating Force.com and Heroku that will be showcased for the first time at Dreamforce.
The above list is certainly not exhaustive when you consider Big Data (Big Objects), Analytics (Einstein/Wave Analytics) and of course AI/ML (Einstein Platform). It’s a great time to be heading into my 5th Dreamforce and I am sure the list will grow even further!
I will be presenting in the following sessions at Dreamforce 2017.
In my previous blog I covered an exciting new integration tool from Salesforce, which consumes APIs that have a descriptor (or schema) associated with them. External Services allows point-and-click integration with APIs. The ability for Salesforce to consume APIs complying with API schema standards is a pretty huge step forward, extending its ability to integrate with ease in a way that is in keeping with its low-barrier-to-entry development and clicks-not-code mantra.
At the time of writing my previous blog, only the Interagent schema was supported by External Services. However, as of the Winter’18 release this is no longer the case. In this blog I will explore the more widely adopted Swagger / Open API 2.0 standard, using Node.js, Heroku and External Services. As a bonus topic, I will also touch on using Swagger Code Generator with Apex!
One of the many benefits of supporting the Swagger / Open API standard is the ability to generate documentation for it. The following screenshot shows the API schema on the left and generated documentation on the right. What is also very cool about this is the Try this operation button. Give it a try for yourself now!
What’s the difference between Swagger and Open API 2.0? This was a question I asked myself and thought I would cover the answer here. Basically, as of Swagger v2.0, there is no difference; the Open API Initiative is a rebranding, born out of the huge adoption Swagger has seen since its creation. This move means its future is more formalised and it has a more meaningful name. You can read more about this amazing story here.
Choosing your methodology for API development
The schema shown above might look a bit scary and you might well want to just get writing code and think about the schema when you’re ready to share your API. This is certainly supported and there are some tools that support generation of the schema via JSDoc comments in your code or via your joi schema here (useful for existing APIs).
However, to really embrace an API-first strategy in your development team, I feel you should start with the requirements and thus the schema first. This allows others in your team or the intended recipients to review the API before it has been developed and even test it out with stub implementations. In my research I was thus drawn to Swagger Node, a tool set donated by Apigee that embraces API-design-first. Read more pros and cons here. It is also the formal Node.js implementation associated with Swagger.
The following describes the development process of API-design-first.
Swagger Node is very easy to get started with and is well documented here. It supports the full API-design-first development process shown in the diagram above. The editor (also shown above) is really useful for getting used to writing schemas, and the UI is dynamically refreshed, including errors.
The overall Node.js project is still pretty simple (GitHub repo here), now consisting of three files. The schema is edited in YAML file format (translated to JSON when served up to tools). The schema for the ASCIIArt service now looks like the following and is pretty self-describing. For further documentation on Swagger / Open API 2.0 see here.
swagger: "2.0"
info:
version: "1.0.0"
title: AsciiArt Service
# during dev, should point to your local machine
host: localhost:3000
# basePath prefixes all resource paths
basePath: /
#
schemes:
# tip: remove http to make production-grade
- http
- https
# format of bodies a client can send (Content-Type)
consumes:
- application/json
# format of the responses to the client (Accepts)
produces:
- application/json
paths:
/asciiart:
# binds a127 app logic to a route
x-swagger-router-controller: asciiart
post:
description: Returns ASCIIArt to the caller
# used as the method name of the controller
operationId: asciiart
consumes:
- application/json
parameters:
- in: body
name: body
description: Message to convert to ASCIIArt
schema:
type: object
required:
- message
properties:
message:
type: string
responses:
"200":
description: Success
schema:
# a pointer to a definition
$ref: "#/definitions/ASCIIArtResponse"
/schema:
x-swagger-pipe: swagger_raw
# complex objects have schema definitions
definitions:
ASCIIArtResponse:
required:
- art
properties:
art:
type: string
The entry point of the Node.js app, the server.js file now looks like this…
'use strict';
var SwaggerExpress = require('swagger-express-mw');
var app = require('express')();
module.exports = app; // for testing
var config = {
appRoot: __dirname // required config
};
SwaggerExpress.create(config, function(err, swaggerExpress) {
if (err) { throw err; }
// install middleware for swagger ui
app.use(swaggerExpress.runner.swaggerTools.swaggerUi());
// install middleware for swagger routing
swaggerExpress.register(app);
var port = process.env.PORT || 3000;
app.listen(port);
});
Note: I changed the Node.js web server framework from hapi (used in my previous blog) to express, as I could not get the Swagger UI to integrate with hapi.
The code implementing the API has been moved to its own asciiart.js file.
var figlet = require('figlet');
function asciiart(request, response) {
// Call figlet to generate the ASCII Art and return it!
const msg = request.body.message;
figlet(msg, function(err, data) {
response.json({ art: data});
});
}
module.exports = {
asciiart: asciiart
};
Note: There is no parameter validation code written here, the Swagger Node module dynamically implements parameter validation for you (based on what you define in the schema) before the request reaches your code! It also validates your responses.
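For example, assuming the service is running locally on port 3000 (as the schema above declares), a quick smoke test with curl might look like this:
# Call the AsciiArt service; omitting the required "message" property is
# rejected by the schema-driven validation before the controller code runs
curl -X POST http://localhost:3000/asciiart \
     -H "Content-Type: application/json" \
     -d '{"message": "Hello World"}'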
To access the documentation simply use the path /docs. The documentation is generated automatically, no need to manage static HTML files. I have hosted my sample AsciiArt service in Heroku so you can try it by clicking the link below.
NOTE: There is a small bug that prevents the callout if the basePath is specified as root in the schema. Thus this has been commented out in the deployed version of the schema for now. Salesforce will likely have fixed this by the time you read this.
Swagger Tools
Swagger Editor, the interactive editor shown in the first screenshot of this blog.
Swagger Code Generator, creates server stubs and clients for implementing and calling Swagger-enabled APIs.
Swagger UI, the browser-based UI for generating documentation. You can call this from the command line and upload the static HTML files, or use frameworks like the one used in this blog to generate it on the fly.
Can we use Swagger to call or implement API’s authored in Apex?
Swagger Tools are available on a number of platforms, including recently added support for Apex clients. This gives you another option to consume APIs directly in Apex. It’s not clear if this is going to be a better route than consuming the classes generated by External Services; I suspect it might have some pros and cons tbh. Time will tell!
public class SwagDefaultApi {
SwagClient client;
public SwagDefaultApi(SwagClient client) {
this.client = client;
}
public SwagDefaultApi() {
this.client = new SwagClient();
}
public SwagClient getClient() {
return this.client;
}
/**
*
* Returns ASCIIArt to the caller
* @param body Message to convert to ASCIIArt (optional)
* @return SwagASCIIArtResponse
* @throws Swagger.ApiException if fails to make API call
*/
public SwagASCIIArtResponse asciiart(Map<String, Object> params) {
List<Swagger.Param> query = new List<Swagger.Param>();
List<Swagger.Param> form = new List<Swagger.Param>();
return (SwagASCIIArtResponse) client.invoke(
'POST', '/asciiart',
(SwagBody) params.get('body'),
query, form,
new Map<String, Object>(),
new Map<String, Object>(),
new List<String>{ 'application/json' },
new List<String>{ 'application/json' },
new List<String>(),
SwagASCIIArtResponse.class
);
}
}
The code is also generated in a Salesforce DX compliant format, very cool!
AI services are becoming more accessible to developers than ever before. Salesforce acquired MetaMind last year and made some big announcements at Dreamforce 2016. Like many developers, I was keen to find out about its API. The answer at the time was “check back with us next year!”.
True AI consists of recognition, be that visual or spoken, performing actions and the final, most critical piece, learning. This blog explores the spoken and visual recognition pieces further, with the added help of Flow for performing practically any action you can envision!
You may recall a blog from last year relating to integrating Salesforce with Amazon Echo. To explore the new Einstein API, I decided to leverage that work further in order to trigger recognition of my pictures from Alexa. The Salesforce Flow usage also enabled easy extensibility via custom Apex Actions. Thus the Einstein Apex Action was born! After a small bit of code and some configuration I had a working voice-activated image recognition demo up and running.
The following diagram breaks down what just happened in the video above. Followed by a deeper walk through of the Predictive Vision Service and how to call it.
Using Salesforce1 Mobile app I uploaded an image using the Files feature.
Salesforce stores this in the ContentVersion object for later querying (step 6).
Using the Alexa skill, called Einstein, I was able to “Ask Einstein about my photo”
This NodeJS skill runs on Amazon and simply routes requests to Salesforce Flow
Spoken terms are passed through to a named Flow via the Flow API.
The Flow is simple in this case, it queries the ContentVersion for the latest upload.
The Flow then calls the Einstein Apex Action which in turn calls the Einstein REST API via Apex (more on this later). Finally a Flow assignment takes the resulting prediction of what the image is actually of, and uses it to build a spoken response.
How does the Einstein Predictive Vision Service API work?
The service introduces a few new terms to get your head round. Firstly, a dataset is a named container for the types of images (labels) you want to recognise. The demo above uses a predefined dataset and model. A model is the output from the process of taking examples of each of your dataset’s labels and processing them (training). Initiating this process is pretty easy, you just make a REST API call with your dataset ID. All the recognition magic is behind the scenes, you just poll for when it’s done. All you have to do is test the model with other images. The service returns ranked predictions (using the dataset’s labels) on what it thinks your picture is of. When I ran the pictures above of my family dogs through it for the first time, I was pretty impressed that it detected the breeds.
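As an illustrative sketch only (the endpoint and form field names shown follow the public Einstein Vision documentation and may differ from the pilot version used here), testing a model is a single multipart POST:
# Ask the prebuilt image classifier for predictions on an image URL
curl -X POST https://api.einstein.ai/v2/vision/predict \
     -H "Authorization: Bearer <your-access-token>" \
     -F "sampleLocation=https://example.com/my-dog.jpg" \
     -F "modelId=GeneralImageClassifier"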
Salesforce saved me the trouble of wrapping the REST API in Apex and has started an Apex wrapper here in this GitHub repo. When you sign up you get a private key file you have to upload into Salesforce to authenticate the calls. Currently the private key file the pilot gives you seems to be scoped by your org user’s associated email address.
public with sharing class EinsteinAction {
public class Prediction {
@InvocableVariable
public String label;
@InvocableVariable
public Double probability;
}
@InvocableMethod(label='Classify the given files' description='Calls the Einstein API to classify the given ContentVersion files.')
public static List<EinsteinAction.Prediction> classifyFiles(List<ID> contentVersionIds) {
String access_token = new VisionController().getAccessToken();
ContentVersion content = [SELECT Title,VersionData FROM ContentVersion where Id in :contentVersionIds LIMIT 1];
List<EinsteinAction.Prediction> predictions = new List<EinsteinAction.Prediction>();
for(Vision.Prediction vp : Vision.predictBlob(content.VersionData, access_token, 'GeneralImageClassifier')) {
EinsteinAction.Prediction p = new EinsteinAction.Prediction();
p.label = vp.label;
p.probability = vp.probability;
predictions.add(p);
break; // Just take the most probable
}
return predictions;
}
}
NOTE: The above method is only handling the first file passed in the parameter list, the minimum needed for this demo. To bulkify you can remove the limit in the SOQL and ideally put the file ID back in the response. It might also be useful to expose the other predictions and not just the first one.
The VisionController and Vision Apex classes from the GitHub repo are used in the above code. It looks like the repo is still very much WIP, so I would expect the API to change a bit. They also assume that you have followed the standalone example tutorial here.
Summary
This initial API has made it pretty easy to access a key part of AI with what is essentially only a handful of simple REST API calls. I’m looking forward to seeing where this goes and where Salesforce goes next with future AI services.
As a self-confessed API junkie, each time the new Salesforce platform release notes land I tend to head straight to anything API-related, such as sections on the REST API, Metadata, Tooling, Streaming, Apex etc. This time the Spring’17 release seems more packed than ever with API potential for building apps on platform, off platform and combinations of the two! So I thought I would write a short blog highlighting what I found and my thoughts on the following…
New or updated APIs in Spring’17…
Lightning API (Developer Preview)
External Services (Beta)
Einstein Predictive Vision Service (Selected Customers Pilot)
Apex Stub API (GA)
SObject.getPopulatedFieldsAsMap API (GA)
Reports and Dashboard REST API Enhancements (GA)
Composite Resource and SObject Tree REST APIs (GA)
Enterprise Messaging Platform Java API (GA)
Bulk API v2.0 (Pilot)
Tooling API (GA)
Metadata API (GA)
Lightning API (Developer Preview)
This REST API seems to be a UI helper API that wraps a number of smaller, already existing REST APIs on the platform, providing a one-stop shop (a single API call) for reading both record data and related record metadata such as layout and theme information. In addition to that, it will resolve security before returning the response. If you’re building your own replacement UI or integrating the platform into a custom UI, this API looks like it could be quite a saving on development costs, compared to the many API calls and client logic that would be required to figure all this out. Reading between the lines, it’s likely the byproduct of a previously internal API Salesforce themselves have been using for Salesforce1 Mobile already? But that’s just a guess on my part! The good news if so, is that it’s likely pretty well battle tested from a stability and use case perspective. The API has its own dedicated Developer Guide if you want to read more.
External Services (Beta)
If there is one major fly in the ointment of the #clicksnotcode story so far, it’s been calling APIs. By definition they require a developer to write code to use them, right? Well, not anymore! A new feature delivered via Flow (and likely Process Builder) allows the user to effectively teach Flow about REST APIs via JSON Hyper-Schema (an emerging and very interesting independent specification for describing APIs). Once the user points the new External Services Wizard at an API supporting JSON Hyper-Schema, it uses the information to generate Apex code for an Invocable Method that makes the HTTP callout. Generating Apex code is a relatively new approach by Salesforce to a tricky requirement, bringing more power to non-developers, and one I am also a fan of. It is something they have done before for Transaction Security Policy plugins and of course Force.com Sites. At the time of writing I could not find it in my pre-release org, but I am keen to dig in deeper! Read more here.
Einstein Predictive Vision Service (Selected Customers Pilot)
Following the big splash made at Dreamforce 2016 around the new AI capability known as Einstein, the immediate question on my mind, and that of many other partners and developers, was “How do we make use of it from code?”. Spring provides invite-only pilot access to a new REST API around image processing and recognition. No mention yet of an Apex API though. You can read more about the API in the release notes and in more detail via the dedicated MetaMind “A Salesforce Company” site here. There is also some clearer information on exactly where it pops up in Salesforce products.
SObject.getPopulatedFieldsAsMap API (GA)
So calling this an “API” is a bit of a stretch I know, since it’s basically an existing Apex method on the SObject class. The big news though is that a gap in its behaviour has been fixed that makes it more useful. Basically, prior to Spring this method would not recognise fields set by code after a record (SObject) was queried. Thus if, for example, you were attempting to implement a generic FLS checking solution using the response from this method, you were left feeling a little disappointed. Thankfully the method now returns all populated fields, regardless of whether they are populated via the query or later set by code.
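A minimal sketch of the fixed behaviour (runnable as anonymous Apex against any org with an Account record):
// Fields set by code after a query now appear alongside queried fields
Account acct = [SELECT Id, Name FROM Account LIMIT 1];
acct.Rating = 'Hot'; // set by code, not by the query
Map<String, Object> populated = acct.getPopulatedFieldsAsMap();
// Prior to Spring'17 only Id and Name would be returned; now Rating appears too
System.debug(populated.keySet());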
Reports and Dashboard REST API Enhancements (GA)
It’s now possible to create and delete reports using the Analytics REST API (no mention of the Apex API equivalent, and I suspect this won’t be supported). Reports are a great way to provide a means for driving data selection for processes you develop. The Analytics API is available in REST and Apex contexts. As well as driving reports from your code, Report Notifications allow users to schedule reports and have actions performed if certain criteria are met. I recently covered the ability to invoke an Apex class and Flow in response to Report Notifications in this blog, Supercharging Salesforce Report Subscriptions. In Spring, the Reports REST API can now create notifications.
Composite Resource and SObject Tree REST APIs (GA)
An often overlooked implication of using multiple REST API calls in response to a user action is that if those calls update the database, there is no overarching database transaction. Meaning that if the user closes the page before processing is done, kills the mobile app, or your client code crashes, it is possible to leave the records in an invalid state. This is bad for database integrity. Apart from this, making multiple consecutive REST API calls can eat into an org’s 24-hour rolling API quota.
To address these use cases Salesforce has now released in GA form the composite and tree APIs (actually this was already GA, how did I miss that?!). The composite resource API allows you to package multiple CRUD REST API calls into one call and optionally control transaction scope via the allOrNone option, allowing the possibility of committing multiple records in one API request. The tree API allows you to create an account with a related set of contacts (for example) in one transaction-wrapped REST API call. Basically, the REST API is now bulkified! You can read more in the release notes here and in the REST API developers guide here and here.
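As a rough sketch (instance URL, API version and field values are illustrative), a composite request that creates an Account and a related Contact in one all-or-nothing transaction looks like this:
# One REST call, one transaction: allOrNone rolls everything back on error
curl -X POST https://yourInstance.salesforce.com/services/data/v39.0/composite \
  -H "Authorization: Bearer <access-token>" \
  -H "Content-Type: application/json" \
  -d '{
    "allOrNone": true,
    "compositeRequest": [
      { "method": "POST",
        "url": "/services/data/v39.0/sobjects/Account",
        "referenceId": "newAccount",
        "body": { "Name": "Example Inc" } },
      { "method": "POST",
        "url": "/services/data/v39.0/sobjects/Contact",
        "referenceId": "newContact",
        "body": { "LastName": "Example", "AccountId": "@{newAccount.id}" } }
    ]
  }'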
Bulk API v2.0 (Pilot)
Salesforce is overhauling its long-standing Bulk REST API. Chances are you have not used it much, as it’s mostly geared towards data loading tools and integration frameworks (it’s simply invoked by ticking a box in the Salesforce Data Loader). The first phase of v2.0 changes to this API allows larger CSV files to be uploaded and automatically chunked by the platform without the developer having to split them. It also changes the way limits are imposed, making them more record-centric. Read more here.
Tooling API (GA)
The Tooling API appears to have taken on new REST API resources that expose more standard aspects of the platform, such as formula functions and operators. For those building alternative UIs over these features it’s a welcome alternative to hard-coding these lists and having to remember to check and update them each release. Read more here.
Metadata API (GA)
Ironically my favourite API, the Metadata API, has undergone mainly typical changes relating to new features elsewhere in the release. So no new methods or general features. I guess, given all the great stuff above, I cannot feel too sad! Especially with the recent announcement from the Apex PM that the native Apex Metadata API is finally under development; of course, safe harbour and no statement yet on dates… but progress!
The Amazon Echo device sits in your living room or office and listens to your verbal instructions, much like Siri. It performs various activities, such as fetching and relaying information and/or performing actions on your behalf. It also serves as a large Bluetooth speaker. Now, after a run in the US, it has finally been released in the UK!
Why am I writing about it here? Well, it has an API of course! So let’s roll up our sleeves with an example I built recently with my FinancialForce colleague and partner in crime for all things gadget and platform, Kevin Roberts.
Kevin reached out to me when he noticed that Amazon had built this device with a means to teach it to respond to new phrases. Developers can extend its phrases by creating new Skills. You can read and hear more about the results over on the FinancialForce blog site.
To create a Skill you need to be a developer, capable of implementing a REST API endpoint that Amazon calls out to when the Echo recognizes a phrase you have trained it with. You can do this in practically any programming language you like of course, providing you comply with the documented JSON definition and host it securely.
One thing that simplifies the process is hosting your skill code through the Amazon Lambda service. Lambda supports Java, Python and NodeJS, as well as setting up the security stack for you, leaving you to just provide the code! You can even type your code directly into the developer console provided by Amazon.
Training your Skill
You cannot just say anything to Amazon Echo and expect it to understand; it’s clever, but not that clever (yet!). Every Skill developer has to provide a set of phrases / sample utterances. From these, Amazon does some clever stuff behind the scenes to compile them into a form its speech recognition algorithms can match a user’s spoken words to.
You are advised to provide as many utterances as you can, up to 50,000 of them in fact, to cover the many varied ways in which we can say things differently but mean the same thing. The sample utterances must all start with an identifier, known as the Intent. You can see various sample utterances for the CreateLead and GetLatestLeads intents below.
CreateLead Lets create a new Lead
CreateLead Create me a new lead
CreateLead New lead
CreateLead Help me create a lead
GetLatestLeads Latest top leads?
GetLatestLeads What are our top leads?
Skills have names, which users can search for in the Skills Marketplace, much like an App does on your phone. For a Skill called “Lead Helper”, users would speak the following phrases to invoke any of its intents.
“Lead Helper, Create me a new lead”
“Lead Helper, Lets create a new lead”
“Lead Helper, Help me create a lead”
“Lead Helper, What are our top leads?”
Your sample utterances can also include parameters / slots.
DueTasks What tasks are due for {Date}?
DueTasks Any tasks that are due for {Date}?
Slots are essentially parameters to your Intents; Amazon supports various slot types. The date slot type is quite flexible in terms of how it handles relative dates.
“Task Helper, What tasks are due next thursday?”
“Task Helper, Any tasks that are due for today?”
Along with your sample utterances you need to provide an intent schema, this lists the names of your intents (as referenced in your sample utterances) and the slot names and types. Further information can be found in Defining the Voice Interface.
Mapping Skill Intents and Slots to Flows and Variables
As I mentioned above, Skill developers implement a REST API endpoint. Instead of receiving the spoken words as raw text, it receives the Intent name and name/value pairs of Slot names and values. That endpoint can then invoke the appropriate database query or action and generate a response (as a string) to speak back to the user.
To map this to Salesforce Flows, we can consider the Intent name as the Flow Name and the Slot name/values as Flow Input Parameters. Flow Output Parameters can be used to generate the spoken response to the user. For the example above you would define a Flow called DueTasks with the following named input and output Flow parameters.
Flow Name: DueTasks
Flow Input ParameterName: Alexa_Slot_Date
Flow Output ParameterName: Alexa_Tell
You can then basically use the Flow Assignment element to adjust the variable values, as well as other elements to query and update records accordingly. By using an output variable named Alexa_Tell before your Flow ends, you end the conversation with a single response contained within the text variable.
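Under the covers the skill invokes the flow via the Actions REST API; here is a hedged sketch of the equivalent call with curl (instance URL, API version and date value are illustrative):
# Invoke the DueTasks flow, passing the slot value as a flow input variable
curl -X POST https://yourInstance.salesforce.com/services/data/v37.0/actions/custom/flow/DueTasks \
  -H "Authorization: Bearer <access-token>" \
  -H "Content-Type: application/json" \
  -d '{ "inputs": [ { "Alexa_Slot_Date": "2016-11-24" } ] }'
# The response contains outputValues, including Alexa_Tell, which is used
# to build the spoken response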
For another example see the Echo sample here; this one simply repeats (“echoes”) the name given by the user when they speak a phrase with their name in it.
The sample utterances and intent schema are shown below. These utterances also use a literal slot type, which is a kind of picklist with variable possibilities, meaning that Andrew, Sarah, Kevin and Bob are just sample values; users can use other words in the Name slot and it is up to the developer to validate them if it’s important.
Echo My name is {Andrew|Name}
Echo My name is {Sarah|Name}
Echo My name is {Kevin|Name}
Echo My name is {Bob|Name}
Alternatively, if you create and assign the Alexa_Ask variable in your Flow, this starts a conversation with your user. In this case any Input/Output Flow Parameters are retained between Flow calls. Finally, if you suffix any slot name with Number (for example, a slot named AmountNumber would be Alexa_Slot_AmountNumber), this will ensure that the value gets converted correctly to pass to a Flow Variable of type Number.
The following phrases are for the Conversation Flow included in the samples repository.
Conversation About favourite things
Conversation My favourite color is {Red|Color}
Conversation My favourite color is {Green|Color}
Conversation My favourite color is {Blue|Color}
Conversation My favourite number is {Number}
NodeJS Custom Skill
To code my Skill I went with NodeJS, as I had not done a lot of coding in it and wanted to challenge myself. The other challenge I set myself was to integrate in a generic and extensible way with Salesforce. Thus I wanted to incorporate my old friend Flow!
With its numerous elements for conditional logic and for reading and updating the database, Flow is the perfect solution to integrating with Salesforce in the only way we know how on the Salesforce platform, with clicks not code! Now of course Amazon does not talk Flow natively, so we need some glue!
var AlexaSkill = require('./AlexaSkill');
var nforce = require('nforce');
/**
* SalesforceFlowSkill is a child of AlexaSkill.
* To read more about inheritance in JavaScript, see the link below.
*
* @see https://developer.mozilla.org/en-US/docs/Web/JavaScript/Introduction_to_Object-Oriented_JavaScript#Inheritance
*/
var APP_ID = 'amzn1.echo-sdk-ams.app.your-app-id'; // your Skill's application ID (placeholder value)
var SalesforceFlowSkill = function () {
AlexaSkill.call(this, APP_ID);
};
The AlexaSkill base class exposes four methods you can override, onSessionStarted, onLaunch, onSessionEnded and onIntent. As you can see from the method names, requests to your skill code can be scoped in a session. This allows you to manage conversations users can have with the device. Asking questions and gathering answers within the session that build up to perform a specific action.
I implemented the onIntent method to call the Flow API.
SalesforceFlowSkill.prototype.eventHandlers.onIntent =
function (intentRequest, session, response) {
// Handle the spoken intent from the user
// ...
}
Calling the Salesforce Flow API from NodeJS
Within the onIntent method I used the nforce library to perform OAuth username and password authentication, for simplicity. Though note that Alexa Skills do support the OAuth web flow by linking accounts. The following code performs the authentication with Salesforce.
SalesforceFlowSkill.prototype.eventHandlers.onIntent =
    function (intentRequest, session, response) {
// Configure a connection
var org = nforce.createConnection({
clientId: 'yourclientid',
clientSecret: 'yoursecret',
redirectUri: 'http://localhost:3000/oauth/_callback',
mode: 'single'
});
// Call a Flow!
org.authenticate({ username: USER_NAME, password: PASSWORD}).
then(function() {
The following code calls the Flow API, again via nforce. It maps the slot name/values to parameters and returns any Flow output variables back in the response. A session will be kept open when the response.ask method is called. In this case any Input/Output Flow Parameters are retained in the Session and passed back into the Flow again.
// Build Flow input parameters
var params = {};
// From Session...
for(var sessionAttr in session.attributes) {
params[sessionAttr] = session.attributes[sessionAttr];
}
// From Slots...
for(var slot in intent.slots) {
if(intent.slots[slot].value != null) {
if(slot.endsWith('Number')) {
params['Alexa_Slot_' + slot] = Number(intent.slots[slot].value);
} else {
params['Alexa_Slot_' + slot] = intent.slots[slot].value;
}
}
}
// Call the Flow API
var opts = org._getOpts(null, null);
opts.resource = '/actions/custom/flow/'+intentName;
opts.method = 'POST';
var flowRunBody = {};
flowRunBody.inputs = [];
flowRunBody.inputs[0] = params;
opts.body = JSON.stringify(flowRunBody);
org._apiRequest(opts).then(function(resp) {
// Ask or Tell?
var ask = resp[0].outputValues['Alexa_Ask'];
var tell = resp[0].outputValues['Alexa_Tell'];
if(tell!=null) {
// Tell the user something (closes the session)
response.tell(tell);
} else if (ask!=null) {
// Store output variables in Session
for(var outputVarName in resp[0].outputValues) {
if(outputVarName == 'Alexa_Ask')
continue;
if(outputVarName == 'Alexa_Tell')
continue;
if(outputVarName == 'Flow__InterviewStatus')
continue;
session.attributes[outputVarName] =
resp[0].outputValues[outputVarName];
}
// Ask another question (keeps session open)
response.ask(ask, ask);
    }
});
I would also like to call out that past Salesforce MVP, now Trailhead Developer Advocate, Jeff Douglas started the ball rolling with his Salesforce CRM examples, which are also worth checking out if you prefer to build something more explicitly in NodeJS.
This blog is my first video blog! Since Salesforce does not record the Developer Theatre sessions at the Salesforce World Tour events, I thought I would do a re-run at home of my session last week and publish it here. As you know I have a love for all things APIs, and while I typically focus in this blog on backend APIs, there is one I’ve been keen to explore for a while…
The Lightning Out API, as any good API should, brings great promise and, I’m pleased to say, reality, to further integrating and extending the power of the platform and generally simplifying our users’ lives. In this case boldly going where no Lightning Component has gone before…
Actions are Salesforce’s general term for tasks users can perform, either through buttons in various UIs on desktop, mobile, tablet etc, or in fact via non-UI processes such as those built via Process Builder or Automation Flows.
Actions are about “getting things done” in Salesforce. They encapsulate a piece of logic that allows a user to perform some work, such as sending email. When an action runs, it saves changes in your organization by updating the database. More here.
Over the years we’ve had many terms and ways to define these. Custom Button and Custom Link are perhaps the most obvious ones, which I’ve covered here in the past. Quick Actions (previously Publisher Actions) and more recently Action Links, which I covered in a past blog. Then of course the Standard Buttons: Edit, Delete, Follow, Submit for Approval etc, provided by the platform. Such actions appear in various places: Record layouts, List Views, Related Lists, Chatter and more recently Flexi Pages (aka Lightning Pages).
You might wonder then, if you had the task as a developer to build your own UI or tool that wanted to expose some or all of the above actions, it would be quite a challenge to find them all. Indeed, in some cases you may have had to resort to URL hacking to invoke some of them. Well, worry no longer, Salesforce’s clever architects now have you covered! Enter a new virtual SObject known as PlatformAction! Before we get onto what exactly virtual means, let’s review some Actions and some SOQL queries…
Consider this Account Record detail page in the Classic (or Aloha) UI…
Note down your record ID and use it in a query like the one below…
SELECT DeviceFormat, Label, Type, Section,
ActionTarget, ActionTargetType, ActionListContext
FROM PlatformAction
WHERE ActionListContext = 'Record' AND
SourceEntity = '001B000000D2V0n' AND
Section = 'Page' AND
DeviceFormat = 'Aloha'
In Developer Console you should see something like this…
Pretty cool huh!? Check out the ActionTarget field; for the Standard Button records, that’s the URL you can place on your UIs to invoke that action, simple as that! Better still, this is a supported way to get it, no more URL hacking! Now let’s add a couple of Custom Buttons and re-run the query…
We now see CustomButton records appear…
This next query reveals actions shown on a List View. I did note, however, that Custom List View buttons requiring record selection did not appear. I suspect this is due to them requiring more than a simple HTTP GET URL to invoke.
SELECT DeviceFormat, Label, Type, Section,
ActionTarget, ActionTargetType, ActionListContext
FROM PlatformAction
WHERE ActionListContext = 'ListView' AND
SourceEntity = 'Account' AND
Section = 'Page' AND
DeviceFormat = 'Aloha'
Other observations…
Prior to Summer’16 (out in preview as I write this), Apex SOQL was not supported, only REST API SOQL (see the sketch after this list). This was due to a limitation with the internally applied LIMIT keyword. This has now been resolved in Summer’16, so Apex SOQL now works!
SourceEntity can also be given an SObject API name, e.g. SourceEntity = ‘Account’; the results here are object-level buttons, like New or those you add to the MRU page.
DeviceFormat field value matters; if you leave it off, it defaults to Phone. Thus some actions will be missing from those in Desktop (Lightning Experience) or Aloha (Classic). I eventually found that Custom Buttons using Visualforce pages without the Lightning Supported checkbox set did not appear when querying with the Phone device type, for example.
User context matters, actions returned are user and configuration sensitive, meaning the record itself, record type and associated layout all contribute to the actions returned. Custom Buttons for example need to be on the relevant layout.
Label and Icon information, there are also fields that allow you to render appropriate labels and icons for the actions.
Related List actions, you can also retrieve actions shown on related lists, search for RelatedList in the help topic here.
Describe actions? You will notice some actions have an ActionTargetType of Describe? These are invoked via an API, something I will cover in a later blog.
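As referenced above, prior to Summer’16 PlatformAction had to be queried via the REST API. A minimal sketch (instance URL and API version illustrative; the SOQL is simply a URL-encoded version of the queries above):
curl "https://yourInstance.salesforce.com/services/data/v36.0/query?q=SELECT+Label,ActionTarget+FROM+PlatformAction+WHERE+ActionListContext='ListView'+AND+SourceEntity='Account'+AND+Section='Page'+AND+DeviceFormat='Aloha'" \
  -H "Authorization: Bearer <access-token>"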
So let’s discuss the “virtual SObject” bit!?!
You’re probably wondering what a virtual SObject is?
Well, my best guess is it’s an SObject that is not backed by physical data in the Salesforce database. If you check the documentation you’ll see fields just like any other object and it supports SOQL (with some limitations). My thinking is the records for this object are dynamically generated on demand, by doing all the heavy lifting internally to scan all the various historic places where actions have been defined.
Thank you Salesforce architects, this is now my #1 coolest Salesforce API!
What’s next?
For starters, no more URL hacking of those standard pages, no excuses now!
Helper class or Visualforce and/or Lightning component for actions?