Andy in the Cloud

From BBC Basic to Force.com and beyond…



The Third Edition

I’m proud to announce that the third edition of my book has now been released. Back in March this year I took the plunge to start updating many key areas and to add two brand new chapters. In the 2 years and 8 months since the last edition there have been several platform releases and an increasing number of new features and innovations that made this the biggest update ever! This edition also embraces the platform’s rebranding to Lightning, hence the book is now entitled Salesforce Lightning Platform Enterprise Architecture.

You can purchase this book direct from Packt or, of course, from Amazon among other sellers. As is the case every year, at Salesforce events such as Dreamforce and TrailheaDX this book and many other awesome publications will be on sale. Here are some of the key update highlights:

  • Automation and Tooling Updates
    Throughout the book the SFDX CLI, Visual Studio Code and 2nd Generation Packaging are leveraged. While the whole book is certainly larger, certain chapters actually reduced in size as steps previously reflecting clicks were replaced with CLI commands! At one point in time I was quite a master of Ant scripts and macros; these too have given way to built-in SFDX commands.
  • User Interface Updates
    Lightning Web Components is a relatively new kid on the block, but benefits greatly from its standards compliance, meaning there is plenty of fun to go around exploring industry tools like Jest in the Unit Testing chapter. All of the book’s components have been re-written to the Web Component standard.
  • Big Data and Async Programming
    Big data was once a future concern for new products; these days it is very much a concern from the very start. The book covers Big Objects and Platform Events more extensively with worked examples, including ingest and calculations driven by Platform Events and Async Apex Triggers. Event Driven Architecture is something every Lightning developer should be embracing as the platform continues to evolve around more and more standard features that leverage it.
  • Integration and Extensibility
    I particularly enjoyed exploring the use of Platform Events as another means by which you can expose APIs from your packages to support more scalable invocation of your logic and asynchronous plugins.
  • External Integrations and AI
    External integrations with other cloud services are a key part of application development and also of the implementation of your solution, thus one of the two brand new chapters focuses on Connected Apps, Named Credentials, External Services and External Objects, with worked examples against existing services or sample Heroku-based services. Einstein has an ever growing surface area across Salesforce products and the platform. While this topic alone is worth an entire book, I took the time in the second new chapter to enumerate Einstein from the perspective of the developer and customer configurations. The Formula 1 motor racing theme continues with the ingest of historic race data that you can run AI over.
  • Other Updates
    Among other updates is a fairly extensive update to the CI/CD chapter, which still covers Jenkins but now leverages the new Jenkins Pipeline feature to integrate the SFDX CLI. The Unit Testing chapter has also been extended with further thoughts on unit vs integration testing and a focus on Lightning Web Component testing.

The above are just the highlights for this third edition; you can see the full table of contents here. A massive thanks to everyone involved for providing the inspiration and support for making this third edition happen! Enjoy!



Highlights from TrailheaDX 2017

This was my first TrailheaDX and what an event it was! With my Field Guide in hand I set out into the wilderness! In this blog I’ll share some of my highlights, thoughts and links to the latest resources. Many of the newly announced things you can actually get your hands on now, which is amazing!

Overall the event felt well organized, if a little frantic at times. With smaller sessions of 30 minutes each, often 20 minutes after intros and questions, each was bite sized, but quite well tuned, with demos and code samples being shown.

SalesforceDX. Salesforce announced the public beta of this new technology aimed at improving the developer experience on the platform. SalesforceDX consists of several modules that will be extended further over time. Salesforce has done a great job at providing a wealth of Trailhead resources to get you started.

Einstein. Since its announcement, I and other developers have been waiting to get access to more custom tools and APIs; well, now that wait is well and truly over. As per my previous blogs, we’ve had the Einstein Vision API for a little while now. Announced at the event were no less than three other new Einstein related tools and APIs.

  • Einstein Discovery. Salesforce demonstrated a very slick looking tool that allows you to point and click your way through to analyzing various data sets, including those obtained from your custom objects! They have provided a Trailhead module on it here and I plan on digging in! Pricing and further info is here.
  • Einstein Sentiment API. Allows you to interpret text strings for terms that indicate if it is a positive, neutral or negative statement / sentiment. This can be used to scan case comments, forum posts, feedback, Twitter posts etc in an automated way and respond or be alerted accordingly to what is being said (see the Apex sketch after this list).
  • Einstein Intent API. Allows you to interpret text strings for meanings, such as instructions or requests, routing case comments or even implementing bots that can help automate or propose actions to be taken without human interpretation.
  • Einstein Object Detection API. An extension of the Einstein Vision API that allows individual items in a picture to be identified, for example a pile of items on a coffee table, such as a mug, magazine, laptop or pot plant! Each can then be recognized and classified to build up more intel on what’s in the picture for further processing and analysis.
  • PredictionIO on Heroku. Finally, if you want to go below the declarative tools or intentionally simplified Einstein APIs, you can build your own machine learning models using Heroku and the PredictionIO buildpack!
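To give a feel for how lightweight the text APIs are, here is a minimal Apex callout sketch for the Sentiment API. Treat the endpoint URL, the CommunitySentiment model name and the JSON payload shape as assumptions based on the pilot-era documentation; check the current Einstein docs and authentication flow before relying on it.

public with sharing class EinsteinSentimentDemo {

    // Assumed pilot-era endpoint; verify against the current Einstein Language docs
    private static final String SENTIMENT_ENDPOINT = 'https://api.einstein.ai/v2/language/sentiment';

    // Returns the raw JSON response containing ranked probabilities (positive / negative / neutral)
    public static String classify(String text, String accessToken) {
        HttpRequest req = new HttpRequest();
        req.setEndpoint(SENTIMENT_ENDPOINT);
        req.setMethod('POST');
        req.setHeader('Authorization', 'Bearer ' + accessToken);
        req.setHeader('Content-Type', 'application/json');
        // CommunitySentiment is the prebuilt model; swap in your own model Id if you train one
        req.setBody(JSON.serialize(new Map<String, String>{
            'modelId' => 'CommunitySentiment',
            'document' => text }));
        HttpResponse res = new Http().send(req);
        return res.getBody();
    }
}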

Platform Events. These allow messages to be published and subscribed to using a new object known as an Event object, suffixed with __e. Once created you can use a new Apex API’s or REST API’s to send messages and use either Apex Triggers or Streaming API to receive them. There is also a new Process Builder action or Flow element to send messages. You can keep your messages within Force.com or use them to integrate between other cloud platforms or the browser. The possibilities are quite endless here, aysnc processing, inter app comms, logging, ui notifications…. i’m sure myself and other bloggers will be exploring them in the coming months!
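As a rough sketch of the mechanics (the Order_Shipped__e event object and its OrderNumber__c field below are hypothetical, purely for illustration), publishing and subscribing looks something like this:

// Publishing an event, e.g. from a service class or anonymous Apex
Order_Shipped__e evt = new Order_Shipped__e(OrderNumber__c = 'ORD-1001');
Database.SaveResult result = EventBus.publish(evt);
System.debug('Event published? ' + result.isSuccess());

// Subscribing via an Apex Trigger (a separate trigger file on the event object)
trigger OrderShippedSubscriber on Order_Shipped__e (after insert) {
    for (Order_Shipped__e e : Trigger.new) {
        System.debug('Order shipped event received: ' + e.OrderNumber__c);
    }
}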

External Services. If you find a cool API you want to consume in Force.com, you currently have to write some code. No longer! If you have a schema that describes that API, you can use the External Services wizard to generate some Apex code that will call out to the API. What’s special about this is that the generated Apex code is accessible via Process Builder and Flow, making clicks-not-code API integration possible. It’s an excellent way to integrate with complementary services you or others might develop on platforms such as Heroku. I have just submitted a session to Dreamforce 2017 to explore this further, fingers crossed it gets selected! You can read more about it here in the meantime.

Sadly I did have to make a key decision to focus on the above topics and not so much on Lightning. I heard from other attendees that these sessions were excellent, and I did catch a brief sight of dynamic component rendering in Lightning App Builder, very cool!

Thanks Salesforce for filling my blog backlog for the next year or so! 😉



Image Recognition with the Salesforce Einstein API and an Amazon Echo

AI services are becoming more accessible to developers than ever before. Salesforce acquired MetaMind last year and made some big announcements at Dreamforce 2016. Like many developers, I was keen to find out about its API. The answer at the time was “check back with us next year!”.

With Spring ’17 that question has been answered, at least as far as image recognition is concerned, with the availability of the Salesforce Einstein Predictive Vision Service (Pilot). The pilot is open to the public and is free to sign up.

True AI consists of recognition, be that visual or spoken, performing actions and the final, most critical piece, learning. This blog explores the spoken and visual recognition pieces further, with the added help of Flow for performing practically any action you can envision!

You may recall a blog from last year relating to integrating Salesforce with Amazon Echo. To explore the new Einstein API, I decided to leverage that work further in order to trigger recognition of my pictures from Alexa. The Salesforce Flow usage also enabled easy extensibility via custom Apex Actions. Thus the Einstein Apex Action was born! After a small bit of code and some configuration I had a working voice activated image recognition demo up and running.

The following diagram breaks down what just happened in the video above, followed by a deeper walkthrough of the Predictive Vision Service and how to call it.

[Diagram: Amazon Echo and Einstein integration]

  1. Using the Salesforce1 Mobile app, I uploaded an image using the Files feature.
  2. Salesforce stores this in the ContentVersion object for later querying (step 6).
  3. Using the Alexa skill, called Einstein, I was able to “Ask Einstein about my photo”.
  4. This NodeJS skill runs on Amazon and simply routes requests to Salesforce Flow.
  5. Spoken terms are passed through to a named Flow via the Flow API.
  6. The Flow is simple in this case; it queries the ContentVersion for the latest upload.
  7. The Flow then calls the Einstein Apex Action which in turn calls the Einstein REST API via Apex (more on this later). Finally a Flow assignment takes the resulting prediction of what the image is actually of, and uses it to build a spoken response.

Standard Example: The above example exposes the Einstein API in an Apex Action purely to integrate with the Amazon Echo use case. The pilot documentation walks you through a standalone Apex and Visualforce example to get you started.

How does the Einstein Predictive Vision Service API work?

The service introduces a few new terms to get your head around. Firstly, a dataset is a named container for the types of images (labels) you want to recognise. The demo above uses a predefined dataset and model. A model is the output from the process of taking examples of each of your dataset’s labels and processing them (training). Initiating this process is pretty easy, you just make a REST API call with your dataset ID. All the recognition magic happens behind the scenes, you just poll for when it’s done. All you have to do then is test the model with other images. The service returns ranked predictions (using the dataset’s labels) on what it thinks your picture is of. When I ran the pictures above of my family dogs through it for the first time, I was pretty impressed that it detected the breeds.
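To give a feel for the testing step, here is a minimal sketch using the Vision Apex wrapper covered later in this post. The predictUrl method signature and the example image URL are assumptions on my part; ‘GeneralImageClassifier’ is the predefined model used in the demo.

// Sketch: test a model against an image hosted at a URL
String accessToken = new VisionController().getAccessToken();
for (Vision.Prediction p : Vision.predictUrl('https://example.com/photos/dog.jpg', accessToken, 'GeneralImageClassifier')) {
    // Ranked predictions: a label from the dataset plus a probability
    System.debug(p.label + ' : ' + p.probability);
}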


While quite fiddly at times, it is also well worth walking through how to set up your own image datasets and training to get a hands-on example of the above.

How do I call the Einstein API from Apex?

Salesforce saved me the trouble of wrapping the REST API in Apex and have started an Apex wrapper here in this GitHub repo. When you sign up you get a private key file that you have to upload into Salesforce to authenticate the calls. Currently the private key file the pilot gives you seems to be scoped by your org user’s associated email address.

public with sharing class EinsteinAction {

    // Simple DTO exposing the prediction label and probability to Flow
    public class Prediction {
        @InvocableVariable
        public String label;
        @InvocableVariable
        public Double probability;
    }

    @InvocableMethod(label='Classify the given files' description='Calls the Einstein API to classify the given ContentVersion files.')
    public static List<EinsteinAction.Prediction> classifyFiles(List<ID> contentVersionIds) {
        // Authenticate against the Einstein API (VisionController comes from the GitHub repo mentioned above)
        String access_token = new VisionController().getAccessToken();
        // For this demo only the first file passed in is classified (see NOTE below)
        ContentVersion content = [SELECT Title, VersionData FROM ContentVersion WHERE Id IN :contentVersionIds LIMIT 1];
        List<EinsteinAction.Prediction> predictions = new List<EinsteinAction.Prediction>();
        for(Vision.Prediction vp : Vision.predictBlob(content.VersionData, access_token, 'GeneralImageClassifier')) {
            EinsteinAction.Prediction p = new EinsteinAction.Prediction();
            p.label = vp.label;
            p.probability = vp.probability;
            predictions.add(p);
            break; // Just take the most probable
        }
        return predictions;
    }
}

NOTE: The above method only handles the first file passed in the parameter list, the minimum needed for this demo. To bulkify it you can remove the LIMIT in the SOQL and ideally put the file ID back in the response. It might also be useful to expose the other predictions and not just the first one. A rough sketch of such a bulkified version follows.
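For illustration only, here is a sketch of what that bulkified variant could look like; the extra contentVersionId field and the class name are my own additions, not part of the original demo.

public with sharing class EinsteinActionBulk {

    public class Prediction {
        @InvocableVariable
        public Id contentVersionId; // the file this prediction relates to
        @InvocableVariable
        public String label;
        @InvocableVariable
        public Double probability;
    }

    @InvocableMethod(label='Classify the given files (bulk)' description='Classifies every ContentVersion file passed in.')
    public static List<Prediction> classifyFiles(List<Id> contentVersionIds) {
        String accessToken = new VisionController().getAccessToken();
        List<Prediction> predictions = new List<Prediction>();
        // No LIMIT 1: classify every file passed in (beware the per-transaction callout limit)
        for (ContentVersion content : [SELECT Id, VersionData FROM ContentVersion WHERE Id IN :contentVersionIds]) {
            for (Vision.Prediction vp : Vision.predictBlob(content.VersionData, accessToken, 'GeneralImageClassifier')) {
                Prediction p = new Prediction();
                p.contentVersionId = content.Id;
                p.label = vp.label;
                p.probability = vp.probability;
                predictions.add(p);
                break; // keep only the most probable prediction per file
            }
        }
        return predictions;
    }
}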

The VisionController and Vision Apex classes from the GitHub repo are used in the above code. It looks like the repo is still very much a work in progress, so I would expect the API to change a bit. They also assume that you have followed the standalone example tutorial here.

Summary

This initial API has made it pretty easy to access a key part of AI with what is essentially only a handful of simple REST API calls. I’m looking forward to seeing where this goes and what Salesforce does next with future AI services.