Andy in the Cloud

From BBC Basic to Force.com and beyond…



The Third Edition

I’m proud to announce that the third edition of my book has now been released. Back in March this year I took the plunge to start updating many key areas and adding two brand new chapters. In the 2 years and 8 months since the last edition there have been several platform releases and an increasing number of new features and innovations, which made this the biggest update ever! This edition also embraces the platform’s rebranding to Lightning, hence the book is now entitled Salesforce Lightning Platform Enterprise Architecture.

You can purchase this book directly from Packt or of course from Amazon, among other sellers. As is the case every year, at Salesforce events such as Dreamforce and TrailheaDX this book and many other awesome publications will be on sale. Here are some of the key update highlights:

  • Automation and Tooling Updates
    Throughout the book the SFDX CLI, Visual Studio Code and 2nd Generation Packaging are leveraged. While the whole book is certainly larger, certain chapters actually reduced in size as steps previously describing clicks were replaced with CLI commands! At one point in time I was quite a master of Ant scripts and macros; they too have given way to built-in SFDX commands.
  • User Interface Updates
    Lightning Web Components is a relatively new kid on the block, but it benefits greatly from its standards compliance, meaning there is plenty of fun to go around exploring industry tools like Jest in the Unit Testing chapter. All of the book’s components have been re-written to the Web Component standard.
  • Big Data and Async Programming
    Big data was once a future concern for new products; these days it is very much a concern from the very start. The book covers Big Objects and Platform Events more extensively with worked examples, including ingest and calculations driven by Platform Events and Async Apex Triggers. Event Driven Architecture is something every Lightning developer should be embracing as the platform continues to evolve around more and more standard platform features that leverage it.
  • Integration and Extensibility
    I particularly enjoyed exploring the use of Platform Events as another means by which you can expose APIs from your packages to support more scalable invocation of your logic and asynchronous plugins.
  • External Integrations and AI
    External integrations with other cloud services are a key part of application development and also the implementation of your solution, thus one of the two brand new chapters focuses on Connected Apps, Named Credentials, External Services and External Objects, with worked examples using existing services or sample Heroku-based services. Einstein has an ever-growing surface area across Salesforce products and the platform. While this topic alone is worth an entire book, I took the time in the second new chapter to enumerate Einstein from the perspective of developer and customer configurations. The Formula 1 motor racing theme continues with the ingest of historic race data that you can run AI over.
  • Other Updates
    Among other updates is a fairly extensive update to the CI/CD chapter, which still covers Jenkins but now leverages the new Jenkins Pipeline feature to integrate the SFDX CLI. The Unit Testing chapter has also been extended with further thoughts on unit vs integration testing and a focus on Lightning Web Component testing.

The above are just the highlights of this third edition; you can see the full table of contents here. A massive thanks to everyone involved for providing the inspiration and support that made this third edition happen! Enjoy!



Getting your users’ attention with Custom Notifications

Getting your users’ attention is not always easy; choosing how, when and where to notify them is critical. Ever since Lightning Experience and Salesforce Mobile came out, the notification bell has been a one stop shop for Chatter and Approval notifications, regardless of whether you are on your desktop or your mobile device.

In beta release at the time of writing is a new platform feature known as Notification Manager that allows you to send your own custom notifications to your users for anything your heart desires, from the very same locations, even on a user’s mobile device! This blog dives into this feature and how you can integrate it into your creations, regardless of whether you are an admin click-coder, Apex developer or REST API junkie.

Getting Started

The first thing you need to do is define a new Notification Type under the Setup menu. This is a simple process that involves giving it a name and deciding what channels you want the notification to go out on, currently the user’s desktop and mobile devices.

[Screenshot: Notification Type definition under Setup]

Once this has been done you can use the new Send Custom Notification action in Process Builder or Flow. This allows you to define the title and body of your notification, along with the target recipients (users, groups, queues and more) and the target record that determines the record the user sees when they click/tap the notification. The following screenshot shows an example of such an Action in Process Builder:-

[Screenshot: Send Custom Notification action in Process Builder]

Basically that is all there is to it! In a few clicks you will have empowered yourself with the ability to reach out to not only your users’ desktops but the actual notification experience on each of their mobile devices! You didn’t have to learn how to write a mobile app, figure out how to do mobile notifications, or register things with Google or Apple. I am honestly blown away at how easy and powerful this is!

So it is pretty easy to send notifications this way from Process Builder processes driven by record updates from the user, and also to reference field values to customize the notification text. However, in the ever expanding world of Platform Events, how do we send custom notifications based on Platform Events?

Sending Custom Notifications for Batch Apex Job Failures

One of my oldest and most popular blog posts discussed design best practices around Batch Apex jobs. One of the considerations it calls out is how important it is to route errors that occur in the background back to the user. Fast forward a bit to this blog, where I covered the new BatchApexError Platform Event as a means to capture and route batch errors (even uncatchable exceptions) in near realtime. It also describes a strategy to enable users to retry failed jobs. What it didn’t really solve is letting them know something had gone wrong without them checking a custom tab. Let’s change that!
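For context, a Batch Apex job only publishes BatchApexErrorEvent if it opts in by implementing the Database.RaisesPlatformEvents marker interface. A minimal sketch (the class name and query here are illustrative):

public class MyProcessorBatch implements Database.Batchable<SObject>, Database.RaisesPlatformEvents {
    public Database.QueryLocator start(Database.BatchableContext ctx) {
        return Database.getQueryLocator([SELECT Id FROM Account]);
    }
    public void execute(Database.BatchableContext ctx, List<SObject> scope) {
        // Processing that may throw; with RaisesPlatformEvents implemented,
        // unhandled (even uncatchable) exceptions publish a BatchApexErrorEvent
    }
    public void finish(Database.BatchableContext ctx) {}
}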

Process Builder is now able to subscribe to the standard BatchApexErrorEvent and thus enables you as an admin to apply filter and routing logic to failed batch jobs. When combined with custom notifications, those errors can now be routed to users’ devices and/or desktops in realtime. While Process Builder can subscribe to events, it does have some restrictions on what it can do with the event data itself. Thus we are going to call an autolaunched Flow from Process Builder to actually handle the event and send the custom notification from within Flow. If you are reading this wondering if your Apex code can get in on the action, the answer is yes (ish), more on this later though. The declarative solution utilizes one Process Builder process and two Flows. The separation of concerns between them is shown in the diagram below:-

[Diagram: separation of concerns between the Process Builder process and the two Flows]

Let’s work from the bottom to the top to understand why I decided to split it up this way. Firstly, SendCustomNotification is a Sub Flow (callable by other Flows) and is a pretty simple wrapper around the new Send Custom Notification action shown above. It does help with one wrinkle when working within Flow: you have to pass a Notification Type Id. In this case the Sub Flow leverages a small Custom Metadata Type map that contains the predefined Notification Names and Ids. This magic is encapsulated in the Sub Flow, so if that restriction is lifted in the future, it can easily be removed. You can take a closer look at this later through the sample code repository here.

[Screenshot: SendCustomNotification Sub Flow]
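As an aside, the kind of lookup this mapping implies would look something like the following in Apex; the Custom Metadata Type and field names here are assumptions for illustration, the real mapping lives in the sample repo:

// Hypothetical Custom Metadata Type mapping a friendly name to the
// Notification Type Id required by the Send Custom Notification action
CustomNotificationMapping__mdt mapping = [
    SELECT NotificationTypeId__c
    FROM CustomNotificationMapping__mdt
    WHERE DeveloperName = 'BatchApexErrorNotification'
    LIMIT 1];
Id notificationTypeId = mapping.NotificationTypeId__c;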

Next, the BatchApexErrorPlatformEventHandler Flow defines a set of input variables that are populated from the Process Builder process. These variables match the fields and types per the definition of the Batch Apex Error Event here. The only other thing it does is add the Id of the user that generated the event (aka the user who submitted the failed job) to the list of recipients passed to the SendCustomNotification Sub Flow above. This could also be a Group Id if you wanted to send the notification further.

[Screenshot: BatchApexErrorPlatformEventHandler Flow]

Lastly, in the screenshot below you see the Process Builder process that subscribes to the Batch Apex Error Event and maps the event field values to the input variables exposed from the BatchApexErrorPlatformEventHandler Flow via the EventReference. The example here is very simple, but you can now imagine how you can add other filter criteria to this process that allow you to inspect which Batch Apex job failed and route and/or adjust messaging in the notifications accordingly, all done declaratively of course!

[Screenshot: Process Builder process subscribing to the Batch Apex Error Event]

NOTE: It is not immediately apparent in all cases that you can access the event fields from Process Builder, since the documentation states they are not supported within formulas. I want to give a shout out to Alex Edelstein, PM for Flow, for clarifying that it is possible! Check out his amazing blog around all things Flow here. Finally, note that Process Builder requires an Object to map the incoming event to. In this case I mapped to a User record using the CreatedById field on the event.

Sending Custom Notifications from Code

The Send Custom Notification action is also exposed via the Salesforce Actions REST API defined here (hint hint for Doug Ayers’ Mass Action tool to support it). You can of course attempt to call this REST API via Apex as well. While there is currently no native Apex Action API, it turns out calling the above SendCustomNotification Flow from Apex works pretty well in the meantime. I have written a small wrapper around this technique to make it a little more elegant to perform from Apex; it also serves to abstract away this hopefully temporary workaround for Apex developers.

new CustomNotification()
    .type('MyNotificationType')
    .title('Fun Custom Notification')
    .body('Custom Notifications are Awesome!')
    .sendToCurrentUser();

The above Apex code results in the following notification appearing on your device!

[Screenshot: custom notification sent from Apex appearing on a mobile device]

This CustomNotification helper class is included in the sample code for this blog and leverages another class I wrote that wraps the native Apex Flow API. I used this wrapper because it allows me to mock the actual Flow invocation, since as far as I can see there is no way to assert that the notification was actually sent.
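For the curious, the helper boils down to something like the following sketch using the native Flow.Interview Apex API (the input variable names here are assumptions, check the Flow definition in the sample repo for the real ones):

Map<String, Object> inputs = new Map<String, Object> {
    'NotificationTypeName' => 'MyNotificationType',
    'NotificationTitle' => 'Fun Custom Notification',
    'NotificationBody' => 'Custom Notifications are Awesome!',
    'RecipientIds' => new List<String> { UserInfo.getUserId() }
};
// Resolve and run the autolaunched Flow by name
Flow.Interview flowInterview = Flow.Interview.createInterview('SendCustomNotification', inputs);
flowInterview.start();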

NOTE: When sending custom notifications via declarative tools and/or via code, I did confirm in my testing that they are included in the current transaction. Also, I recommend you always avoid calling Flow in loops in your Apex code; instead make your Flows take list variables (aka try to bulkify Flows called from Apex). Though not shown in the Apex above, the wrapped Flow takes a list of recipients.

Summary

So there you have it: custom mobile and desktop notifications sent from Process Builder, Flow, Apex and the REST API. Keep in mind of course that at the time of writing this is a Beta feature, so read the clause in the documentation carefully. Now go forth and start thinking of all the areas you can enable with this feature!

P.S. Check out another new cool feature called Lightning In-App Guidance.

 

 



FinancialForce Apex Common Community Updates

This short blog highlights a batch of new features recently merged into the FinancialForce Apex Common library, aka fflib. In addition to the various Dreamforce and blog resources linked from the repo, fans of Trailhead can also find modules relating to the library here and here. But please read this blog first before heading out to the trails to hunt down badges! It’s really pleasing to see the library continue to get great contributions, so here goes…

Added methods for detecting changed records with given fields in the Domain layer (fflib_SObjectDomain)

First up is a great new optimization feature for your Domain class methods from Nathan Pepper (aka MayTheSForceBeWithYou), based on a suggestion by Daniel Hoechst. Where applicable, it’s a good optimization practice to consider comparing the old and new values of fields relating to the processing you are doing in your Domain methods, to avoid unnecessary overheads. The new fflib_SObjectDomain.getChangedRecords method can be used as an alternative to the Records property to access just the records that have changed, based on the field list passed to the method.

// Returns a list of Accounts where the Name or AnnualRevenue has changed
List<Account> accounts =
  (List<Account>) getChangedRecords(
     new List<SObjectField> { Account.Name, Account.AnnualRevenue });
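For example, inside a Domain class update handler this lets you act only on records that actually changed. A minimal sketch, assuming a typical fflib-style Accounts Domain class:

public class Accounts extends fflib_SObjectDomain {
    public Accounts(List<Account> records) { super(records); }

    public override void onBeforeUpdate(Map<Id, SObject> existingRecords) {
        // Only recalculate for Accounts whose Name or AnnualRevenue changed
        List<Account> changed = (List<Account>) getChangedRecords(
            new List<SObjectField> { Account.Name, Account.AnnualRevenue });
        // ... recalculation logic over just the changed records ...
    }
}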

Supporting EventBus.publish(list<SObject>) in Unit of Work (fflib_SObjectUnitOfWork)

Platform Events are becoming ever more popular in many situations. If you regard them as logically part of the unit of work your code is performing, this enhancement from Chris Mail is for you! You can now register platform events to be published in various scenarios. Chris has also provided bulkified versions of the following methods, nice!

    /**
     * Register a newly created SObject (Platform Event) instance to be published when commitWork is called
     *
     * @param record A newly created SObject (Platform Event) instance to be inserted during commitWork
     **/
    void registerPublishBeforeTransaction(SObject record);
    /**
     * Register a newly created SObject (Platform Event) instance to be published when commitWork has successfully
     * completed
     *
     * @param record A newly created SObject (Platform Event) instance to be inserted during commitWork
     **/
    void registerPublishAfterSuccessTransaction(SObject record);
    /**
     * Register a newly created SObject (Platform Event) instance to be published when commitWork has caused an error
     *
     * @param record A newly created SObject (Platform Event) instance to be inserted during commitWork
     **/
    void registerPublishAfterFailureTransaction(SObject record);
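A quick usage sketch (OrderShipped__e is an assumed Platform Event, not part of the library):

fflib_ISObjectUnitOfWork uow = Application.UnitOfWork.newInstance();
uow.registerPublishAfterSuccessTransaction(
    new OrderShipped__e(OrderNumber__c = 'ORD-0001'));
// ... register records and other work as usual ...
uow.commitWork(); // the event publishes only if commitWork succeeds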

Add custom DML for Application.UnitOfWork.newInstance call (fflib_Application)

It’s been possible for a while now to override the default means by which the fflib_SObjectUnitOfWork.commitWork method performs DML operations (for example if you wanted to do some additional pre/post processing or logging). However, if you have been using the Application class pattern to access your UOW (shorthand, and helps with mocking) then this has not been possible. Thanks to William Velzeboer you can now get the best of both worlds!

fflib_SObjectUnitOfWork.IDML myDML = new MyCustomDMLImpl();
fflib_ISObjectUnitOfWork uow = Application.UnitOfWork.newInstance(myDML);
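For completeness, MyCustomDMLImpl above is a class implementing the fflib_SObjectUnitOfWork.IDML interface. A rough sketch follows; check the interface in the version of the library you are using, as its method list has grown over time:

public class MyCustomDMLImpl implements fflib_SObjectUnitOfWork.IDML {
    public void dmlInsert(List<SObject> objList) {
        // Example of additional pre/post processing: simple logging
        System.debug('Inserting ' + objList.size() + ' records');
        insert objList;
    }
    public void dmlUpdate(List<SObject> objList) { update objList; }
    public void dmlDelete(List<SObject> objList) { delete objList; }
    public void eventPublish(List<SObject> objList) { EventBus.publish(objList); }
}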

Added methods to Unit of Work to be able to register record for upsert (fflib_SObjectUnitOfWork)

Unit Of Work is a very popular class and receives yet another enhancement in this batch, from Yury Bondarau. These two methods allow you to register records that will either be inserted or updated, as automatically determined by whether the records have an Id populated or not, aka a UOW upsert.

    /**
     * Register a new or existing record to be inserted or updated during the commitWork method
     *
     * @param record An new or existing record
     **/
    void registerUpsert(SObject record);
    /**
     * Register a list of mix of new and existing records to be upserted during the commitWork method
     *
     * @param records A list of mix of existing and new records
     **/
    void registerUpsert(List<SObject> records);
    /**
     * Register an existing record to be deleted during the commitWork method
     *
     * @param record An existing record
     **/
    void registerDeleted(SObject record);
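A quick usage sketch; whether each record already has an Id decides insert vs update at commitWork time (existingAccount here is assumed to be a previously queried record):

fflib_ISObjectUnitOfWork uow = Application.UnitOfWork.newInstance();
uow.registerUpsert(new Account(Name = 'New Co')); // no Id, so inserted
uow.registerUpsert(existingAccount);              // has an Id, so updated
uow.commitWork();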

Alleviates unit-test exception when Org’s email service is limited

Finally, long term mega fan of the library John Storey comes in with an ingenious fix to an Apex test failure which occurs when the org’s email deliverability ‘Access Level’ setting is not ‘All Email’. John leveraged an extensibility feature in the Unit Of Work to avoid the test being dependent on this org config, all while not losing any code coverage, sweet!

Last but not least, thank you Christian Coleman for fixing those annoying typos in the docs! 🙂



Managing Dependency Injection within Salesforce

When developing within Salesforce, dependencies are formed in many ways, not just those made explicitly when writing code, but also those formed by using declarative tools, such as defining Actions and Layouts. This blog introduces a new open source library I have been working on called Force DI. The goal is to simplify and, more importantly, consolidate where and how you configure certain runtime dependencies between Apex, Visualforce or Lightning Component code.

Forming dependencies at runtime instead of explicitly during development can be very advantageous. So whether you are attempting to decompose a large org into multiple DX packages or building a highly configurable solution, hopefully, you will find this library useful!

So what does the DI bit stand for?

The DI bit in Force DI stands for Dependency Injection, which is a form of IoC (Inversion of Control). Both are well-established patterns for providing the runtime glue between two points, basically the bit in the middle. Let’s start with an Apex example. In order to use DI, you need to forgo the use of the “new” operator at the point where you want to do the injection. For example, consider the following code:-

PaymentEngine engine = new PayPal();

In the above example, you are explicitly expressing a dependency, which not only means you have to deploy or package all your payment engines together, but also that you have hardcoded a finite set you support and thus forgone extensibility. With Force DI you can instead write:-

PaymentEngine engine = (PaymentEngine) di_Injector.Org.getInstance(PaymentEngine.class);
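To ground the example, PaymentEngine would typically be an Apex interface, with each implementation in its own class (and potentially its own package). A minimal sketch, with types assumed for illustration:

// In its own class file
public interface PaymentEngine {
    void authorize(Decimal amount);
}

// In its own class file, deployable independently of other engines
public class PayPal implements PaymentEngine {
    public void authorize(Decimal amount) {
        // PayPal-specific callout logic would go here
    }
}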

How does it know which class to instantiate then?

What’s happening here is that the Injector class is using binding configuration (also dynamically discovered) to find out which class to actually instantiate. This binding configuration can be admin controlled, packaged (e.g. a “PayPal Package”) and/or defined dynamically via code. Setting up binding config via code enables dynamic binding by reading other configuration (e.g. the user’s payment preference) and binding accordingly.

The key goal of DI is that the calling code is not concerned with how an instance is obtained, only what it does with it. The following shows how a declarative binding is expressed via the library’s Binding Custom Metadata Type:-

[Screenshot: Binding Custom Metadata Type record]

If this all seems a bit indirect, that’s the point! Because of this indirection, you can now choose to deploy/package other payment gateway implementations independently from each other as well as be sure that everywhere your other code needs a PaymentEngine the implementation is resolved consistently. For a more advanced OOP walkthrough see the code sample here.

Can this help me with other kinds of dependencies?

Yes! Let’s take the example of a Lightning Component used as an Action Override. Typically you would create a Lightning Component and associate it directly with an action override. However, this means that the object metadata, action override and the Lightning code (as well as whatever is dependent on that) must travel around together, rather than, for example, in separate DX packages. It also means that if you want to offer different variations of this action you would need to code all of that into the single component as well.

As before let’s review what the Lightning Component Action Override looks like without DI:-

<aura:component implements="lightning:actionOverride,force:hasSObjectName">
   <lightning:card title="Widget">
     <p class="slds-p-horizontal_small">Custom UI to Create a Widget ({!v.sObjectName})</p>
   </lightning:card>
</aura:component>

This component (and all its dependencies) would be directly referenced in the Action Override below:-

[Screenshot: Action Override referencing the component directly]

Now let us take a look at this again but using the Lightning c:injector component in its place:-

<aura:component implements="lightning:actionOverride,force:hasSObjectName">
   <c:di_injector bindingName="lc_actionWidgetNew">
      <c:di_injectorAttribute name="sObjectName" value="{!v.sObjectName}"/>
   </c:di_injector>
</aura:component>

To make things clearer when reviewing Lightning Components in the org, the above proxy component follows a generic naming convention, such as actionWidgetNew. It is this component that is bound to the Action Override, not the actual widget component. The Action Override now looks like this:-

[Screenshot: Action Override bound to the actionWidgetNew proxy component]

The binding configuration looks like this:-

[Screenshot: Binding Custom Metadata record for lc_actionWidgetNew]

Finally, the injected Lightning Component widgetWizard looks like this:-

<aura:component>
   <aura:attribute name="sObjectName" type="String"/>
   <lightning:card title="Widget">
     <p class="slds-p-horizontal_small">Custom UI to Create a Widget ({!v.sObjectName})</p>
   </lightning:card>
</aura:component>

Note: You have the ability to pass context through to the bound Lightning Component, just as the sObjectName attribute value was passed above. The c:injector component can be used in many other places, such as Quick Actions, Lightning App Builder Pages, and the Utility Bar. Check out this example page in the repo for another example.

What about my Visualforce page content, can I inject that?

Visualforce used by Actions and in Layouts can be injected in much the same way as above, with a VF page acting as the injector proxy using the Visualforce c:injector component. We will skip showing what things looked like before DI, as things follow much the same general pattern as the Lightning Component approach.

The following example shows the layoutWidgetInfo page, which is again somewhat generically named to indicate it’s an injector proxy and not a real page. It is this page that is referenced in the Widget object’s Layout:-

<apex:page standardController="Widget__c" extensions="di_InjectorController">
   <c:di_injector bindingName="vf_layoutWidgetInfo" parameters="{!standardController}"/>
</apex:page>

The following shows an alternative means to express binding configuration, via code. The ForceApp3Module class defines the bindings for a module/package of code where the Visualforce Component that actually implements the UI is stored. Note that the binding for vf_layoutWidgetInfo points to an Apex class (the controller), not the actual VF component to inject; the Provider inner class actually creates the specific component (via Dynamic Visualforce).

public class ForceApp3Module extends di_Module {

    public override void configure() {

        // Example named binding to a Visualforce component (via Provider)
        bind('vf_layoutWidgetInfo').visualforceComponent().to(WidgetInfoController.Provider.class);

        // Example SObject binding (can be used by trigger frameworks, see force-di-demo-trigger)
        bind(Account.getSObjectType()).apex().sequence(20).to(CheckBalanceAccountTrigger.class);

        // Example named binding to a Lightning component
        bind('lc_actionWidgetManage').lightningComponent().to('c:widgetManager');
    }
}

NOTE: The above binding configuration module class is itself injected into the org-wide Injector by a corresponding custom metadata Binding record here. You can also see other bindings being configured in the above example; see below for more on these.

The actual implementation of the injected Visualforce Component widgetInfo looks like this:-

<apex:component controller="WidgetInfoController">
  <apex:attribute name="standardController"
     type="ApexPages.StandardController"
     assignTo="{!StandardControllerValue}" description=""/>
  <h1>Success I have been injected! {!standardController.Id}</h1>
</apex:component>

Decomposition Examples

The examples shown above, and others, are contained in the same repo as the library (for now). Each of the root package directories, force-app-1, force-app-2, and force-app-3, helps illustrate how the point of injection vs the runtime binding can be split across the boundaries of a DX package, thus aiding decomposition. The force-di-trigger-demo (not shown below) also contains a sample trigger handler framework using the library’s ability to resolve multiple bindings (to trigger handlers) in a given sequence, thus supporting the best practice of a single trigger per object.

Further Background and Features

I must confess, when I started to research Java Dependency Injection (mainly via Java Guice) I was skeptical as to how much I could get done without custom annotation and reflection support in Apex. However, I am pretty pleased with the result, with how it has woven in with features like Custom Metadata Types, and with how the Visualforce and Lightning Component injectors have turned out. I plan to write future Wiki pages on the associated GitHub repo to share more details on the Force DI API. Meanwhile, here is a rundown of some of the more advanced features.

  • Provider Support
    Injectors by default only return one instance of the bound object, hence getInstance. Bindings that point to a class implementing the Provider interface (see inner interface) can override this, which also allows for the construction of classes that do not have default constructors or types not supported by Type.forName. This feature also works in conjunction with the ability to pass a parameter via the Apex Injector, e.g. Injector.Org.getInstance(PaymentEngine.class, someData); see the sketch after this list.
  • Parameters
    Each of the three Injectors permits the passing of parameter/context information into the bound class or component. The examples above illustrate this.
  • Modules, Programmatic Binding Configuration and Injector Scopes
    Binding Modules group programmatic bindings and allow you to hook programmatically into the initialization of the Injector. Modules use the Fluent style interface to express bindings very clearly. The force-app-3 package in the repo uses this approach to define the bindings shown in the VF example above. You can also take a look at a worked example here of how local (one-off) Injectors can be used, and here for a more complex OO example of how conditional bindings work.
  • StandardController Passthrough
    For Visualforce Component injections, the framework’s parameter-passing capabilities support passing the instance of the StandardController from the hosting page through into the injected component, as can be seen in the example above.
  • Binding Discovery by SObject vs Name
    The examples above utilize single bindings by a unique name. However, it is becoming quite common to adapt trigger frameworks to support DI and thus allow a single trigger to dynamically reach out to one or more handlers (perhaps installed in separate DX packages). This example shows how Force DI could be used in such a scenario.
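Regarding the Provider support above, a provider-backed binding might look something like the sketch below. Treat this as illustrative only: the exact interface name and signature are assumptions here, so check the inner Provider interface in the library source before using it.

// Illustrative only: assumes a Provider inner interface exposing a
// newInstance(Object) method, per the parameterized getInstance call above
public class PayPalProvider implements di_Binding.Provider {
    public Object newInstance(Object params) {
        // params is the optional context passed to Injector.Org.getInstance
        return new PayPal();
    }
}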

Conclusion

This blog has hopefully whetted your appetite to learn more! If so, head over to the repo and have a look through the samples in this blog and others. My next step is to wrap this up in a DX package to make it easier to get your hands on it; for now, download the repo and deploy via DX. I am also keen to explore what other aspects of Java Guice might make sense, such as the Linked Bindings feature.

Meanwhile, I would love feedback on the sample code and library thus far. Last but not least I would like to give a shout out to John Daniel and Doug Ayers for their great feedback during the initial development of the library and this blog. Enjoy!

 



Adding Clicks not Code Extensibility to your Apex with Lightning Flow

Building solutions on the Lightning Platform is a highly collaborative process, due to its unique ability to allow Trailblazers in a team to operate in no code, low code and/or code environments. Lightning Flow is a Salesforce native tool for no code automation and Apex is the native programming language of the platform — the code!

A flow author is able to create no-code solutions using the Cloud Flow Designer tool that can query and manipulate records, post Chatter posts, manage approvals, and even make external callouts. Conversely, using Salesforce DX the Apex developer can, of course, do all these things and more! This blog post presents a way in which two Trailblazers (meaning a flow author and an Apex developer) can consider options that allow them to share the work of both building and maintaining a solution.

Often a flow is considered the start of a process — typically and traditionally a UI wizard or, more latterly, something that is triggered when a record is updated (via Process Builder). We also know that, via invocable methods, flows and processes can call Apex. What you might not know is that the reverse is also true! Even if you have decided to build a process via Apex, you can still leverage flows within that Apex code. Such flows are known as autolaunched flows, as they have no UI.
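As a taste of what the full post covers, invoking an autolaunched flow from Apex looks something like this sketch (the flow name and variables here are hypothetical):

// Pass input variables to the flow, run it, then read an output variable back
Map<String, Object> inputs = new Map<String, Object> { 'OrderId' => orderId };
Flow.Interview.Calculate_Discounts interview =
    new Flow.Interview.Calculate_Discounts(inputs);
interview.start();
Decimal discount = (Decimal) interview.getVariableValue('Discount');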


I am honored to have this blog hosted on the Salesforce blog site. To continue reading the rest of this blog, head on over to the Salesforce.com blog post here.

 



Streaming Debug Logs to your console

Debug logs are a key tool for triaging and profiling on the Lightning Platform (formerly Force.com), both in development and production. While the Apex Interactive Debugger provides an interactive experience, sometimes you want to monitor, parse or filter logs. Maybe you are reproducing a bug and are watching for a certain SOQL query or method being executed, or you just want to filter output in different ways.

[Screenshot: debug log output streaming in the console]

A recent addition to the DX command line from Chris Wall is the ability to effectively stream debug logs from any org connected to DX to your console. This is similar to the experience of the Developer Console logs pane, but it effectively opens the logs and dumps them out as they are produced on the server for you automatically.

sfdx force:apex:log:tail

You can install the Salesforce DX CLI here. Note that you do not need to have a DX project to use this command.

In the following command line example, I have piped the output to another command (grep) that filters the output to show only USER_DEBUG log lines.

sfdx force:apex:log:tail --color | grep USER_DEBUG 

Pictures do not really do it justice, so here is a short demo video!

The command works against any org you have connected to the DX CLI, including production and sandbox orgs. However, if you run it from the same folder as a DX project it will use the currently configured default user/scratch org for that project.

Adding a bit of color to your debug logs!

The --color parameter used above enables some basic color highlighting for methods, constructors, variable assignments etc.

[Screenshot: color-highlighted debug log output]

You can also customize your own colors by setting the SFDX_APEX_LOG_COLOR_MAP environment variable to an absolute file path to a JSON file per the format shown below.

{
    CONSTRUCTOR_: 'magenta',
    EXCEPTION_: 'red',
    FATAL_: 'red',
    METHOD_: 'blue',
    SOQL_: 'yellow',
    USER_: 'green',
    VARIABLE_: 'cyan'
}
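For example, on macOS or Linux you might point the variable at your own file before running the tail command (the file path here is hypothetical):

export SFDX_APEX_LOG_COLOR_MAP=/Users/me/logcolors.json
sfdx force:apex:log:tail --color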

Power to the pipe!

One of the most exciting features for me is the ability to pipe debug logs. Maybe you want to parse out some information to profile how many SOQL statements have been used, or aggregate timestamp values (the bit in brackets after the time!) to do some performance profiling… I am looking forward to seeing what folks do with this, please share!
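As one sketch of the SOQL-counting idea using standard Unix tools: SOQL_EXECUTE_BEGIN is the debug log event emitted when a SOQL statement starts, and nl prefixes each match with a running count as it streams by.

sfdx force:apex:log:tail | grep --line-buffered SOQL_EXECUTE_BEGIN | nl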

Anything else?

The --debuglevel parameter is optional but allows you to define your own debug level by inserting records into the TraceFlag object (via the DX CLI command force:data:record:create). Finally, you can run the command with the --help parameter to get the latest help.

Usage: sfdx force:apex:log:tail [-c] [-d ] [-s] [-u ] [--json] [--loglevel ]

start debug logging and display logs

Flags:

 -c, --color                          colorize noteworthy log lines
 -d, --debuglevel DEBUGLEVEL          debug level for trace flag
 -s, --skiptraceflag                  skip trace flag setup
 -u, --targetusername TARGETUSERNAME  username or alias for the target org;
                                      overrides default target org
 --json                               format output as json
 --loglevel LOGLEVEL                  logging level for this command invocation
                                      (error*,trace,debug,info,warn,fatal)