Andy in the Cloud

From BBC Basic to Force.com and beyond…



Where to place Validation code in an Apex Trigger?

Quite often when I answer questions on Salesforce StackExchange they prompt me to consider future blog posts. This question has been sitting on my blog list for a while and I'm finally going to tackle the ‘performing validation in the after’ comment in this short blog post, so here goes!

Salesforce offers Apex developers two phases within an Apex Trigger, before and after. Most examples I see tend to perform validation code in the before phase of the trigger; even the Salesforce examples show this. However there can be implications with this that are not at first obvious. Let's look at an example; first, here is my object…

[Screenshot: the LifeTheUniverseAndEverything__c custom object definition]

Now the Apex Trigger doing some validation…

trigger LifeTheUniverseAndEverythingTrigger1 on LifeTheUniverseAndEverything__c
   (before insert) {

	// Make sure if there is an answer given it's always 42!
	for(LifeTheUniverseAndEverything__c record : Trigger.new) {
		if(record.Answer__c!=null && record.Answer__c != 42) {
			record.Answer__c.addError('Answer is not 42!');
		}
	}
}

The following test method asserts that a valid value has been written to the database. In reality you would also have tests that assert invalid values are rejected, though the test method below will suffice for this blog.

	@IsTest
	private static void testValidSucceeds() {

		// Insert a valid answer
		LifeTheUniverseAndEverything__c
			lifeTheUniverseAndEverything = new LifeTheUniverseAndEverything__c();
		lifeTheUniverseAndEverything.Answer__c = 42;
		insert lifeTheUniverseAndEverything;

		// Is the answer still the same, surely nobody could have changed it right?
		System.assertEquals(42,
			[select Answer__c
			   from LifeTheUniverseAndEverything__c
			   where Id = :lifeTheUniverseAndEverything.Id][0].Answer__c);
	}

This all works perfectly so far!

[Screenshot: the test method passing]

What harm can a second Apex Trigger do?

Once developed, Apex Triggers are either deployed into a Production org or packaged within an AppExchange package, which is then installed. In the latter case such Apex Triggers cannot be changed. The consideration raised here arises if a second Apex Trigger is created on the object. There can be a few reasons for this, especially if the existing Apex Trigger is managed and cannot be modified, or if a developer simply chooses to add another Apex Trigger.

So what harm can a second Apex Trigger on the same object really cause? Well, like the first Apex Trigger it has the ability to change field values as well as validate them. As per the additional considerations at the bottom of the Salesforce trigger invocation documentation, Apex Triggers are not guaranteed to fire in any particular order. So what would happen if we add a second trigger, like the one below, which attempts to modify the answer to an invalid value?

trigger LifeTheUniverseAndEverythingTrigger2 on LifeTheUniverseAndEverything__c
  (before insert) {

	// I insist that the answer is actually 43!
	for(LifeTheUniverseAndEverything__c record : Trigger.new) {
		record.Answer__c = 43;
	}
}

At the time of writing in my org, it appears that this new trigger (despite its name) is actually run by the platform before my first validation trigger. Since the validation code executes after this new trigger changes the value, the validation still catches it. So while that's technically not what this test method was testing, it shows for the purposes of this blog that the validation is still working, phew!

[Screenshot: the test method still passing]

Apex Trigger execution order matters…

So while all seems well up until this point, remember that we cannot guarantee that Salesforce will always run our validation trigger last. What if our validation trigger ran first? Since we cannot determine the order of invocation of triggers, what we can do to illustrate the effects of this is simply switch the code in the two example triggers, like so.

trigger LifeTheUniverseAndEverythingTrigger2 on LifeTheUniverseAndEverything__c
  (before insert) {

  	// Make sure if there is an answer given it's always 42!
	for(LifeTheUniverseAndEverything__c record : Trigger.new) {
		if(record.Answer__c!=null && record.Answer__c != 42) {
			record.Answer__c.addError('Answer is not 42!');
		}
	}
}

trigger LifeTheUniverseAndEverythingTrigger1 on LifeTheUniverseAndEverything__c
  (before insert, before update) {

	// I insist that the answer is actually 43!
	for(LifeTheUniverseAndEverything__c record : Trigger.new) {
		record.Answer__c = 43;
	}
}

Having effectively emulated the platform running the two triggers in a different order (validation trigger first, then the field modifying trigger second), our test asserts now show that the validation logic in this scenario failed to do its job and invalid data reached the database. Not good!

[Screenshot: the test method failing its assertion]

So what's the solution to making my triggers bulletproof?

So what is the solution to avoiding this? Well, it's pretty simple really: move your logic into the after phase. Even though the triggers may still fire in different orders, one thing is certain. Nothing, and I mean nothing, can change the records in the after phase of a trigger execution, meaning you can reliably check the field values without fear of them changing later!

trigger LifeTheUniverseAndEverythingTrigger2 on LifeTheUniverseAndEverything__c
  (after insert) {

  	// Make sure if there is an answer given it's always 42!
	for(LifeTheUniverseAndEverything__c record : Trigger.new) {
		if(record.Answer__c!=null && record.Answer__c != 42) {
			record.Answer__c.addError('Answer is not 42!');
		}
	}
}

Thus with this change in place, even though the other trigger attempts to change the inserted value to 43, the validation (now running in the after phase) still prevents records being inserted to the database. Success!

[Screenshot: the test method passing]

Summary

This approach is actually referenced in a few Trigger frameworks such as Tony Scott’s ‘Trigger Pattern for Tidy, Streamlined, Bulkified Triggers’ and the Apex Enterprise Patterns Domain pattern.

One downside here is that for error scenarios the record execution potentially runs through many more platform features (workflow and other processes) before your validation eventually stops proceedings and asks the platform to roll everything back. That said, once users learn your system, error scenarios are hopefully the less frequent use cases.

So if you feel the scenario described above is likely to occur (particularly if you're developing a managed package, where it's likely subscriber developers will add triggers), you should seriously consider leveraging the after phase of triggers for validation. Note this also applies to the update and delete events in triggers.
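For example, here is a minimal sketch of the same validation applied in the after update phase (the trigger name is hypothetical). Note that records in Trigger.new are read-only in the after phase, but addError still blocks the save:

trigger LifeTheUniverseAndEverythingTrigger3 on LifeTheUniverseAndEverything__c
  (after update) {

	// Validate the final field values; nothing can change them after this phase
	for(LifeTheUniverseAndEverything__c record : Trigger.new) {
		if(record.Answer__c != null && record.Answer__c != 42) {
			record.Answer__c.addError('Answer is not 42!');
		}
	}
}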

 



Unit Testing, Apex Enterprise Patterns and ApexMocks – Part 2

In Part 1 of this blog series I introduced a new means of applying true unit testing to Apex code leveraging the Apex Enterprise Patterns, covering the differences between true unit testing vs integration testing and how the lines can get a little blurred when writing Apex test methods.

If you're following along, you should be all set to start writing true unit tests against your controller, service and domain classes, leveraging the inbuilt dependency injection framework provided by the Application class introduced in the last blog, and injecting mock implementations of service, domain, selector and unit of work classes accordingly.

What are Mock classes and why do I need them?

Depending on the type of class you're unit testing, you'll need to mock different dependencies so that you don't have to worry about the data setup of those classes while you're busy putting your hard work into testing your specific class.

[Diagram: Unit Testing]

In object-oriented programming, mock objects are simulated objects that mimic the behavior of real objects in controlled ways. A programmer typically creates a mock object to test the behavior of some other object, in much the same way that a car designer uses a crash test dummy to simulate the dynamic behavior of a human in vehicle impacts. (Wikipedia)

In this blog we are going to focus on an example unit test method for a Service, which requires that we mock the unit of work, selector and domain classes it depends on (unit tests for these classes will of course be written as well). Let's take a look first at the overall test method, then break it down bit by bit. The following test method makes no SOQL queries or DML to accomplish its goal of testing the service layer method.

	@IsTest
	private static void callingServiceShouldCallSelectorApplyDiscountInDomainAndCommit()
	{
		// Create mocks
		fflib_ApexMocks mocks = new fflib_ApexMocks();
		fflib_ISObjectUnitOfWork uowMock = new fflib_SObjectMocks.SObjectUnitOfWork(mocks);
		IOpportunities domainMock = new Mocks.Opportunities(mocks);
		IOpportunitiesSelector selectorMock = new Mocks.OpportunitiesSelector(mocks);

		// Given
		mocks.startStubbing();
		List<Opportunity> testOppsList = new List<Opportunity> { 
			new Opportunity(
				Id = fflib_IDGenerator.generate(Opportunity.SObjectType),
				Name = 'Test Opportunity',
				StageName = 'Open',
				Amount = 1000,
				CloseDate = System.today()) };
		Set<Id> testOppsSet = new Map<Id, Opportunity>(testOppsList).keySet();
		mocks.when(domainMock.sObjectType()).thenReturn(Opportunity.SObjectType);
		mocks.when(selectorMock.sObjectType()).thenReturn(Opportunity.SObjectType);
		mocks.when(selectorMock.selectByIdWithProducts(testOppsSet)).thenReturn(testOppsList);
		mocks.stopStubbing();
		Decimal discountPercent = 10;
		Application.UnitOfWork.setMock(uowMock);
		Application.Domain.setMock(domainMock);
		Application.Selector.setMock(selectorMock);

		// When
		OpportunitiesService.applyDiscounts(testOppsSet, discountPercent);

		// Then
		((IOpportunitiesSelector) 
			mocks.verify(selectorMock)).selectByIdWithProducts(testOppsSet);
		((IOpportunities) 
			mocks.verify(domainMock)).applyDiscount(discountPercent, uowMock);
		((fflib_ISObjectUnitOfWork) 
			mocks.verify(uowMock, 1)).commitWork();
	}

First of all, you'll notice the test method name is a little longer than you might be used to, and the general layout of the test splits code into Given, When and Then blocks. These conventions help add some documentation, readability and consistency to test methods, as well as helping you focus on what it is you're testing and assuming to happen. The convention is one defined by Martin Fowler; you can read more about GivenWhenThen here. The test method name itself stems from a desire to express the behaviour the test is confirming.

Generating and using Mock Classes

The Java based Mockito framework leverages the Java runtime's capability to dynamically create mock implementations. However the Apex runtime does not have any support for this. Instead, ApexMocks uses source code generation to generate the mock classes it requires, based on the interfaces you defined in my earlier post.

The patterns library also comes with its own mock implementation of the Unit of Work for you to use, as well as some base mock classes for your selector and domain mocks (made known to the tool below). The following code at the top of the test method creates the necessary mock instances that will be configured and injected into the execution.

// Create mocks
fflib_ApexMocks mocks = new fflib_ApexMocks();
fflib_ISObjectUnitOfWork uowMock = new fflib_SObjectMocks.SObjectUnitOfWork(mocks);
IOpportunities domainMock = new Mocks.Opportunities(mocks);
IOpportunitiesSelector selectorMock = new Mocks.OpportunitiesSelector(mocks);

To generate the Mocks class used above, use the ApexMocks Generator; you can run it via the Ant tool. The apex-mocks-generator-3.1.2.jar file can be downloaded from the ApexMocks repo here.

<?xml version="1.0" encoding="UTF-8"?>
<project name="Apex Commons Sample Application" default="generate.mocks" basedir=".">

	<target name="generate.mocks">
		<java classname="com.financialforce.apexmocks.ApexMockGenerator">
			<classpath>
				<pathelement location="${basedir}/bin/apex-mocks-generator-3.1.2.jar"/>
			</classpath>
			<arg value="${basedir}/fflib-sample-code/src/classes"/>
			<arg value="${basedir}/interfacemocks.properties"/>
			<arg value="Mocks"/>
			<arg value="${basedir}/fflib-sample-code/src/classes"/>
		</java>
	</target>

</project>

You can configure the output of the tool using a properties file (you can find more information here).

IOpportunities=Opportunities:fflib_SObjectMocks.SObjectDomain
IOpportunitiesSelector=OpportunitiesSelector:fflib_SObjectMocks.SObjectSelector
IOpportunitiesService=OpportunitiesService

The generated mock classes are contained as inner classes in the Mocks class and also implement the interfaces you define, just as the real classes do. You can choose to add the above Ant tool call into your build scripts, or simply retain the class in your org, refreshing it by re-running the tool whenever your interfaces change.


/* Generated by apex-mocks-generator version 3.1.2 */
@isTest
public class Mocks
{
	public class OpportunitiesService 
		implements IOpportunitiesService
	{
		// Mock implementations of the interface methods...
	}

	public class OpportunitiesSelector extends fflib_SObjectMocks.SObjectSelector 
		implements IOpportunitiesSelector
	{
		// Mock implementations of the interface methods...
	}

	public class Opportunities extends fflib_SObjectMocks.SObjectDomain 
		implements IOpportunities
	{
		// Mock implementations of the interface methods...
	}
}

To make it easier for those wanting to try out the sample application, I have committed the generated Mocks class here, so you can review the full generated code if you wish. Though you can of course edit it, I would recommend against it, as you will not be able to re-run the generator without losing your changes.

Mocking method responses

Mock classes are dumb by default, so of course you cannot inject them into the upcoming code execution and expect them to work. You have to tell them how to respond when called. They will, however, record when their methods have been called, for you to check or assert later. Using the framework you can tell a mock method what to return, or what exceptions to throw, when the class you're testing calls it.

So in effect you can teach them to emulate their real counterparts. For example, when a Service method calls a Selector method it can return some in-memory records, as opposed to having to have them set up on the database. Or when the unit of work is used, it will record method invocations, as opposed to writing to the database.

Here is an example of configuring a Selector mock method to return test record data. Note that you also need to inform the Selector mock what type of SObject it relates to; this is also the case when mocking the Domain layer. Finally, be sure to place your mock configuration code between calls to startStubbing and stopStubbing. You can read much more about the ApexMocks API here, which resembles the Java Mockito API as well.

// Given
mocks.startStubbing();
List<Opportunity> testOppsList = new List<Opportunity> { 
	new Opportunity(
		Id = fflib_IDGenerator.generate(Opportunity.SObjectType),
		Name = 'Test Opportunity',
		StageName = 'Open',
		Amount = 1000,
		CloseDate = System.today()) };
Set<Id> testOppsSet = new Map<Id, Opportunity>(testOppsList).keySet();
mocks.when(domainMock.sObjectType()).thenReturn(Opportunity.SObjectType);
mocks.when(selectorMock.sObjectType()).thenReturn(Opportunity.SObjectType);
mocks.when(selectorMock.selectByIdWithProducts(testOppsSet)).thenReturn(testOppsList);
mocks.stopStubbing();
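Mocks can also be configured to throw exceptions, for example to test error handling around a failing Selector. Here is a minimal sketch assuming the ApexMocks doThrowWhen API; TestException is a hypothetical exception class defined in the test.

// Hypothetical exception class, e.g. declared as an inner class of the test
// public class TestException extends Exception {}

// Configure the selector mock to throw when queried (within the stubbing calls)
mocks.startStubbing();
((IOpportunitiesSelector) mocks.doThrowWhen(
	new TestException('Query failed'), selectorMock))
		.selectByIdWithProducts(testOppsSet);
mocks.stopStubbing();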

TIP: If you want to mock sub-select queries returned from a selector take a look at this.

Injecting your mock implementations

Finally, before you call the method you want to test, ensure you have injected the mock implementations, so that the calls to the Application class factory methods will return your mock instances over the real implementations.

Application.UnitOfWork.setMock(uowMock);
Application.Domain.setMock(domainMock);
Application.Selector.setMock(selectorMock);

Testing your method and asserting the results

Calling your method to test is as straightforward as you would expect. If it returns values or modifies parameters you can assert those values. However, the ApexMocks framework also allows you to add behavioural assertions that add further confidence that the code you're testing is working the way it should. In this case we want to assert, or verify (to use mocking speak), that the correct information was passed on to the domain and selector classes.

// When
OpportunitiesService.applyDiscounts(testOppsSet, discountPercent);

// Then
((IOpportunitiesSelector) 
	mocks.verify(selectorMock)).selectByIdWithProducts(testOppsSet);
((IOpportunities) 
	mocks.verify(domainMock)).applyDiscount(discountPercent, uowMock);
((fflib_ISObjectUnitOfWork) 
	mocks.verify(uowMock, 1)).commitWork();

TIP: You can verify method calls have been made and also how many times. For example checking a method is only called a specific number of times can help add some level of performance and optimisation checking into your tests.
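For example, reusing the uowMock from the test above, a short sketch asserting that a method was never called (an expected count of zero):

// Verify commitWork was never invoked, e.g. in a test of an error path
// where no records should be committed to the database
((fflib_ISObjectUnitOfWork) mocks.verify(uowMock, 0)).commitWork();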

Summary

The full API for ApexMocks is outside the scope of this blog series, and frankly Paul Hardaker and Jessie Altman have done a much better job; take a look at the full list of documentation links here. Finally, keep in mind my comments at the start of this series: this is not to be seen as a total alternative to traditional Apex test method writing, merely another option to consider when you want a more focused means to test specific methods in more varied ways, without incurring the development and execution costs of having to set up all of your application's data in each test method.



Unit Testing, Apex Enterprise Patterns and ApexMocks – Part 1

If you attended my Advanced Apex Enterprise Patterns session at Dreamforce 2014 you'll have heard me highlight the difference between Apex tests that are written as true unit tests vs those written in a way that more resembles an integration test. As Paul Hardaker (ApexMocks author) once pointed out to me, the reality is that technically Apex developers often end up writing only integration tests.

Let's review Wikipedia's definition of unit tests:

Intuitively, one can view a unit as the smallest testable part of an application. In procedural programming, a unit could be an entire module, but it is more commonly an individual function or procedure. In object-oriented programming, a unit is often an entire interface, such as a class, but could be an individual method. Unit tests are short code fragments created by programmers or occasionally by white box testers during the development process

Does this describe an Apex Test you have written recently?

Let's review what Apex tests typically require us to perform…

  • Setup of application data for every test method
  • Execution of more code than we care about testing at the time
  • Tests that are often not varied enough, as they can take a long time to run!

Does the following Wikipedia snippet describing integration tests more accurately describe this?

Integration testing (sometimes called integration and testing, abbreviated I&T) is the phase in software testing in which individual software modules are combined and tested as a group. It occurs after unit testing and before validation testing. Integration testing takes as its input modules that have been unit tested, groups them in larger aggregates, applies tests defined in an integration test plan to those aggregates, and delivers as its output the integrated system ready for system testing

The challenge of writing true unit tests in Apex can also leave those wishing to follow practices like TDD struggling, due to the lack of dependency injection and mocking support in the Apex runtime. We start to desire mocking support such as we find, for example, in Java's Mockito (the inspiration behind ApexMocks).

The lines between unit vs integration testing, and which we should use and when, can get blurred, since Force.com does need Apex tests to invoke Apex Triggers for coverage (requiring actual test integration with the database), and if you're using Workflows a lot you may want the behaviour of these reflected in your tests. So one cannot completely move away from writing integration tests of course. But is there a better way for us to regain some of the benefits other platforms enjoy in this area, for the times we feel it would benefit us?

Problems writing Unit Tests for complex code bases…

[Diagram: Integration Testing]
The problem is that a true unit test aims to test a small unit of the code, typically a specific method. However, if this method ends up querying the database, we need to have inserted those records prior to calling the method and then assert the records afterwards. If you're familiar with Apex Enterprise Patterns, you'll recognise the following separation of concerns in this diagram, which shows clearly what code might be executed in a controller test for example.

For complex applications this approach per test can become quite an overhead before you even get to call your controller method and assert the results! Let's face it, as we have to wait longer and longer for such tests, this inhibits our desire to write further, more complex tests that may more thoroughly test the code with different data combinations and use cases.

What if we could emulate the database layer somehow?

Well, those of you familiar with Apex Enterprise Patterns will know it's big on separation of concerns. Thus aspects such as querying the database and updating it are encapsulated away in so-called Selectors and the Unit Of Work. Just prior to Dreamforce 2014, the patterns introduced the Application class; this provides a single application-wide means to access the Service, Domain, Selector and Unit Of Work implementations, as opposed to directly instantiating them.

If you've been reading my book, you'll know that this also provides access to new Object-Oriented Programming possibilities, such as polymorphism between the Service layer and Domain layer, allowing functional frameworks and greater reuse to be constructed within the code base.

In this two part blog series, we are focusing on the role of the Application class and its setMock methods. These methods, modelled after the platform's Test.setMock method (for mocking HTTP comms), provide a means to mock the core architectural layers of an application which is based on the Apex Enterprise Patterns. By allowing mocking in these areas, we can write unit tests that focus only on the behaviour of the controller, service or domain class we are testing.

[Diagram: Unit Testing]

Preparing your Service, Domain and Selector classes for mocking

As described in my Dreamforce 2014 presentation, Apex Interfaces are key to implementing mocking. You must define these in order to allow the mocking framework to dynamically substitute different implementations. The patterns library also provides base interfaces that reflect the base class methods for the Selector and Domain layers. The sample application contains a full example of these interfaces and how they are applied.


// Service layer interface

public interface IOpportunitiesService
{
	void applyDiscounts(Set<ID> opportunityIds, Decimal discountPercentage);

	Set<Id> createInvoices(Set<ID> opportunityIds, Decimal discountPercentage);

	Id submitInvoicingJob();
}

// Domain layer interface

public interface IOpportunities extends fflib_ISObjectDomain
{
	void applyDiscount(Decimal discountPercentage, fflib_ISObjectUnitOfWork uow);
}

// Selector layer interface

public interface IOpportunitiesSelector extends fflib_ISObjectSelector
{
	List<Opportunity> selectByIdWithProducts(Set<ID> idSet);
}

First up, apply the Domain class interface as follows…


// Implementing Domain layer interface

public class Opportunities extends fflib_SObjectDomain
	implements IOpportunities {

	// Rest of the class
}

Next is the Service class. Since the service layer remains stateless and global, I prefer to retain the static method style. Since you cannot apply interfaces to static methods, I use the following convention, though I've seen others use inner classes. First create a new class, something like OpportunitiesServiceImpl, copy the implementation of the existing service into it and remove the static modifier from the method signatures before applying the interface. The original service class then becomes a stub for the service entry point.


// Implementing Service layer interface

public class OpportunitiesServiceImpl
	implements IOpportunitiesService
{
	public void applyDiscounts(Set<ID> opportunityIds, Decimal discountPercentage)
	{
		// Rest of the method...
	}

	public Set<Id> createInvoices(Set<ID> opportunityIds, Decimal discountPercentage)
	{
		// Rest of the method...
	}

	public Id submitInvoicingJob()
	{
		// Rest of the method...
	}
}

// Service layer stub

global with sharing class OpportunitiesService
{
	global static void applyDiscounts(Set<ID> opportunityIds, Decimal discountPercentage)
	{
		service().applyDiscounts(opportunityIds, discountPercentage);
	}

	global static Set<Id> createInvoices(Set<ID> opportunityIds, Decimal discountPercentage)
	{
		return service().createInvoices(opportunityIds, discountPercentage);
	}

	global static Id submitInvoicingJob()
	{
		return service().submitInvoicingJob();
	}	

	private static IOpportunitiesService service()
	{
		return new OpportunitiesServiceImpl();
	}
}

Finally the Selector class; like the Domain class, it is a simple matter of applying the interface.

public class OpportunitiesSelector extends fflib_SObjectSelector
	implements IOpportunitiesSelector
{
	// Rest of the class
}

Implementing Application.cls

Once you have defined and implemented your interfaces, you need to ensure there is a means to switch the different implementations of them at runtime, between the real implementation and the mock implementation as required within a test context. To do this a factory pattern is applied, for calling logic to obtain the appropriate instance. Define the Application class as follows, using the factory classes provided in the library. Also note that the Unit Of Work is defined here in a single maintainable place.

public class Application
{
	// Configure and create the UnitOfWorkFactory for this Application
	public static final fflib_Application.UnitOfWorkFactory UnitOfWork =
		new fflib_Application.UnitOfWorkFactory(
				new List<SObjectType> {
					Invoice__c.SObjectType,
					InvoiceLine__c.SObjectType,
					Opportunity.SObjectType,
					Product2.SObjectType,
					PricebookEntry.SObjectType,
					OpportunityLineItem.SObjectType });	

	// Configure and create the ServiceFactory for this Application
	public static final fflib_Application.ServiceFactory Service =
		new fflib_Application.ServiceFactory(
			new Map<Type, Type> {
					IOpportunitiesService.class => OpportunitiesServiceImpl.class,
					IInvoicingService.class => InvoicingServiceImpl.class });

	// Configure and create the SelectorFactory for this Application
	public static final fflib_Application.SelectorFactory Selector =
		new fflib_Application.SelectorFactory(
			new Map<SObjectType, Type> {
					Opportunity.SObjectType => OpportunitiesSelector.class,
					OpportunityLineItem.SObjectType => OpportunityLineItemsSelector.class,
					PricebookEntry.SObjectType => PricebookEntriesSelector.class,
					Pricebook2.SObjectType => PricebooksSelector.class,
					Product2.SObjectType => ProductsSelector.class,
					User.sObjectType => UsersSelector.class });

	// Configure and create the DomainFactory for this Application
	public static final fflib_Application.DomainFactory Domain =
		new fflib_Application.DomainFactory(
			Application.Selector,
			new Map<SObjectType, Type> {
					Opportunity.SObjectType => Opportunities.Constructor.class,
					OpportunityLineItem.SObjectType => OpportunityLineItems.Constructor.class,
					Account.SObjectType => Accounts.Constructor.class,
					DeveloperWorkItem__c.SObjectType => DeveloperWorkItems.class });
}

Using Application.cls

If you're adapting an existing code base, be sure to leverage the Application class factory methods in your application code; seek out code which is explicitly instantiating the classes of your Domain, Selector and Unit Of Work usage. Note you don't need to worry about Service class references, since this is now just a stub entry point.

The following code shows how to wrap the Application factory methods using convenience methods that can help avoid repeated casting to the interfaces. It's up to you if you adopt these or not; the effect is the same regardless. The modification to the service method shown above, however, is required.


// Service class Application factory usage

global with sharing class OpportunitiesService
{
	private static IOpportunitiesService service()
	{
		return (IOpportunitiesService) Application.Service.newInstance(IOpportunitiesService.class);
	}
}

// Domain class Application factory helper

public class Opportunities extends fflib_SObjectDomain
	implements IOpportunities
{
	public static IOpportunities newInstance(List<Opportunity> sObjectList)
	{
		return (IOpportunities) Application.Domain.newInstance(sObjectList);
	}
}

// Selector class Application factory helper

public with sharing class OpportunitiesSelector extends fflib_SObjectSelector
	implements IOpportunitiesSelector
{
	public static IOpportunitiesSelector newInstance()
	{
		return (IOpportunitiesSelector) Application.Selector.newInstance(Opportunity.SObjectType);
	}
}

With these methods in place, reference them and those on the Application class as shown in the following example.

public class OpportunitiesServiceImpl
	implements IOpportunitiesService
{
	public void applyDiscounts(Set<ID> opportunityIds, Decimal discountPercentage)
	{
		// Create unit of work to capture work and commit it under one transaction
		fflib_ISObjectUnitOfWork uow = Application.UnitOfWork.newInstance();

		// Query Opportunities
		List<Opportunity> oppRecords =
			OpportunitiesSelector.newInstance().selectByIdWithProducts(opportunityIds);

		// Apply discount via Opportunities domain class behaviour
		IOpportunities opps = Opportunities.newInstance(oppRecords);
		opps.applyDiscount(discountPercentage, uow);

		// Commit updates to opportunities
		uow.commitWork();
	}
}

The Selector factory also carries some useful generic helpers; these will internally utilise the Selector classes as defined in the Application class definition above.


List<Opportunity> opps =
   (List<Opportunity>) Application.Selector.selectById(myOppIds);

List<Account> accts =
   (List<Account>) Application.Selector.selectByRelationship(opps, Account.OpportunityId);

Summary and Part Two

In this blog we've looked at how to define and apply interfaces between your service, domain, selector and unit of work dependencies. Using a factory pattern, through the indirection of the Application class, we have implemented an injection framework within the definition of these enterprise application separation of concerns.

I've seen dependency injection done via constructor injection; my personal preference is to use the approach shown in this blog. My motivation for this lies with the fact that these pattern layers are well enough known throughout the application code base, and the Application class supports other facilities, such as polymorphic instantiation of domain classes and helper methods as shown above on the Selector factory.
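For contrast, here is a minimal sketch of what the constructor injection alternative might look like (ExampleService is hypothetical and not part of the sample application):

public class ExampleService
{
	private IOpportunitiesSelector selector;

	// The dependency is supplied by the caller: production code passes the
	// real selector, while a test passes a mock implementation instead
	public ExampleService(IOpportunitiesSelector selector)
	{
		this.selector = selector;
	}

	public void applyDiscounts(Set<Id> opportunityIds, Decimal discountPercentage)
	{
		// Uses the injected selector rather than resolving it via Application
		List<Opportunity> oppRecords = selector.selectByIdWithProducts(opportunityIds);
		// ...
	}
}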

In the second part of this series we will look at how to write true unit tests for your controller, service and domain classes, leveraging the amazing ApexMocks library! If in the meantime you want to get a glimpse of what this might look like, take a wander through the Apex Enterprise Patterns sample application tests here and here.


// Provide a mock instance of a Unit of Work
Application.UnitOfWork.setMock(uowMock);

// Provide a mock instance of a Domain class
Application.Domain.setMock(domainMock);

// Provide a mock instance of a Selector class
Application.Selector.setMock(selectorMock);

// Provide a mock instance of a Service class
Application.Service.setMock(IOpportunitiesService.class, mockService);



Extending Lightning Process Builder and Visual Workflow with Apex

I love empowering as many people as possible to experience the power of Salesforce's hugely customisable and extensible platform. In fact this platform has taught me that creating a great solution is not just about designing how we think a solution would be used, but also about empowering how others subsequently use it in conjunction with the platform, to create totally new use cases we never dreamt of! If you're not thinking about how what you're building sits “within” the platform, you're not fully embracing the true power of the platform.

So what is the next great platform feature and what has it got to do with writing Apex code? Well, for me it's actually two features: one has been around for a while, Visual Workflow; the other is new and shiny and is thus getting more attention, Lightning Process Builder. However, as we will see in this blog, there is a single way in which you can expose your finely crafted Apex functionality to users of both tools. Both of them have their merits and in fact can be used together for a super-charged clicks-not-code experience!

IMPORTANT NOTE: Regardless of whether you're building a solution in a Sandbox or as part of a packaged AppExchange solution, you really need to understand this!

What is the difference between Visual Workflow and Lightning Process Builder?

At first sight these two tools look to achieve similar goals, letting the user turn a business process into steps to be executed one after another applying conditional logic and branching as needed.

One of the biggest challenges I see with the power these tools bring is educating users on the use cases they can apply them to. While technically exposing Apex code to them is no different, it's important to know some of the differences between how they are used when talking to people about how to best leverage them with your extensions.

  • UI based Processes. Here the big difference is that Visual Workflow is mainly about building user interfaces that allow users to progress through your steps via a wizard style UI, created by the platform for you based on the steps you've defined. Such UIs can be started when the end user clicks on a tab, button or link to start the Visual Workflow (see my other blogs).
    [Screenshot: a Visual Workflow UI]
  • Record based Processes. In contrast, Process Builder is about steps you define that happen behind the scenes when users are manipulating records such as Accounts, Opportunities and in fact any Standard or Custom object you choose. This also includes users of Salesforce1 Mobile and the Salesforce APIs. As such, Process Builder actually has more of an overlap with the capabilities of classic Workflow Rules.
    [Screenshot: Process Builder action types]
  • Complexity. Process Builder is simpler compared to Flow. That's not to say Flow is harder to use; it has just historically had more features, for variables, branching and looping over information.
  • Similarities. In terms of the types of steps you can perform within each, there are some overlaps and some obvious differences; for example, there are no UI related steps in Process Builder, yet both have some support for steps that can be used to create or update records. They can both call out to Apex code, more on this later…
  • Power up with both! If you look closely at the screenshot above, you'll see that Flows, short for Visual Workflow, can be selected as an Action Type within Process Builder! In this case your Flow cannot contain any visual steps, only logic steps, since Process Builder is not a UI tool. Such Flows are known as Autolaunched Flows (previously known as ‘Headless Flows’); you can read more here. This capability allows you to model more complex business processes in Process Builder.

You can read more details on further differences here from Salesforce.

What parts of your code should you expose?

Both tools have the ability to integrate at a record level with your existing custom objects, thus any logic you've placed in Apex Triggers, Validation Rules etc. is also applied. So as you read this, you're already extending these tools! However, such logic is of course related to changes in your solution's record data. What about other logic?

  • Custom Buttons, Visualforce Buttons, If you’ve developed a Custom Button or Visualforce page with Apex logic behind it you may want to consider if that functionality might also benefit from being available through these tools. Allowing automation and/or alternative UI’s to be build without the need to involve a custom Visualforce or Apex development or changes to your based solution.
  • Shared Calculations and Sub-Process Logic. You may have historically written a single piece of Apex code that orchestrates a larger process in a fixed order, via a series of calculations and/or by chaining together other Apex sub-processes. Consider whether exposing this code in a more granular way would give users more flexibility in using your solution in different use cases. Normally this might require your code to respond to new configuration you build in, for example where they wish to determine the order your Apex code is called, whether parts should be omitted, or even apply the logic to record changes on their own Custom Objects.

Design considerations for Invocable Methods

Now that we understand a bit more about the tools and the parts of your solution you might want to consider exposing, let's consider some common design considerations in doing so. The key to exposing Apex code to these tools is leveraging a new Spring'15 feature known as Invocable Methods. Here is an example I wrote recently…

global with sharing class RollupActionCalculate
{
	/**
	 * Describes a specific rollup to process
	 **/
	global class RollupToCalculate {

		@InvocableVariable(label='Parent Record Id' required=true)
		global Id ParentId;

		@InvocableVariable(label='Rollup Summary Unique Name' required=true)
		global String RollupSummaryUniqueName;

		private RollupService.RollupToCalculate toServiceRollupToCalculate() {
			RollupService.RollupToCalculate rollupToCalculate = new RollupService.RollupToCalculate();
			rollupToCalculate.parentId = parentId;
			rollupToCalculate.rollupSummaryUniqueName = rollupSummaryUniqueName;
			return rollupToCalculate;
		}
	}

	@InvocableMethod(
		label='Calculates a rollup'
		description='Provide the Id of the parent record and the unique name of the rollup to calculate, you can specify the same Id multiple times to invoke multiple rollups')
	global static void calculate(List<RollupToCalculate> rollupsToCalculate) {

		List<RollupService.RollupToCalculate> rollupsToCalc = new List<RollupService.RollupToCalculate>();
		for(RollupToCalculate rollupToCalc : rollupsToCalculate)
			rollupsToCalc.add(rollupToCalc.toServiceRollupToCalculate());

		RollupService.rollup(rollupsToCalc);
	}
}

NOTE: The use of global is only important if you plan to expose the action from an AppExchange package. If you're developing a solution in a Sandbox for deployment to Production, you can use public.

As those of you following my blog may have already seen, I've been busy enabling my LittleBits Connector and Declarative Lookup Rollup Summaries packages for these tools. In doing so, I've arrived at the following design considerations when thinking about exposing code via Invocable Methods.

  1. Design for admins not developers. Methods appear as ‘actions’ or ‘elements’ in the tools. Work with a few admins/consultants you know to sense check that what you're exposing makes sense to them; a method name or purpose from a developer's perspective might not make as much sense to an admin.
  2. Don't go crazy with the annotations! Salesforce has made it really easy to expose an Apex method via simple Apex annotations such as @InvocableMethod and @InvocableVariable. Just because it's that easy doesn't mean you don't have to think carefully about where you apply them. I view Invocable Methods as part of your solution's API, and treat them as such in terms of separation of concerns and best practices. So I would not apply them to methods on controller classes or even service classes; instead I would apply them to a dedicated class that delegates to my service or business layer classes.
  3. Apex class, parameter and member names matter. As with my thoughts on API best practices, establish a naming convention and use of common terms when naming these. Salesforce provides a REST API for describing and invoking Invocable Methods over HTTP; the names you use define that API. Currently both tools show the Apex class name and not the label, so I would derive some way to group like actions together. I've chosen to follow the pattern [Feature]Action[ActionName], e.g. LittleBitsActionSendToDevice, so all actions for a given feature in your application at least group together.
  4. Use the ‘label’ and ‘description’ annotation attributes. Both the @InvocableMethod and @InvocableVariable annotations support these; the tools will show your label instead of your Apex variable name in the UI. As far as I can see, the Process Builder tool does not presently use the parameter description sadly (though Visual Workflow does), and neither tool uses the method description. I would still recommend you define both, in case future releases start to use them.
    • NOTE: As this text is end user facing, you may want to run it past a technical author or others to look for typos, spelling etc. Currently, there does not appear to be a way to translate these via Translation Workbench.

  5. Use the ‘required’ attribute. When a user adds your Invocable Method to a Process or Flow, it automatically adds prompts in the UI for parameters marked as required.
    	global class SendParameters {
    		@InvocableVariable(
                       Label='Access Token'
                       Description='Optional, if set via Custom Setting'
                       Required=False)
    		global String AccessToken;
    		@InvocableVariable(
                      Label='Device Id' Description='Optional, if set via Custom Setting'
                      Required=False)
            global String DeviceId;
    		@InvocableVariable(
                       Label='Percent'
                       Description='Percent of voltage sent to device'
                       Required=True)
            global Decimal Percent;
    		@InvocableVariable(
                       Label='Duration in Milliseconds'
                       Description='Duration of voltage sent to device'
                       Required=True)
            global Integer DurationMs;
    	}
    
        /**
         * Send percentages and durations to LittleBits cloud enabled devices
         **/
        @InvocableMethod(Label='Send to LittleBits Device' Description='Sends the given percentage for the given duration to a LittleBits Cloud Device.')
        global static void send(List<SendParameters> sendParameters) {
        	System.enqueueJob(new SendAsync(sendParameters));
    	}
    

    Visual Workflow Example
    [Screenshot: the action's parameters in the Flow designer]

    Lightning Process Builder Example
    [Screenshot: the action's parameters in Process Builder]

  6. Bulkification matters, really it does! Pay close attention to the restrictions around your method signature when applying these annotations, as described in Invocable Method Considerations and Invocable Variables Considerations. One of the restrictions that made me smile was the compiler's insistence that parameters be list based! If you recall, this is also one of my design guidelines for the Apex Enterprise Patterns Service layer, basically because it forces the developer to think about the bulk nature of the platform. Salesforce have thankfully enforced this, to ensure that when either of these tools calls your method with multiple parameters in bulk record scenarios, you're strongly reminded to ensure your code behaves itself! Of course, if you're delegating to your Service layer, as I am doing in the examples above, it should be a fairly painless affair to marshall the parameters across.
    • NOTE: While you should of course write your Apex tests to pass bulk test data and thus test your bulkification code, you can also cause Process Builder and Visual Workflow to call your method with multiple parameters by using List View bulk edits, the Salesforce API or Anonymous Apex to perform a bulk DML operation on the applicable object (see the sketch after this list).
  7. Separation of Concerns and Apex Enterprise Patterns. As you can see in the examples above, I'm treating Invocable Actions as just another caller of the Service layer (described as part of the Apex Enterprise Patterns series), with its own concerns and requirements.
    • For example, the design of the Service method signatures is limited only by the native Apex language itself, so can be quite expressive, whereas Invocable Actions have a much more restricted capability in terms of how they define themselves; we want to keep these two concerns separate.
    • You'll also see in my LittleBits action example above that I'm delegating to the Service layer only after wrapping it in an Async context, since Process Builder calls the Invocable Method in the Trigger context. Remember the Service layer is ‘caller agnostic’; thus it is not its responsibility to implement Async on the caller's behalf.
    • I have also taken to encapsulating the parameters and marshalling of these into the Service types as needed, thus allowing the Service layer and Invocable Method to evolve independently if needed.
    • Finally, as with any caller of the Service layer, I would expect only one invocation, and would create a compound Service (see Service best practices) if more was needed, as opposed to calling multiple Services from the one Invocable Method.

  8. Do I need to expose a custom Apex or REST API as well? You might be wondering, since we now have a way to call Apex code declaratively and in fact via the standard Salesforce REST API (see Carolina's exploits here), would you ever still consider exposing a formal Apex or REST API (as described here)? Well, here are some of my current thoughts on this; things are still evolving, so I stress these are still my current thoughts…
    • The platform provides a REST API generated from your Invocable Method annotations, so aesthetically and functionally it may look different from what you might consider a true RESTful API; in addition it will fall under a different URL path to those you defined via the explicit Apex REST annotations.
    • If the data structures of your API are such that they don't fall within those supported by Invocable Methods (such as Apex types referencing other Apex types, use of Maps, Enums etc.), then you'll have to expose the Service anyway as an Apex API.
    • If your parameters and data structures are no more complex than those required for an Invocable Method, and thus would be identical to those of the Apex Service, you might consider using the same class. However keep in mind you can only have one Invocable Method per class, and Apex Services typically have many methods related to the feature or service itself. Also, while the data structure may match today, new functional requirements may stress this in the future, causing you to reinstate a Service layer, or worse, bend the parameters of your Invocable Methods.
    • In short, my current recommendation is to continue to create a fully expressive Service driven API for your solutions, and treat Invocable Methods as another API variant.
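
Following up on the bulk testing note in point 6 above, here is a minimal Anonymous Apex sketch that would cause a Process Builder action on Opportunity to be invoked with multiple parameters in one call (the field values are illustrative only):

// A single bulk DML statement results in one invocation of the Invocable
// Method with a list of parameters, exercising your bulkification code
List<Opportunity> opps = new List<Opportunity>();
for(Integer i = 0; i < 200; i++) {
	opps.add(new Opportunity(
		Name = 'Bulk Invocation Test ' + i,
		StageName = 'Open',
		CloseDate = System.today()));
}
insert opps;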

 

Hopefully this has given you some useful things to think about when planning your own use of Invocable Methods in the future. As this is a very new area of the platform, I'm sure the above will evolve further over time as well.

 

 

 



Declarative Lookup Rollup Summaries – Spring’15 Release

This tool had a steady number of releases last year, containing a few bug fixes and tweaks. This Spring'15 release contains a few enhancements I'd like to talk about in more detail in this blog. So let's get started with the list…

  • Support for Count Distinct rollups
  • Support for Picklist / Text based rollups via Concatenate and Concatenate Distinct operations
  • Support for Rolling up First or Last child records, ordered by fields on the child such as Created Date
  • Support for Lightning Process Builder

UPGRADE NOTE: The Lookup Rollup Summaries object has some new fields and pick list values to support the above features. If you are upgrading you will need to add these fields and pick list values manually. I have listed these in the README.

[Screenshot: defining a new rollup]

The following sections go into more details on each of the features highlighted above.

Support for Count Distinct rollups

If you want to exclude duplicate values from the Field to Aggregate on the child object, select the Count Distinct option. For example, if you're counting Car__c child records by Colour__c and there are 4 records, Red, Green, Blue, Red, the count would result in 3 being stored in the Aggregate Result Field. You can read more about the original idea here.

Support for Picklist / Text based rollups

The rollup operations Sum, Min, Max, Avg, Count and Count Distinct are still restricted to rolling up number and date/time fields, as per the previous releases of the tool.

However this version of the tool now supports the new operations Concatenate and Concatenate Distinct, which support text based fields, allowing you to effectively concatenate field values on children into a multi-picklist or text field on the parent object record.

By default children are concatenated based on ascending order of the values in the Field to Aggregate field. However you can also use the Field to Order By field to specify an alternative child field to order by. You can define a delimiter via the Concatenate Delimiter field. You can also enter BR() in this field to add new lines between the values or a semi-colon ; character to support multi-picklist fields. You can read more about the original idea here.

Support for Rolling up First or Last child records

By using the new First and Last operations you can choose to store the first or last record of the associated child records in the parent field. As with the concatenate feature above, the Field to Aggregate is used to order the child records in ascending order. However, the Field to Order By field can also be used to order by an alternative field. Original idea here.

Support for Lightning Process Builder

To date the tool has required a small Apex Trigger to be deployed via the Manage Child Trigger button. This release adds support for Lightning Process Builder actions, allowing rollup calculations to be performed when creating or updating child object records. Sadly, for the moment Process Builder does not support record deletion, so if your rollups require calculation on create, update and delete, please stick to the traditional approach.

To use the feature, ensure you have selected Process Builder from the Calculation Mode field and specified a unique name for the rollup definition in the Lookup Rollup Summary Unique Name field.
[Screenshots: the Calculation Mode and Lookup Rollup Summary Unique Name fields]

Once this is configured, you're ready to add an Action in Process Builder to execute the rollup. The following Process shows an example configuration.

[Screenshots: example Process Builder action configuration]

As you've seen from some of my more recent blogs, I'm getting quite excited about Process Builder. However, I have to say on further inspection it has a few limits that, for now, really stop it being super powerful! The lack of support for handling deletion of records, and some details in terms of how optimally it invokes the actions, prevent me from recommending this feature in production. Please give me your thoughts on how else you think it could be used.



Controlling Internet Devices via Lightning Process Builder

Lightning Process Builder will soon become GA once the Spring'15 rollout completes in early February, just a few short weeks away as I write this. I don't actually know where to start in terms of how huge and significant this new platform feature is! In my recent blog Salesforce evolves customization to a new level! over on the FinancialForce blog, I describe Salesforce as ‘the most powerful and productive cloud platform on the planet’. The more I get into Process Builder and how as a developer I can empower users of it, the more that statement is already starting to sound like an understatement!

There are many things getting me excited (as usual) about Salesforce these days; in addition to Process Builder and Invocable Actions (more on this later), it's the Internet of Things. I just love the notion of inspecting and controlling devices no matter where I am on the planet. If you've been following my blog from earlier this year, you'll hopefully have seen my exploits with the LittleBits cloud enabled devices and the Salesforce LittleBits Connector.

I have just spent a very enjoyable Saturday morning in my Spring'15 Preview org with a special build of the LittleBits Connector that leverages the ability for Process Builder to call out to specially annotated Apex code, which in turn calls out to the LittleBits Cloud API.

The result: a fully declarative way to connect to LittleBits devices from Process Builder! If you watch the demo from my past blog you'll see my Opportunity Probability Pointer in action; the following implements the same process but using only Process Builder!

[Screenshot: the LittleBits Process Builder process]

Once Spring'15 has completely rolled out, I'll release an update to the Salesforce LittleBits Connector managed package that supports Process Builder, so you can try the above out. In the meantime, if you have a Spring'15 Preview Org you can deploy direct from GitHub and try it out now!

How can developers enhance Process Builder?

There are some excellent out of the box actions from which Process Builder or Flow Designer users can choose, as I have covered in past blogs. What is really exciting is how developers can effectively extend these actions.

So while Salesforce has yet to provide a declarative means to make Web API callouts without code, a developer only needs to provide a bit of Apex code to make the above work. Salesforce have made it insanely easy to expose code to tools like Process Builder and also Visual Flow. Such tools dynamically inspect Apex code in the org (including that from AppExchange packages) and render a user interface for the Process Builder user to provide the necessary inputs (and map outputs if defined). All the developer has to do is use some Apex annotations.

global with sharing class LittleBitsActionSendToDevice {

	global class SendParameters {
		@InvocableVariable
		global String AccessToken;
		@InvocableVariable
        global String DeviceId;
		@InvocableVariable
        global Decimal Percent;
		@InvocableVariable
        global Integer DurationMs;
	}
	
    /**
     * Send percentages and durations to LittleBits cloud enabled devices
     **/
    @InvocableMethod(
    	Label='Send to LittleBits Device' 
    	Description='Sends the given percentage for the given duration to a LittleBits Cloud Device.')
    global static void send(List<SendParameters> sendParameters) {
    	System.enqueueJob(new SendAsync(sendParameters));
	}	
}

I learnt quite a lot about writing Invocable Actions today and will be following up with some guidelines and thoughts on how I have integrated them with the Apex Enterprise Patterns Service layer.



Creating, Assigning and Checking Custom Permissions

I have been wanting to explore Custom Permissions for a little while now; since they are now GA in Winter'15, I thought it's about time I got stuck in. Profiles and Permission Sets have, until recently, focused on granting permissions to entities only known to the platform, such as objects, fields, Apex classes and Visualforce pages. In most cases these platform entities map to a specific feature in your application you want to provide permission to access.

However there are cases where this is not always that simple. For example, consider a Visualforce page that controls a given process in your application; it has three buttons on it: Run, Clear History and Reset. You can control access to the page itself, but how do you control access to the three buttons? What you need is to be able to teach Permission Sets and Profiles about your application functionality. Enter Custom Permissions!

CustomPermissions
CustomPermission

NOTE: You can also define dependencies between your Custom Permissions; for example, the Clear History and Reset permissions might be dependent on a Manage Important Process custom permission in your package.

Once these have been created, you can reference them in your packaged Permission Sets and since they are packaged themselves, they can also be referenced by admins managing your application in a subscriber org.

SetCustomPermissions

The next step is to make your code react to these custom permissions being assigned or not.

New Global Variable $Permission

You can use the $Permission global variable from a Visualforce page or, as SFDCWizard points out here, from Validation Rules! Here is the Visualforce page example given by Salesforce in their documentation.

<apex:pageBlock rendered="{!$Permission.canSeeExecutiveData}">
   <!-- Executive Data Here -->
</apex:pageBlock>
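
To illustrate the Validation Rules mention above, here is a rough sketch of a rule formula (the Executive_Notes__c field is made up for this example); it blocks edits to the field by users who lack the Custom Permission:

/* Block edits to Executive_Notes__c by users lacking the permission */
AND(
    ISCHANGED( Executive_Notes__c ),
    NOT( $Permission.canSeeExecutiveData )
)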

Referencing Custom Permissions from Apex

In the case of object and field level permissions, the Apex Describe API can be used to determine if an object or field is available and for what purpose, read or edit for example. This is not going to help us here, as Custom Permissions are not related to any specific object or field. The solution is to leverage the Permission Set Object API to query the SetupEntityAccess and CustomPermission records for Permission Sets or Profiles assigned to the current user.

The following SOQL snippets are from the CustomPermissionsReader class I created to help with reading Custom Permissions in Apex (more on this later). As you can see, you need to run two SOQL statements to get what you need: the first to get the Ids, the second to query whether the user has actually been assigned a Permission Set containing them.


List<CustomPermission> customPermissions =
    [SELECT Id, DeveloperName
       FROM CustomPermission
       WHERE NamespacePrefix = :namespacePrefix];

// Index the DeveloperNames by Id ready for the second query
Map<Id, String> customPermissionNamesById = new Map<Id, String>();
for (CustomPermission customPermission : customPermissions) {
    customPermissionNamesById.put(customPermission.Id, customPermission.DeveloperName);
}

List<SetupEntityAccess> setupEntities =
    [SELECT SetupEntityId
       FROM SetupEntityAccess
       WHERE SetupEntityId IN :customPermissionNamesById.keySet() AND
             ParentId IN (SELECT PermissionSetId
                FROM PermissionSetAssignment
                WHERE AssigneeId = :UserInfo.getUserId())];
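
The reader then reduces those results to a simple set of names for hasPermission to check; roughly like this (a sketch of the idea, not the exact class internals):

// Collect the DeveloperNames of the Custom Permissions assigned to the user
Set<String> assignedPermissionNames = new Set<String>();
for (SetupEntityAccess sea : setupEntities) {
    assignedPermissionNames.add(customPermissionNamesById.get(sea.SetupEntityId));
}
// hasPermission('Reset') then becomes a simple Set.contains check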

Now personally I don’t find this approach that appealing for general use: firstly, the Permission Set object relationships are quite hard to get your head around, and secondly, we get charged by the platform (through the SOQL governor limits) to determine security. As a good member of the Salesforce community I of course turned my dislike into an Idea, “Native Apex support for Custom Permissions”, and posted it here to recommend Salesforce include a native class for reading these, similar to Custom Labels for example.

Introducing CustomPermissionsReader

In the meantime I have set about creating an Apex class to help make querying and using Custom Permissions easier. Such a class might one day be replaced if my Idea becomes a reality, or maybe its internal implementation just gets improved. One thing’s for sure: I’d much rather use it for now than seed ad-hoc SOQL throughout a code base!

It’s pretty straightforward to use: construct it in one of two ways, depending on whether you want all non-namespaced Custom Permissions or, if you’re developing an AppExchange package, give it any one of your packaged Custom Objects and it will ensure it only ever reads the Custom Permissions associated with your package.

You can download the code and test for CustomPermissionsReader here.


// Default constructor: scope is all Custom Permissions in the default namespace
CustomPermissionsReader cpr = new CustomPermissionsReader();
Boolean hasPermissionForReset = cpr.hasPermission('Reset');

// Alternative constructor: scope is Custom Permissions that share the
//   same namespace as the given packaged custom object
CustomPermissionsReader packagedCpr = new CustomPermissionsReader(MyPackagedObject.SObjectType);
Boolean packagedHasPermissionForReset = packagedCpr.hasPermission('Reset');

Like any use of SOQL, we must think in a bulkified way; indeed, it’s likely that for average to complex pieces of functionality you may want to check two or more Custom Permissions once you get started with them. As such it’s not really good practice to make single queries in each case.

For this reason the CustomPermissionsReader was written to load all applicable Custom Permissions and act as a kind of cache. In the next example you’ll see how I’ve leveraged the Application class concept from the Apex Enterprise Patterns conventions to make it a singleton for the duration of the Apex execution context.

Here is an example of an Apex test that creates a PermissionSet, adds the Custom Permission and assigns it to the running user to confirm the Custom Permission was granted.

	@IsTest
	private static void testCustomPermissionAssigned() {

		// Create PermissionSet with Custom Permission and assign to test user
		PermissionSet ps = new PermissionSet();
		ps.Name = 'Test';
		ps.Label = 'Test';
		insert ps;
		SetupEntityAccess sea = new SetupEntityAccess();
		sea.ParentId = ps.Id;
		sea.SetupEntityId = [select Id from CustomPermission where DeveloperName = 'Reset'][0].Id;
		insert sea;
		PermissionSetAssignment psa = new PermissionSetAssignment();
		psa.AssigneeId = UserInfo.getUserId();
		psa.PermissionSetId = ps.Id;
		insert psa;

		// Create reader
		CustomPermissionsReader cpr = new CustomPermissionsReader();

		// Assert the CustomPermissionsReader confirms custom permission assigned
		System.assertEquals(true, cpr.hasPermission('Reset'));
	}

Separation of Concerns and Custom Permissions

Those of you familiar with using Apex Enterprise Patterns might be wondering where checking Custom Permission fits in terms of separation of concerns and the layers the patterns promote.

The answer is, at the very least, in or below the Service layer; enforcing any kind of security is the responsibility of the Service layer, and callers of it are within their rights to assume it is checked, especially if you have chosen to expose your Service layer as your application API.
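
As a minimal sketch of such a Service layer guard (the service class, method and exception below are hypothetical, using the CustomPermissionsReader described above):

public class ImportantProcessService {

	public class ServiceException extends Exception {}

	public static void reset() {
		// Enforce the Custom Permission before doing any work
		if (!new CustomPermissionsReader().hasPermission('Reset')) {
			throw new ServiceException('You do not have permission to Reset.');
		}
		// ... perform the actual reset work here ...
	}
}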

This doesn’t mean, however, that you cannot improve your user experience by using it from within Apex Controllers, Visualforce pages or @RemoteAction methods to control the visibility of related UI components. No point in teasing the end user!

Integrating CustomPermissionsReader into your Application class

The following code uses the Application class concept I introduced last year and at Dreamforce 2014, which is a single place to access your application-scope concepts, such as factories for selectors, domain and service class implementations (it also has a big role to play when mocking).

public class Application {

	/**
	 * Exposes a typed representation of the Application's Custom Permissions
	 **/
	public static final PermissionsFactory Permissions = new PermissionsFactory();

	/**
	 * Class provides a typed representation of an Application's Custom Permissions
	 **/
	public class PermissionsFactory extends CustomPermissionsReader
	{
		public Boolean Reset { get { return hasPermission('Reset'); } }
	}
}

This approach ensures there is only one instance of the CustomPermissionsReader per Apex execution context and, through the properties it exposes, gives a compiler-checked way of referencing the Custom Permissions, making it easier for application developers to access them.

if(Application.Permissions.Reset)
{
  // Do something to do with Reset...
}

Finally, as a future possibility, this approach gives a nice injection point for mocking the status of Custom Permissions in your Apex unit tests, rather than having to go through the trouble of setting up a Permission Set and assigning it in your test code every time, as shown above.
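
To sketch one way that could work (everything below is hypothetical, a variation on the Application class above rather than the code as written), an interface in front of the reader gives tests an injection seam:

public class Application {

	// Seam for injecting permission behaviour (hypothetical sketch)
	public interface IPermissions {
		Boolean hasPermission(String permissionName);
	}

	// Production implementation delegates to CustomPermissionsReader
	private class RealPermissions implements IPermissions {
		private CustomPermissionsReader reader = new CustomPermissionsReader();
		public Boolean hasPermission(String permissionName) {
			return reader.hasPermission(permissionName);
		}
	}

	// @TestVisible allows a unit test to swap in a stub
	@TestVisible
	private static IPermissions permissionsInstance;

	public static IPermissions Permissions {
		get {
			if (permissionsInstance == null) {
				permissionsInstance = new RealPermissions();
			}
			return permissionsInstance;
		}
	}
}

A test could then stub the permission state entirely in memory, with no Permission Set DML at all:

@IsTest
private class MockedPermissionsTest {

	// Stub that grants only the Reset permission
	private class StubPermissions implements Application.IPermissions {
		public Boolean hasPermission(String permissionName) {
			return permissionName == 'Reset';
		}
	}

	@IsTest
	private static void testWithStubbedPermissions() {
		Application.permissionsInstance = new StubPermissions();
		System.assertEquals(true, Application.Permissions.hasPermission('Reset'));
	}
}

The trade-off in this sketch is losing the compiler-checked Reset property shown earlier, though the typed properties could equally be routed through the injected instance.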

Call to Action: Ideas to Upvote

While writing this blog I created one Idea and came across two others I’d like to call you, the reader, to action on! Please take a look and, of course, only if you agree it’s a good one, give it the benefit of your much-needed upvote!
