Andy in the Cloud

From BBC Basic to Force.com and beyond…



Great Contributions to Apex Enterprise Patterns!

Just one week before the start of Dreamforce 2015, it will be two years since the Apex Enterprise Patterns library was first published. My community time is increasingly busy these days, not only answering questions around this and other GitHub repositories, but also reviewing Pull Requests. Both are a sign of a healthy open source project for sure! In this short blog I just wanted to call out some of the new features added by the community. So let's get started…

  • Unit of Work, register method flexibility.
    The initial implementation of the Unit Of Work used a List to contain the records registered with it. This meant that in some cases, if a code path registered the same record twice, an error would occur. This change means you no longer have to worry about this: if complex code paths happen to register the same record again, it is handled without error. Thanks to: Thomas Fuda for this enhancement!
  • Unit of Work, customisable DML implementation via IDML interface. 
    This improvement allows you to implement the IDML interface to provide your own calls to the platform's DML methods (see the sketch just after this list). The use case that prompted this enhancement was to allow fine grained control using the Database methods, which permit options such as all or nothing control (the default being all or nothing). Thanks to: David Esposito for this enhancement!
  • Unit of Work and Application Factory, new newInstance(Set<SObjectType> types) method
    This enhancement provides the ability to leverage the factory but have it provide Unit of Work instances configured using a specific set of SObjectTypes rather than the default one. This is useful when you only have a few objects to register, perhaps determined dynamically, or a set different from the default Application configuration for a specific use case. Please read the comments on this method for more details. Thanks to: John Davis for this enhancement!
  • Unit of Work, Eventing API
    New virtual methods have been added to the Unit of Work class, allowing those who want to subclass it to hook in special handling code executed during commitWork. This lets you extend, in a very custom way, your application's or service's own work at various stages: the start, the end and during the DML operations. For example, common validation that can only occur once everything has been written, but before the request ends. Thanks to: John Davis for this enhancement!
  • Unit of Work, Bulkified Register Methods
    It's now possible to register lists of SObjects with the Unit of Work in one method call rather than one call per record (also covered in the sketch below). While the Unit of Work has always been internally bulkified, this enhancement helps callers who are dealing with lists or maps interact with the Unit Of Work more easily. Thanks to: MayTheSForceBeWithYou (now a FinancialForce employee) for this enhancement!
  • Selector, Better default Order By handling
    Not all standard objects have a Name field; this excellent enhancement ensures that when this is the case the base class looks for the next best field to order your records by default. Alex has also made numerous other tweaks to the Selector layer in addition to this, by the way! Thanks to: Alex Tennant for this enhancement!
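
To make a couple of these enhancements more concrete, here is a minimal sketch combining a custom IDML implementation (using the Database methods in partial success mode) with the bulkified register methods. Treat it as illustrative only: the constructor overload and interface location are as I recall them at the time of writing, so do check your version of fflib_SObjectUnitOfWork.

// Sketch of a custom IDML implementation that uses the Database methods with
// allOrNone set to false, so valid rows still commit when others fail
public class PartialSuccessDML implements fflib_SObjectUnitOfWork.IDML {
	public void dmlInsert(List<SObject> records) { Database.insert(records, false); }
	public void dmlUpdate(List<SObject> records) { Database.update(records, false); }
	public void dmlDelete(List<SObject> records) { Database.delete(records, false); }
}

// Construct a Unit of Work over a specific set of object types with the custom IDML,
// then register a whole list of records in one call via the bulkified register method
fflib_SObjectUnitOfWork uow = new fflib_SObjectUnitOfWork(
	new List<Schema.SObjectType> { Account.SObjectType },
	new PartialSuccessDML());
uow.registerNew(new List<SObject> {
	new Account(Name = 'Acme Corp'),
	new Account(Name = 'Globex Inc') });
uow.commitWork();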

In addition to the above, there have been numerous behind the scenes improvements to the library as well, making it more stable and supporting various non-standard aspects of standard objects and such like. I based the above on GitHub's report of commits to the repository here.

In addition to code changes, there are also some great discussions and ideas in the pipeline…

  • Unit of Work and Cyclic Dependencies
    This limitation doesn't come up often, but in complex situations it causes fans of the UOW some pain, forcing them to leave it behind when it does. This is a long standing request by Seth Stone; it has seen a few attempts since, via Alex Tennant, and more recently some upcoming efforts by john-m are in the pipeline. Watch this space!
  • Helper methods to fflib_SObjectDomain for Changed Fields
    Another Force.com MVP, Daniel Hoechst, suggested this feature, inspired by those present in other trigger frameworks; join the conversation here.
  • Support for undelete Trigger events to Domain layer
    This idea, raised by our good friends over at BigAssForce, is seeing some recent interest for contribution by Autobat; see here for more discussion.

Thank you one and all for being social and contributing to a growing community around this library!



Using Action Link Templates to Declaratively call APIs

Salesforce recently introduced a new platform feature, now GA, called Action Link Templates. Since then it's been staring me in the face and bugging me that I didn't quite understand them, until now…

While there is quite a lot of information in the Salesforce documentation, I was still a bit lost as to what an Action Link even was. It turns out that they are a means to define actions that can appear in a Chatter post to call external or Salesforce web based APIs, thus allowing users to do more without leaving their feed.

After realising it's a means to link user actions with APIs, I could not resist exploring further with one of my favourite external APIs, from LittleBits. The LittleBits cloud API can be used with cloud connected devices constructed by snapping together modules.

The following shows a Chatter post I created with an Action Link button on it that, without any code, calls the LittleBits API to cause my device to perform an action. You can read more about my past exploits with LittleBits devices and Salesforce here.

ActionLinkPrimary

It appears, for now at least, that such Chatter posts need to be created programmatically, and as such they lend themselves to integration use cases. While it is possible to create Chatter posts with Action Links in code without using a template, that's more coding and doesn't encourage reuse of Action Link definitions, which can also be packaged by the way. So this blog focuses, as always, on the best practice of balancing declarative features with a minimum of code to create the post itself. First of all, let's get some terminology out of the way…

  • Action Link, can be rendered as a button or a menu option that appears inline in the Chatter post or in its overflow menu. The button can either call an API, redirect to another web site or offer to download a file for the user. You have to add an Action Link to an Action Link Group before you can add it to a Chatter post.
  • Action Link Group, is a collection of one or more Action Links. The idea is that the group presents a set of choices you want to give the user, e.g. Accept, Decline. You can define a default choice, though the user can only pick one. Think of it like a group of radio controls or a choices type UI element. As mentioned above, you can create both of these 100% in code if you desire.
  • Action Link Group Template, is as the name suggests similar to the above, but allows the declarative definition of the buttons to be separated from their programmatic application. Once you start defining Action Links you'll see they require a bit of knowledge about the underlying API. So in addition to the reuse benefit, a template is a good way to have someone else, or a package developer, do that work for you. To make them generic, you can define placeholders in Action Links, called bindings, that allow you to vary the information passed to the underlying API being called.

To define an Action Link you first need to create its Action Link Group. Because we are using a template, this can be done using point and click. Under the Setup menu, under Create, you'll find Action Link Templates; click New.

ActionLinkGroup

The Category field allows you to determine where the Action Link appears: in the body of the feed by selecting Primary action (as shown in the screenshot above), or in the overflow menu by selecting Overflow action, as shown in the screenshot below. Note that my example only defines one Action Link; you can define more.

ActionLinkOverflow

Through the Executions Allowed field, you can also determine whether the Action Link can be invoked only once (first come first served) or once by each user who can see the Chatter post (for example a post to a group). You can read more about these and other fields here.

You're now ready to add an Action Link to the template. First study the documentation of your chosen web API; note that it can in theory be a SOAP based API, though REST is generally simpler. Hopefully, like the LittleBits API, there are some samples that you can copy and paste to get you started. The following extract is what the LittleBits API documentation has to say about the API to control (output to) a device:

This outputs 10% amplitude for 10 seconds:

curl -XPOST https://api-http.littlebitscloud.cc/devices/a84hf038ierj/output \
  -H 'Authorization: Bearer TOKEN' \
  -H 'Accept: application/vnd.littlebits.v2+json' \
  --data '{"percent":10,"duration_ms":10000}'
"OK"

REST API documentation often uses a command line program called curl as an easy way to try out the API without having to write program code. In the screenshot below you can see how the curl parameters used in the extract above have been mapped to the fields when defining an Action Link. Note also that I have used the {!Bindings.var} syntax to define the variable aspects, such as the deviceId, accessToken, percent and durationMs.

ActionLinkLittleBits
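
In case the screenshot is hard to read, the mapping works out roughly as follows. The field labels here are approximate and the values are reconstructed from the curl sample above; the screenshot shows the actual configuration, so treat this purely as an illustration of where the bindings go:

Action URL:         https://api-http.littlebitscloud.cc/devices/{!Bindings.deviceId}/output
HTTP Method:        HttpPost
HTTP Headers:       Authorization: Bearer {!Bindings.accessToken}
                    Accept: application/vnd.littlebits.v2+json
HTTP Request Body:  {"percent":{!Bindings.percent},"duration_ms":{!Bindings.durationMs}}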

NOTE: The User Visibility setting is quite flexible and allows you to control who can actually press the button, as opposed to who can see the Chatter post.

Go back to your Action Link Group Template and check the Published checkbox. This makes it available for use when creating posts, but also has the effect of making certain aspects read only, such as the bindings, though you can thankfully continue to tweak the API header and body templates defined on the Action Links.

Execute the following from the Developer Console and it will create the Chatter post shown above. Currently neither Process Builder nor Visual Flow supports Action Link Templates when creating Chatter posts, which gives me an idea for a part two to this blog actually! For now, please up vote this idea and review the following code.

// Specify values for Action Link bindings
Map<String, String> bindingMap = new Map<String, String>();
bindingMap.put('deviceId', 'yourdeviceid');
bindingMap.put('accessToken', 'youraccesstoken');
bindingMap.put('percent', '50');
bindingMap.put('durationMs', '10000');
List<ConnectApi.ActionLinkTemplateBindingInput> bindingInputs = new List<ConnectApi.ActionLinkTemplateBindingInput>();
for (String key : bindingMap.keySet()) {
    ConnectApi.ActionLinkTemplateBindingInput bindingInput = new ConnectApi.ActionLinkTemplateBindingInput();
    bindingInput.key = key;
    bindingInput.value = bindingMap.get(key);
    bindingInputs.add(bindingInput);
}

// Create an Action Link Group definition based on the template and bindings
ActionLinkGroupTemplate template = [SELECT Id FROM ActionLinkGroupTemplate WHERE DeveloperName='LittleBits'];
ConnectApi.ActionLinkGroupDefinitionInput actionLinkGroupDefinitionInput = new ConnectApi.ActionLinkGroupDefinitionInput();
actionLinkGroupDefinitionInput.templateId = template.id;
actionLinkGroupDefinitionInput.templateBindings = bindingInputs;
ConnectApi.ActionLinkGroupDefinition actionLinkGroupDefinition =
    ConnectApi.ActionLinks.createActionLinkGroupDefinition(Network.getNetworkId(), actionLinkGroupDefinitionInput);
System.debug('Action Link Id is ' + actionLinkGroupDefinition.actionLinks[0].Id);

// Create the post and utilise the Action Link Group created above
ConnectApi.TextSegmentInput textSegmentInput = new ConnectApi.TextSegmentInput();
textSegmentInput.text = 'Click to Send to the Device.';
ConnectApi.FeedItemInput feedItemInput = new ConnectApi.FeedItemInput();
feedItemInput.body = new ConnectApi.MessageBodyInput();
feedItemInput.subjectId = 'me';
feedItemInput.body.messageSegments = new List<ConnectApi.MessageSegmentInput> { textSegmentInput };
feedItemInput.capabilities = new ConnectApi.FeedElementCapabilitiesInput();
feedItemInput.capabilities.associatedActions = new ConnectApi.AssociatedActionsCapabilityInput();
feedItemInput.capabilities.associatedActions.actionLinkGroupIds = new List<String> { actionLinkGroupDefinition.id };

// Post the feed item.
ConnectApi.FeedElement feedElement =
    ConnectApi.ChatterFeeds.postFeedElement(
        Network.getNetworkId(), feedItemInput, null);

If you review the debug log produced, you will see the above code outputs the Action Link Id. This can be used to retrieve response information from the web API called, which is especially useful if the web API callout failed, as only a generic failure message is shown to the end user. Once you have the Action Link Id, paste the following code into the Developer Console and review the debug log for the web API response.

ConnectApi.ActionLinkDiagnosticInfo diagInfo =
    ConnectApi.ActionLinks.getActionLinkDiagnosticInfo(
        Network.getNetworkId(), '0AnG0000000Cd3NKAS');
System.debug('Diag output ' + diagInfo.diagnosticInfo);

Summary

It's true that Chatter Actions (formerly Publisher Actions) are another means to customise the user experience of Chatter posts; however these require development of Visualforce pages or Canvas applications. By using Action Links you can provide a simpler, platform driven user experience with much less coding.

By using Action Link Group Templates you can separate the concerns of delivering an integration between those who know the external APIs and those driving the integration with Chatter via posts referencing them. The bindings form the contract between the two.

It's also worth noting that Apex REST APIs can be used from Action Links, as can other Salesforce APIs; in this case the authentication is handled for you, nice!



Where to place Validation code in an Apex Trigger?

Quite often when I answer questions on Salesforce StackExchange they prompt me to consider future blog posts. This question has been sitting on my blog list for a while, and I'm finally going to tackle the 'performing validation in the after' comment in this short blog post, so here goes!

Salesforce offers Apex developers two phases within an Apex Trigger: before and after. Most examples I see tend to perform validation in the before phase of the trigger; even the Salesforce examples show this. However there can be implications with this that are not at first obvious. Let's look at an example; first here is my object…

[Screenshot: the LifeTheUniverseAndEverything__c custom object]

Now the Apex Trigger doing some validation…

trigger LifeTheUniverseAndEverythingTrigger1 on LifeTheUniverseAndEverything__c
   (before insert) {

	// Make sure if there is an answer given it's always 42!
	for(LifeTheUniverseAndEverything__c record : Trigger.new) {
		if(record.Answer__c!=null && record.Answer__c != 42) {
			record.Answer__c.addError('Answer is not 42!');
		}
	}
}

The following test method asserts that a valid value has been written to the database. In reality you would also have tests that assert invalid values are rejected, though the test method below will suffice for this blog.

	@IsTest
	private static void testValidSucceeds() {

		// Insert a valid answer
		LifeTheUniverseAndEverything__c
			lifeTheUniverseAndEverything = new LifeTheUniverseAndEverything__c();
		lifeTheUniverseAndEverything.Answer__c = 42;
		insert lifeTheUniverseAndEverything;

		// Is the answer still the same, surely nobody could have changed it right?
		System.assertEquals(42,
			[select Answer__c
			   from LifeTheUniverseAndEverything__c
			   where Id = :lifeTheUniverseAndEverything.Id][0].Answer__c);
	}

This all works perfectly so far!

[Screenshot: the test method passing]

What harm can a second Apex Trigger do?

Once developed, Apex Triggers are either deployed into a production org or packaged within an AppExchange package which is then installed. In the latter case such Apex Triggers cannot be changed. The consideration raised here arises if a second Apex Trigger is created on the object. There can be a few reasons for this, especially if the existing Apex Trigger is managed and cannot be modified, or a developer simply chooses to add another Apex Trigger.

So what harm can a second Apex Trigger on the same object really cause? Well, like the first Apex Trigger it has the ability to change field values as well as validate them. As per the additional considerations at the bottom of the Salesforce trigger invocation documentation, Apex Triggers are not guaranteed to fire in any particular order. So what would happen if we add a second trigger, like the one below, which attempts to modify the answer to an invalid value?

trigger LifeTheUniverseAndEverythingTrigger2 on LifeTheUniverseAndEverything__c
  (before insert) {

	// I insist that the answer is actually 43!
	for(LifeTheUniverseAndEverything__c record : Trigger.new) {
		record.Answer__c = 43;
	}
}

At the time of writing in my org, it appears that this new trigger (despite its name) is actually being run by the platform before my first validation trigger. Thus, since the validation code gets executed after this new trigger changes the value, we can see the validation is still catching it. So while that's technically not what this test method was testing, it shows, for the purposes of this blog, that the validation is still working, phew!

[Screenshot: the test method still passing]

Apex Trigger execution order matters…

So while all seems well up until this point, remember that we cannot guarantee that Salesforce will always run our validation trigger last. What if our validation trigger ran first? Since we cannot determine the order of invocation of triggers, what we can do to illustrate the effects of this is simply swap the code in the two example triggers, like so.

trigger LifeTheUniverseAndEverythingTrigger2 on LifeTheUniverseAndEverything__c
  (before insert) {

	// Make sure if there is an answer given it's always 42!
	for(LifeTheUniverseAndEverything__c record : Trigger.new) {
		if(record.Answer__c!=null && record.Answer__c != 42) {
			record.Answer__c.addError('Answer is not 42!');
		}
	}
}

trigger LifeTheUniverseAndEverythingTrigger1 on LifeTheUniverseAndEverything__c
  (before insert, before update) {

	// I insist that the answer is actually 43!
	for(LifeTheUniverseAndEverything__c record : Trigger.new) {
		record.Answer__c = 43;
	}
}

Having effectively emulated the platform running the two triggers in a different order (validation trigger first, then the field modifying trigger second), our test asserts now show that the validation logic in this scenario failed to do its job and invalid data reached the database, not good!

[Screenshot: the test method failing]

So what's the solution to making my triggers bullet proof?

So what is the solution to avoiding this? Well, it's pretty simple really: move your validation logic into the after phase. Even though the triggers may still fire in different orders, one thing is certain: nothing, and I mean nothing, can change the field values in the after phase of a trigger execution, meaning you can reliably check them without fear of them changing later!

trigger LifeTheUniverseAndEverythingTrigger2 on LifeTheUniverseAndEverything__c
  (after insert) {

	// Make sure if there is an answer given it's always 42!
	for(LifeTheUniverseAndEverything__c record : Trigger.new) {
		if(record.Answer__c!=null && record.Answer__c != 42) {
			record.Answer__c.addError('Answer is not 42!');
		}
	}
}

Thus with this change in place, even though the other trigger still attempts to change the inserted value to 43 in the before phase, the validation running in the after phase prevents invalid records being inserted into the database, success!

[Screenshot: the test method passing again]

Summary

This approach is actually referenced in a few Trigger frameworks such as Tony Scott’s ‘Trigger Pattern for Tidy, Streamlined, Bulkified Triggers‘ and the Apex Enterprise Patterns Domain pattern.

One downside here is that in error scenarios the record passes through potentially many more platform features (workflow and other processes) before your validation eventually stops proceedings and asks the platform to roll everything back. That said, error scenarios are hopefully the less frequent use cases once users learn your system.

So if you feel the scenario described above is likely to occur (particularly if you're developing a managed package, where it's likely subscriber developers will add their own triggers), you should seriously consider leveraging the after phase of triggers for validation. Note this also applies to the update and delete events in triggers; a sketch of the update case follows below.
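
As a quick illustrative sketch of the update case, building on the earlier example, the after update phase also gives you Trigger.oldMap, so you can limit the validation to records whose answer actually changed:

trigger LifeTheUniverseAndEverythingTrigger2 on LifeTheUniverseAndEverything__c
  (after insert, after update) {

	// On update, Trigger.oldMap lets us compare against the previous field values
	Map<Id, LifeTheUniverseAndEverything__c> oldRecords = Trigger.isUpdate
		? (Map<Id, LifeTheUniverseAndEverything__c>) Trigger.oldMap
		: null;

	// Make sure if there is an answer given it's always 42!
	for(LifeTheUniverseAndEverything__c record : Trigger.new) {
		// On update, skip records whose answer has not actually changed
		if(oldRecords != null &&
		   oldRecords.get(record.Id).Answer__c == record.Answer__c) {
			continue;
		}
		if(record.Answer__c != null && record.Answer__c != 42) {
			record.Answer__c.addError('Answer is not 42!');
		}
	}
}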

 



Unit Testing, Apex Enterprise Patterns and ApexMocks – Part 2

In Part 1 of this blog series I introduced a new means of applying true unit testing to Apex code leveraging the Apex Enterprise Patterns, covering the differences between true unit testing and integration testing, and how the lines can get a little blurred when writing Apex test methods.

If you're following along, you should be all set to start writing true unit tests against your controller, service and domain classes, leveraging the inbuilt dependency injection framework provided by the Application class introduced in the last blog, and injecting mock implementations of service, domain, selector and unit of work classes accordingly.

What are Mock classes and why do I need them?

Depending on the type of class you're unit testing, you'll need to mock different dependencies so that you don't have to worry about the data setup of those classes while you're busy putting your hard work into testing your specific class.

Unit Testing

In object-oriented programming, mock objects are simulated objects that mimic the behavior of real objects in controlled ways. A programmer typically creates a mock object to test the behavior of some other object, in much the same way that a car designer uses a crash test dummy to simulate the dynamic behavior of a human in vehicle impacts. Wikipedia.

In this blog we are going to focus on an example unit test method for a Service, which requires that we mock the unit of work, selector and domain classes it depends on (unit tests for these classes will of course be written as well). Let's take a look first at the overall test method, then break it down bit by bit. The following test method makes no SOQL queries or DML to accomplish its goal of testing the service layer method.

	@IsTest
	private static void callingServiceShouldCallSelectorApplyDiscountInDomainAndCommit()
	{
		// Create mocks
		fflib_ApexMocks mocks = new fflib_ApexMocks();
		fflib_ISObjectUnitOfWork uowMock = new fflib_SObjectMocks.SObjectUnitOfWork(mocks);
		IOpportunities domainMock = new Mocks.Opportunities(mocks);
		IOpportunitiesSelector selectorMock = new Mocks.OpportunitiesSelector(mocks);

		// Given
		mocks.startStubbing();
		List<Opportunity> testOppsList = new List<Opportunity> { 
			new Opportunity(
				Id = fflib_IDGenerator.generate(Opportunity.SObjectType),
				Name = 'Test Opportunity',
				StageName = 'Open',
				Amount = 1000,
				CloseDate = System.today()) };
		Set<Id> testOppsSet = new Map<Id, Opportunity>(testOppsList).keySet();
		mocks.when(domainMock.sObjectType()).thenReturn(Opportunity.SObjectType);
		mocks.when(selectorMock.sObjectType()).thenReturn(Opportunity.SObjectType);
		mocks.when(selectorMock.selectByIdWithProducts(testOppsSet)).thenReturn(testOppsList);
		mocks.stopStubbing();
		Decimal discountPercent = 10;
		Application.UnitOfWork.setMock(uowMock);
		Application.Domain.setMock(domainMock);
		Application.Selector.setMock(selectorMock);

		// When
		OpportunitiesService.applyDiscounts(testOppsSet, discountPercent);

		// Then
		((IOpportunitiesSelector) 
			mocks.verify(selectorMock)).selectByIdWithProducts(testOppsSet);
		((IOpportunities) 
			mocks.verify(domainMock)).applyDiscount(discountPercent, uowMock);
		((fflib_ISObjectUnitOfWork) 
			mocks.verify(uowMock, 1)).commitWork();
	}

First of all, you'll notice the test method name is a little longer than you might be used to, and the general layout of the test splits the code into Given, When and Then blocks. These conventions help add documentation, readability and consistency to test methods, as well as helping you focus on what it is you're testing and assuming to happen. The convention is one described by Martin Fowler; you can read more about GivenWhenThen here. The test method name itself stems from a desire to express the behaviour the test is confirming.

Generating and using Mock Classes

The Java based Mockito framework leverages the Java runtime's capability to dynamically create mock implementations. However the Apex runtime does not have any support for this. Instead, ApexMocks uses source code generation to generate the mock classes it requires, based on the interfaces you defined in my earlier post.

The patterns library also comes with its own mock implementation of the Unit of Work for you to use, as well as some base mock classes for your selector and domain mocks (made known to the tool below). The following code at the top of the test method creates the necessary mock instances that will be configured and injected into the execution.

// Create mocks
fflib_ApexMocks mocks = new fflib_ApexMocks();
fflib_ISObjectUnitOfWork uowMock = new fflib_SObjectMocks.SObjectUnitOfWork(mocks);
IOpportunities domainMock = new Mocks.Opportunities(mocks);
IOpportunitiesSelector selectorMock = new Mocks.OpportunitiesSelector(mocks);

To generate the Mocks class used above, use the ApexMocks Generator; you can run it via the Ant tool. The apex-mocks-generator-3.1.2.jar file can be downloaded from the ApexMocks repo here.

<?xml version="1.0" encoding="UTF-8"?>
<project name="Apex Commons Sample Application" default="generate.mocks" basedir=".">

	<target name="generate.mocks">
		<java classname="com.financialforce.apexmocks.ApexMockGenerator">
			<classpath>
				<pathelement location="${basedir}/bin/apex-mocks-generator-3.1.2.jar"/>
			</classpath>
			<arg value="${basedir}/fflib-sample-code/src/classes"/>
			<arg value="${basedir}/interfacemocks.properties"/>
			<arg value="Mocks"/>
			<arg value="${basedir}/fflib-sample-code/src/classes"/>
		</java>
	</target>

</project>

You can configure the output of the tool using a properties file (you can find more information here).

IOpportunities=Opportunities:fflib_SObjectMocks.SObjectDomain
IOpportunitiesSelector=OpportunitiesSelector:fflib_SObjectMocks.SObjectSelector
IOpportunitiesService=OpportunitiesService

The generated mock classes are contained as inner classes in the Mocks class and also implement the interfaces you define, just as the real classes do. You can choose to add the above Ant tool call into your build scripts, or simply retain the class in your org, refreshing it by re-running the tool whenever your interfaces change.


/* Generated by apex-mocks-generator version 3.1.2 */
@isTest
public class Mocks
{
	public class OpportunitiesService 
		implements IOpportunitiesService
	{
		// Mock implementations of the interface methods...
	}

	public class OpportunitiesSelector extends fflib_SObjectMocks.SObjectSelector 
		implements IOpportunitiesSelector
	{
		// Mock implementations of the interface methods...
	}

	public class Opportunities extends fflib_SObjectMocks.SObjectDomain 
		implements IOpportunities
	{
		// Mock implementations of the interface methods...
	}
}

To make it easier for those wanting to try out the sample application, I have committed the generated Mocks class here, so you can review the full generated code if you wish. Though you can of course edit it, I would recommend against it, as you will not be able to re-run the generator again without losing your changes.

Mocking method responses

Mock classes are dumb by default, so of course you cannot inject them into the upcoming code execution and expect them to work. You have to tell them how to respond when called. They will, however, record when their methods have been called, for you to check or assert later. Using the framework you can tell a mock method what to return, or which exceptions to throw, when the class you're testing calls it.

So in effect you can teach them to emulate their real counterparts. For example, when a Service method calls a Selector method it can return some in-memory records as opposed to having to have them set up on the database. Or when the unit of work is used, it will record method invocations as opposed to writing to the database.

Here is an example of configuring a Selector mock method to return test record data. Note that you also need to inform the Selector mock what type of SObject it relates to; this is also the case when mocking the Domain layer. Finally, be sure to wrap your mock configuration code in calls to startStubbing and stopStubbing. You can read much more about the ApexMocks API here, which resembles the Java Mockito API as well.

// Given
mocks.startStubbing();
List<Opportunity> testOppsList = new List<Opportunity> { 
	new Opportunity(
		Id = fflib_IDGenerator.generate(Opportunity.SObjectType),
		Name = 'Test Opportunity',
		StageName = 'Open',
		Amount = 1000,
		CloseDate = System.today()) };
Set<Id> testOppsSet = new Map<Id, Opportunity>(testOppsList).keySet();
mocks.when(domainMock.sObjectType()).thenReturn(Opportunity.SObjectType);
mocks.when(selectorMock.sObjectType()).thenReturn(Opportunity.SObjectType);
mocks.when(selectorMock.selectByIdWithProducts(testOppsSet)).thenReturn(testOppsList);
mocks.stopStubbing();

TIP: If you want to mock sub-select queries returned from a selector take a look at this.

Injecting your mock implementations

Finally, before you call the method you want to test, ensure you have injected the mock implementations, so that calls to the Application class factory methods return your mock instances rather than the real implementations.

Application.UnitOfWork.setMock(uowMock);
Application.Domain.setMock(domainMock);
Application.Selector.setMock(selectorMock);

Testing your method and asserting the results

Calling the method you want to test is as straightforward as you would expect. If it returns values or modifies parameters you can assert those values. However, the ApexMocks framework also allows you to add behavioural assertions that give further confidence the code you're testing is working the way it should. In this case we want to assert, or verify (to use mocking speak), that the correct information was passed on to the domain and selector classes.

// When
OpportunitiesService.applyDiscounts(testOppsSet, discountPercent);

// Then
((IOpportunitiesSelector) 
	mocks.verify(selectorMock)).selectByIdWithProducts(testOppsSet);
((IOpportunities) 
	mocks.verify(domainMock)).applyDiscount(discountPercent, uowMock);
((fflib_ISObjectUnitOfWork) 
	mocks.verify(uowMock, 1)).commitWork();

TIP: You can verify that method calls have been made, and also how many times. For example, checking a method is only called a specific number of times can help add some level of performance and optimisation checking into your tests.

Summary

The full API for ApexMocks is outside the scope of this blog series, and frankly Paul Hardaker and Jessie Altman have done a much better job; take a look at the full list of documentation links here. Finally, keep in mind my comments at the start of this series: this is not to be seen as a total alternative to traditional Apex test method writing, merely another option to consider when you want a more focused means to test specific methods in more varied ways, without incurring the development and execution costs of having to set up all of your application's data in each test method.
