Andy in the Cloud

From BBC Basic to Force.com and beyond…



MavensMate Templates and Apex Enterprise Patterns

I've been using MavensMate as an alternative IDE for coding on Force.com for the past 6 months and, to be honest, loving it more and more. It receives a steady stream of updates, feels modern and responsive (as much as the underlying platform APIs permit) and is lightweight. I'm also getting to grips with the power of the Sublime Text editor (which hosts MavensMate) and its excellent find-and-replace tools, for example. Recently I've been working on a feature with the help of the author of this fantastic tool, Joe Ferraro.

One of the cool social features of MavensMate is its template engine. Whenever you create an Apex class, Apex trigger, Visualforce Page or Visualforce Component, you can elect to start editing from scratch or pick one of a growing number of templates, ranging from Batch Apex, Controllers and Schedulers to Unit test templates, including a new one from a new friend in the push for BDD on Force.com, Paul Hardaker (also a FinancialForce.com employee, btw). The whole thing is driven by a GitHub repository; if you contribute to that repository (and your pull request is merged) your template is instantly live across the world. Now that's a social IDE!

I set about developing templates for the Apex Enterprise Patterns and quickly bumped into a gotcha! The previous template engine only took one parameter: the name of the Metadata component (an Apex class or trigger name, for example). When creating Domain classes or Selectors, both the name of the class and the underlying custom object are required. After a quick Twitter conversation and GitHub issue, Joe had already nailed the design for a new feature to fix this situation! You can read more about how to use it here.


I’m pleased to report it worked like a charm the first time, great design and very flexible! The templates have now been submitted and successfully merged by Joe into the repository and are now live and ready to use!

  • DomainClass
  • DomainTrigger
  • SelectorClass
  • ServiceClass

The following screenshot shows the template selection prompt; just start typing and it narrows the options down as you go. Press enter and you are prompted for the template parameters. These parameters default to examples defined in the template configuration file; overwrite these with your own, following the examples as a naming convention guide. Press enter again and MavensMate will create your class and present it back to you, ready for you to start editing!

[Screenshot: new class template selection and parameter prompts]

If you want to see a quick zero-to-Domain-class demo, check out my demo video below. Thanks again Joe for such a great community tool and for providing great support for it!



Unit Testing with the Domain Layer

Writing true unit tests that are quite granular can be hard in Apex, especially as the application gets more complex, since there is limited mocking support. You have to create all your test data and move it through stages in its life cycle (by calling other methods) to get to the logic your unit test needs to test. That is of course in itself a valid and definitely needed test approach, and this blog is certainly not aiming to detract from it, but such tests are really more integration or functional tests. Wikipedia has this to say about unit tests…

In computer programming, unit testing is a method by which individual units of source code, sets of one or more computer program modules together with associated control data, usage procedures, and operating procedures are tested to determine if they are fit for use.[1] Intuitively, one can view a unit as the smallest testable part of an application. In procedural programming, a unit could be an entire module, but it is more commonly an individual function or procedure. In object-oriented programming, a unit is often an entire interface, such as a class, but could be an individual method.[2] Unit tests are short code fragments[3] created by programmers or occasionally by white box testers during the development process.

If as a developer you want to write these kinds of low-level unit tests around your Domain classes (part of the Apex Enterprise Patterns), perhaps trying out a broader number of data input scenarios against your validation or defaulting code, it's hard using the conventional DML approach for the same reasons. Typically the result is that you compromise on the tests you really want to write. The other downside is that eventually your test class starts to take longer and longer to execute, as the DML overhead is expensive. Thus most developers I've spoken to, including myself, start to comment out test methods to speed things up while they work on a specific test method. Or perhaps you are a TDD fan, where incremental development and unit test writing is important.

Stephen Willcock and I often discuss this balance; he is a big fan of DML-less testing and structuring code for testability, having presented a few times at Dreamforce on the topic. There is a framework that's been in the fflib_SObjectDomain base class for a while now that presents its own take on this in respect to Domain layer unit testing. It allows you to emulate DML statements and use the same base class trigger handler method to invoke your Domain methods in the correct sequence from tests, while also performing more granular assertions, without actually doing any DML.

As most of you who are using the patterns know, the Apex Trigger code is tiny, one line…

fflib_SObjectDomain.triggerHandler(Opportunities.class);

The mocking approach to DML statements used by the Domain layer here leverages this line of code directly in your tests (emulating the Apex Trigger invocation), without having to execute DML or set up any dependent database state the record might not need for the given test scenario. But first you must set up your mock database with the records you want to test against.

fflib_SObjectDomain.Test.Database.onInsert(
   new Opportunity[] { new Opportunity ( Name = 'Test', Type = 'Existing Account' ) } );

The next thing you'll want to do is use a slightly different convention when setting errors on your records or fields; this allows you to assert which errors have been raised. This convention also improves on the less than ideal approach of try/catch and doing a .contains('The droids I'm looking for!') on the exception text for the message you are looking for. So instead of doing this in your onValidate…

opp.AccountId.addError(
   'You must provide an Account for Opportunities for existing Customers.');

You use the error method instead, which registers the error in a list that can be asserted later in the test. Like the ApexPages.getMessages() method, it is request scoped, so it incrementally accumulates all errors raised on all Domain objects executed during the test.

opp.AccountId.addError(
   error('You must provide an Account for Opportunities for existing Customers.',
            opp, Opportunity.AccountId) );

The full test then looks like this…

@IsTest
private static void testInsertValidationFailedWithoutDML()
{
    // Insert data into mock database
    Opportunity opp = new Opportunity ( Name = 'Test', Type = 'Existing Account' );
    fflib_SObjectDomain.Test.Database.onInsert(new Opportunity[] { opp } );

    // Invoke Trigger handler and thus appropriate domain methods
    fflib_SObjectDomain.triggerHandler(Opportunities.class);

    // Assert results
    System.assertEquals(1,
        fflib_SObjectDomain.Errors.getAll().size());
    System.assertEquals('You must provide an Account for Opportunities for existing Customers.',
        fflib_SObjectDomain.Errors.getAll()[0].message);
    System.assertEquals(Opportunity.AccountId,
        ((fflib_SObjectDomain.FieldError)fflib_SObjectDomain.Errors.getAll()[0]).field);
}

NOTE: The above error assertion approach still works when you're using the classic DML and Apex Trigger approach to invoking your Domain class handlers; just make sure to utilise the error method convention as described above.

There are also methods on fflib_SObjectDomain.Test.Database to emulate other DML operations, such as onUpdate and onDelete. As I said at the start of this blog, this is really not meant as an alternative to your normal testing, but it might help you get a little more granular in your testing, allowing for more diverse use cases, not to mention speeding up test execution!
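For example, an update scenario might be emulated as follows. This is a sketch assuming the mock database's onUpdate method takes the updated records plus a map of the old versions (mirroring Trigger.oldMap), and that the Opportunities Domain class enforces the same rule on update; the hard-coded Id simply stands in for a record that was never really inserted.

@IsTest
private static void testUpdateValidationFailedWithoutDML()
{
    // Emulate an existing record and an updated version of it, no DML required
    Id fakeOppId = '006000000000001';
    Opportunity oldOpp = new Opportunity(Id = fakeOppId, Name = 'Test', Type = 'New Account');
    Opportunity newOpp = new Opportunity(Id = fakeOppId, Name = 'Test', Type = 'Existing Account');

    // Register the update with the mock database and invoke the trigger handler
    fflib_SObjectDomain.Test.Database.onUpdate(
        new Opportunity[] { newOpp }, new Map<Id, SObject> { fakeOppId => oldOpp });
    fflib_SObjectDomain.triggerHandler(Opportunities.class);

    // Assert against the same Errors list as the insert example above
    System.assertEquals(1, fflib_SObjectDomain.Errors.getAll().size());
}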



Spring’14 Visualforce Remote Objects Introduction

Salesforce have provided further support for JavaScript in the upcoming Spring'14 release, with a new flavour of the popular Visualforce Remoting facility: Visualforce Remote Objects, a pilot feature I have been trying out in a pre-release org. Its aim is effectively to make performing database operations in JavaScript, like create, read, update and delete, as easy as possible without the need for Apex and without consuming the org's daily API limits. This blog introduces the feature and contrasts it with its very much still relevant Visualforce Remoting brother.

Consider a master detail relationship between WorkOrder__c and WorkOrderLineItem__c. The first thing you need to do is declare your intent to access these objects via JavaScript on your Visualforce page with some new tags.

	<apex:remoteObjects >
	    <apex:remoteObjectModel name="WorkOrder__c" fields="Id,Name,AccountName__c,Cost__c"/>
	    <apex:remoteObjectModel name="WorkOrderLineItem__c" fields="Id,Name,Description__c,Hours__c,WorkOrder__c"/>
	</apex:remoteObjects>

The following JavaScript can now be used to access the JavaScript objects the above automatically injects into the page (note that, while not shown, there is further control over object and field naming; I used the defaults here).
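For example, the naming-override (alias) support lets you rename objects and fields on the JavaScript side. The sketch below uses the jsShorthand attribute as documented for Remote Objects; since the feature was in pilot at the time of writing, attribute names may differ in your release.

	<apex:remoteObjects >
	    <apex:remoteObjectModel name="WorkOrder__c" jsShorthand="WorkOrder" fields="Id,Name,Cost__c">
	        <apex:remoteObjectField name="AccountName__c" jsShorthand="accountName"/>
	    </apex:remoteObjectModel>
	</apex:remoteObjects>

JavaScript would then refer to new SObjectModel.WorkOrder() and the accountName shorthand rather than the full API names.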

It is an async API, so you provide a callback function to handle the result of each operation (create, update, delete and select are supported); it does not throw exceptions. In the example below, if the insert of the Work Order master record is successful, the child is then inserted. Note that the result parameter contains the Id of the inserted record.

		function doSomethingJS(answer)
		{
			// Create work order
			var workOrder = new SObjectModel.WorkOrder__c();
			workOrder.set('AccountName__c','Hitchhikers.com');
			workOrder.set('Cost__c', answer * 100);
			workOrder.create(function(error, result, event)
				{
					// Success?
					if(error == null)
					{
						// Create work order line item
						var workOrderLineItem = new SObjectModel.WorkOrderLineItem__c();
						workOrderLineItem.set('Description__c', 'Answering the question');
						workOrderLineItem.set('Hours__c', answer);
						workOrderLineItem.set('WorkOrder__c', result[0]);
						workOrderLineItem.create(function(error, result, event)
							{
								// Errors?
								if(error!=null)
									alert(error);
								else
									alert('Success');
							} );
						return;
					}
					// Display error
					alert(error);
				} );
		}

As per the documentation, this means you no longer need an Apex controller to do this. You can also query using this object model, as per the pre-release documentation's example querying a Warehouse object. However, before we cast aside our Apex thoughts, let's look at what the above would look like implemented via a Remote Action.

		function doSomethingApex(answer)
		{
			// Create work order and line item via Apex
			Visualforce.remoting.Manager.invokeAction(
				'{!$RemoteAction.RemoteObjectDemoController.doSomething}',
				answer,
				function(result, event){
					alert(event.status ? 'Success' : event.message);
				});
		}

The following Apex code implements the remote action.

	@RemoteAction
	public static void doSomething(Integer answer)
	{
		WorkOrder__c workOrder = new WorkOrder__c();
		workOrder.AccountName__c = 'Hitchhikers.com';
		workOrder.Cost__c = answer * 100;
		insert workOrder;
		WorkOrderLineItem__c workOrderLineItem = new WorkOrderLineItem__c();
		workOrderLineItem.Description__c = 'Answering the question';
		workOrderLineItem.Hours__c = answer;
		workOrderLineItem.WorkOrder__c = workOrder.Id;
		insert workOrderLineItem;
	}

On the face of it, it may appear both accomplish the same thing, but there is a very important and critical architectural difference between them. To help illustrate this I created a page with a button to invoke each of these options.

I also created a strategically placed Validation Rule on the Work Order Line Item, one which fails the insert if anything other than 42 is entered in the UI.

So, given the validation rule in place, let's perform a test.

  1. Ensure there are no Work Order records present
  2. Enter an invalid value, say 32, click the button and observe the expected error
  3. Correct the value to 42, click the button again and observe the outcome on the database.
  4. The expected result is one Work Order record with a single Work Order Line Item record.

Testing the ‘Do Something (Apex)’ Button

After going through the test above, the button initially gives the error as expected…

[Screenshot: validation rule error displayed to the user]

When the value is corrected and button pressed again, the result is this…

[Screenshot: one Work Order record with a single Work Order Line Item]

The test passed.

Testing the ‘Do Something (JavaScript)’ Button

Going through the tests again, this button initially gives the error as expected…

[Screenshot: validation rule error displayed to the user]

When the value is corrected and button pressed again, the result is this…

[Screenshot: two Work Order records, one without any Work Order Line Items]
While Visualforce Remote Objects technically performed as I believe Salesforce intended, this functional test's expectation failed: we have two Work Order records, one with no Work Order Lines and one that is what we expected. So why did this additional rogue Work Order record get created; what did we do wrong?

The answer lies in the scope of the database transaction created…

  • In the Visualforce Remoting @RemoteAction use case, the platform automatically wraps a transaction around the whole of the Apex code and rolls back everything if an error occurs; you can read more about this here.
  • In the Visualforce Remote Objects use case, the database transaction is only around each individual database operation, not around the whole JavaScript function. Hence, by the time the Work Order Line Item fails to insert, the Work Order has already been committed to the database. The user then corrects their mistake and tries again, and we end up with two Work Orders, not one as the user expected.

Since Apex transaction management is so transparent most of the time (by design), it's likely that the same assumption might be made of Remote Objects; however, as you can see, it's not a valid one. Those of you who know my core thoughts on patterns will also be thinking something else at this point, one that presents a potentially even more compelling reason to be watchful over what logic you implement this way…

Separation of Concerns

As you may have gathered by now if you've been reading my blog for the last year, my other passion is Apex Enterprise Patterns. A key foundation of these is Separation of Concerns, or SOC for short. SOC sees us layer our code so that aspects such as business logic are located in the same place, easily accessible and reusable (not to mention easily testable) from other existing and future parts of our applications, such as APIs, Batch Apex etc.

While the above example is not that complex, it illustrates the point at which code in your JavaScript layer starts to become business logic, and thus something you should ideally (if not solely for the transactional reason above) consider keeping in your Service Layer and accessing via JavaScript Remoting's @RemoteAction.
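As a rough illustration, the earlier doSomething remote action could become a thin wrapper over a Service class. WorkOrdersService and its method are hypothetical here, simply naming where the logic would live; the point is the shape, not the specific API.

	@RemoteAction
	public static void doSomething(Integer answer)
	{
		// Business logic lives in the Service layer, reusable from Batch Apex,
		// APIs and tests; the platform still wraps the request in one transaction
		WorkOrdersService.createWorkOrderWithLineItem(answer);
	}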

Summary

Despite this new feature, the use of Visualforce Remoting with @RemoteAction Apex should still very much factor in your decision making. Nor, despite the issue above, should we necessarily let the lack of transaction management count against our use of Visualforce Remote Objects.

For sure this will become the best and lightest way to perform rapid client-side querying without impacting API limits, and I am sure the alias feature (see docs) will be welcomed by those preferring more elegant AngularJS or other bindings. All very nice! Furthermore, if you are developing client logic that is essentially doing simple record editing, then by all means let your Apex Triggers (Domain layer) do their job and enforce the validity of those records.

Just keep in mind, when you're starting to write more and more JavaScript code that orchestrates the updating, inserting or deleting of a set of related records, you really need to be sure that you and your testers understand the transaction scope differences between the two approaches described here, or switch over to Apex and let the platform manage the transaction scope for you. Finally, it's worth keeping in mind that if you don't have a client-side automated testing strategy, you are automatically adding manual testing overhead to your backlog.

The full source code for this blog can be found here.

UPDATE: Since publishing this blog, the Salesforce documentation team have written an excellent additional topic covering some best practices and outlining the transactional differences described above. It's really great to see this type of content appearing in the standard Salesforce developer documentation, thank you Salesforce!

Other Notes and Observations

Here are some final points of note; given this is still a pilot, hopefully they are of use to Salesforce as feedback.

  • It does buffer requests to the server like Visualforce Remoting, very cool!
  • It does not complain when you set the wrong field or get the field name wrong, unlike SObject.put and SObject.get, which do. Maybe that is to be expected, though since we gave it the field list a warning might have been nice?
  • It differs slightly from dynamic Apex: there SObject.put is used, here WorkOrder__c.set is used.
  • Success is expressed by 'error' being null, not quite what I expected, and different from Visualforce Remoting.
  • It does not implement field defaulting.
  • Errors need to be handled by callbacks (as per Visualforce Remoting), though by default errors are not emitted to the JavaScript console as they are with Visualforce Remoting.
  • References to fields via the apex:remoteObjectModel fields attribute do not seem to surface as dependencies on those fields, which would be good given how soft the references within the JavaScript are.



Ideas for Apex Enterprise Patterns Dreamforce 2013 Session!


Update: Dreamforce is over for another year! Thanks to everyone who supported me and came along to the session. Salesforce have now uploaded a recording of the session here and you can find the slides here.

As part of this year's Dreamforce 2013 event I will once again be running a session on Apex Enterprise Patterns, following up on my recent series of developer.force.com articles. Here is the current abstract for the session, comments welcome!

Building Strong Foundations: Apex Enterprise Patterns "Any structure expected to stand the test of time and change needs a strong foundation! Software is no exception; engineering your code to grow in a stable and effective way is critical to your ability to rapidly meet the growing demands of users, new features, technologies and platform features. You will take away architect-level design patterns to use in your Apex code to keep it well factored, easier to maintain and obeying platform best practices. Based on a Force.com interpretation of Martin Fowler's Enterprise Architecture Application patterns and the practice of Separation of Concerns." (Draft)

I've recently started to populate a dedicated GitHub repository that contains only the working sample code (with the library code in a separate repo), so that I can build out a real working sample application illustrating the patterns in action in a practical way. It already covers a number of features and use cases, such as…

  • Layering Apex logic by applying Separation of Concerns
  • Visualforce controllers and the Service Layer
  • Triggers, validation, defaulting and business logic encapsulation via Domain layer
  • Applying object orientated programming inheritance and interfaces via Domain layer
  • Managing DML and automatic relationship ‘stitching’ when inserting records via Unit Of Work pattern
  • Factoring, encapsulating and standardising SOQL query logic via Selector layer

The following are ideas I’ll be expanding on in the sample application in preparation for the session…

  • Batch Apex and Visualforce Remoting (aka JavaScript callers) and the Service Layer
  • Apex testing without SOQL and DML via the Domain Layer
  • Exposing a custom application API, such as REST API or Apex API via Service Layer
  • Reuse and testing SOQL query logic in Batch Apex context via Selector Layer
  • Rich client MVC frameworks such as AngularJS and Server Side SOC

What do you think and what else would you like to see and discuss in this session?

Feel free to comment on this blog below, tweet me, log it on Github or however else you can get in touch.



Apex Enterprise Patterns – Selector Layer

A major theme in this series on Apex Enterprise Patterns has been ensuring a good separation of concerns, making large complex enterprise level code bases more clear, self documenting, adaptable and robust to change, be that refactoring or functional evolution over time.

In the past articles in this series we have seen Service logic that orchestrates how functional sub-domains work together to form the key business processes your application provides, and Domain logic that breaks down the distinct behaviour of each custom object in your application by bringing the power of Object Orientated Programming into your code.

This article introduces the Selector, a layer of code that encapsulates logic responsible for querying information from your custom objects and feeding it into your Domain and Service layer code, as well as Batch Apex jobs.

Selectors

You can read the rest of this article on the wiki.developerforce.com here, enjoy!
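As a taster before you click through, a minimal Selector built on the accompanying library's fflib_SObjectSelector base class looks roughly like this. It's a sketch; the field list and the selectById method are simply illustrative of the conventions the article describes.

public with sharing class OpportunitiesSelector extends fflib_SObjectSelector
{
	public Schema.SObjectType getSObjectType()
	{
		// Tells the base class which object this Selector is responsible for
		return Opportunity.SObjectType;
	}

	public List<Schema.SObjectField> getSObjectFieldList()
	{
		// The standard set of fields this Selector always queries
		return new List<Schema.SObjectField> {
			Opportunity.Id,
			Opportunity.Name,
			Opportunity.StageName };
	}

	public List<Opportunity> selectById(Set<Id> idSet)
	{
		// The base class builds and runs the SOQL using the field list above
		return (List<Opportunity>) selectSObjectsById(idSet);
	}
}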

Series Summary

This completes the current series of articles on Apex Enterprise Patterns. The main goal was to express a need for Separation of Concerns, to help make your Apex logic on the platform live longer and maintain its stability, by presenting three of the main patterns I feel help deliver this. If nothing else, I'd like to think you have started to think more about SOC, even if it's just class naming. Hopefully beyond that, if you use some of the articles' various base classes, great, please feel free to contribute to them. If you don't, or decide to skip some or use a different pattern implementation, that's also fine; that's what patterns are all about. If you do, I'd love to hear about other implementations.

Martin Fowler’s excellent patterns continue to be referenced on platforms, new and old. I’ve very much enjoyed adapting them to this platform and using them to effect better adoption of Force.com platform best practices along the way. Thanks to everyone for all the great feedback, long may your code live!



Batch Worker, Getting more done with less work…

Batch Apex has been around on the platform for a while now, but I think it's fair to say there is still a lot of mystery around it, and with that a few baked-in assumptions. One such assumption I see being made is that it's driven by the database; specifically, that the records within the database determine the work to be done.

As such, if you have some work you need to get done that won't fit in the standard governors and it's not immediately database driven, Batch Apex may get overlooked in favour of @future, which on the surface feels like a better fit, as its design is not database linked in any way. Your code is just an annotation away from getting the additional power it needs! So why bother with the complexities of Batch Apex?

Well, for starters, Batch Apex gives you an Id to trace the work being done, and thus the key to improving the user experience while the user waits. Secondly, if any of the parameters to such methods are lists or arrays, you're already having to consider scalability again. Yes, you say, but it's more fiddly than @future, isn't it?

In this blog I'm going to explore a cool feature of Batch Apex that often gets overlooked: using it to implement a worker pattern, giving you the kind of usability @future offers, with the additional scalability and traceability of Batch Apex, without all the work. If you're not interested in the background, feel free to skip to the Batch Worker section below!

IMPORTANT NOTE: The alternative approach described here is not designed as a replacement for using Batch Apex against the database with a QueryLocator. Using QueryLocator gives access to 50m records, whereas the Iterator usage gives only 50k. Thus the use cases for the Batch Worker are more aligned with smaller jobs, perhaps driven by end user selections, or stitching together complex chunks of work.

Well I didn’t know that! (#WIDKT)

First let's review something you may not have realised about implementing Batch Apex. The start method can return either a QueryLocator or something called an Iterable. You can implement your own iterators, but what is actually not that clear is that Apex collections/lists implement Iterable by default!

Iterable<String> i = new List<String> { 'A', 'B', 'C' };

With this knowledge, implementing Batch Apex to iterate over a list is now as simple as this…

public with sharing class SimpleBatchApex implements Database.Batchable<String>
{
	public Iterable<String> start(Database.BatchableContext BC)
	{
		return new List<String> { 'Do something', 'Do something else', 'And something more' };
	}

	public void execute(Database.BatchableContext info, List<String> strings)
	{
		// Do something really expensive with the string!
		String myString = strings[0];
	}

	public void finish(Database.BatchableContext info) { }
}

// Process the String's one by one each with its own governor context
Id jobId = Database.executeBatch(new SimpleBatchApex(), 1);

The second parameter of the Database.executeBatch method determines how many items from the list are passed to each execute method invocation made by the platform. To get the maximum governors per item, and match that of a single @future call, this is set to 1. We can also implement Batch Apex with the generic data type known as Object, which allows you to process different types or actions in one job; more about this later.

public with sharing class GenericBatchApex implements Database.Batchable<Object>
{
	public Iterable<Object> start(Database.BatchableContext BC)
	{
		// Return the list of work items to process (these can be a mix of types)
		return new List<Object>();
	}

	public void execute(Database.BatchableContext info, List<Object> listOfAnything) { }

	public void finish(Database.BatchableContext info) { }
}

A BatchWorker Base Class

The above simplifications are good, but I wanted to further model the type of flexibility @future gives without dealing with the Batch Apex mechanics each time. In designing the BatchWorker base class used in this blog I wanted to make it as easy as possible to use. I'm a big fan of the fluent API model, and if you look closely you'll see elements of that here as well. You can view the full source code for the base class here; it's quite a small class, extending the concepts above to make a more generic Batch Apex implementation.

First let's take another look at the string example above, but this time using the BatchWorker base class.

public with sharing class MyStringWorker extends BatchWorker
{
	public override void doWork(Object work)
	{
		// Do something really expensive with the string!
		String myString = (String) work;
	}
}

// Process the String's one by one each with its own governor context
Id jobId =
	new MyStringWorker()
            .addWork('Do something')
            .addWork('Do something else')
            .addWork('And something more')
            .run()
            .BatchJobId;

Clearly not everything is as simple as passing a few strings; after all, @future methods can take parameters of varying types. The following is a more complex example showing a ProjectWorker class. Imagine this is part of a Visualforce controller method where the user is presented with a selection of projects to process within a date range.

	// Create worker to process the project selection
	ProjectWorker projectWorker = new ProjectWorker();
		
	// Add the work to the project worker
	for(SelectedProject selectedProject : selectedProjects)		
		projectWorker.addWork(startDate, endDate, selectedProject.projectId);
			
	// Start the worker and retain the job Id to provide feedback to the user
	Id jobId = projectWorker.run().BatchJobId;		

Here is how the ProjectWorker class has been implemented; once again it extends the BatchWorker class, but this time it provides its own addWork method, which takes the parameters as you would normally describe them and internally wraps them up in a worker data class. The caller of the class, as you've seen above, is not aware of this.

public with sharing class ProjectWorker extends BatchWorker
{	
	public ProjectWorker addWork(Date startDate, Date endDate, Id projectId)
	{
		// Construct a worker object to wrap the parameters		
		return (ProjectWorker) super.addWork(new ProjectWork(startDate, endDate, projectId));
	}
	
	public override void doWork(Object work)
	{
		// Parameters
		ProjectWork projectWork = (ProjectWork) work;
		Date startDate = projectWork.startDate;
		Date endDate = projectWork.endDate;
		Id projectId = projectWork.projectId;		
		// Do the work
		// ...
	}
	
	private class ProjectWork
	{
		public ProjectWork(Date startDate, Date endDate, Id projectId)
		{
			this.startDate = startDate;
			this.endDate = endDate;
			this.projectId = projectId;
		}
		
		public Date startDate;
		public Date endDate;
		public Id projectId;
	}
}

As a final example, recall the fact that Batch Apex can process a list of the generic Object data type. The BatchWorker base class uses this to permit the varied implementations above. It can also be used to create a worker class that can do more than one thing, the equivalent of implementing two @future methods, except that it's managed as one job.

public with sharing class ProjectMultiWorker extends BatchWorker 
{
	// ...

	public override void doWork(Object work)
	{
		if(work instanceof CalculateCostsWork)
		{
			CalculateCostsWork calculateCostsWork = (CalculateCostsWork) work;
			// Do work 
			// ...					
		}
		else if(work instanceof BillingGenerationWork)
		{
			BillingGenerationWork billingGenerationWork = (BillingGenerationWork) work;
			// Do work
			// ...		
		}
	}
}

// Process the selected Project 
Id jobId = 
	new ProjectMultiWorker()
		.addWorkCalculateCosts(System.today(), selectedProjectId)
		.addWorkBillingGeneration(System.today(), selectedProjectId, selectedAccountId)
		.run()
		.BatchJobId;

Summary

Hopefully I've provided some insight into new ways to access the power and scalability of Batch Apex, for use cases you may not have previously considered or for which you perhaps used the less flexible @future annotation. Keep in mind that using Batch Apex with iterators does reduce the number of items it can process to 50k, as opposed to the 50m when using a database query locator. At the end of the day, if you have more than 50k work items you're probably wanting to go down the database driven route anyway. I've shared all the code used in this article, and some I've not shown, in this Gist.

Post Credits
Finally, I'd like to give a nod to a past work associate of mine, Tony Scott, who has taken this type of approach down a similar path, but added process control semantics around it. Check out his blog here!



Managing your DML and Transactions with a Unit Of Work


A utility class I briefly referenced in this article was SObjectUnitOfWork. I promised in that article to discuss it in more detail; that time has come! Its main goals are:

  • Optimise DML interactions with the database
  • Provide transactional control
  • Simplify complex code that often spends a good portion of its time managing bulkification and 'plumbing' record relationships together.

In this blog I'm going to show you two approaches to creating a reasonably complex set of records on the platform, comparing the pros and cons of the traditional approach vs the Unit Of Work approach.

Complex Data Creation and Relationships

Let's first look at a sample piece of code to create a bunch of Opportunity records with related Product, PricebookEntry and eventually OpportunityLineItem records. It's designed to have a bit of a variable element to it; the number of lines per Opportunity, and thus Products, varies depending on which of the 10 Opportunities is being processed. The traditional approach is to do this a stage at a time, creating and inserting things in the correct dependency order and associating child and related records via the Ids generated by the previous inserts. Lists are our friends here!

			List<Opportunity> opps = new List<Opportunity>();
			List<List<Product2>> productsByOpp = new List<List<Product2>>();
			List<List<PricebookEntry>> pricebookEntriesByOpp = new List<List<PricebookEntry>>();
			List<List<OpportunityLineItem>> oppLinesByOpp = new List<List<OpportunityLineItem>>();
			for(Integer o=0; o<10; o++)
			{
				Opportunity opp = new Opportunity();
				opp.Name = 'NoUoW Test Name ' + o;
				opp.StageName = 'Open';
				opp.CloseDate = System.today();
				opps.add(opp);
				List<Product2> products = new List<Product2>();
				List<PricebookEntry> pricebookEntries = new List<PricebookEntry>();
				List<OpportunityLineItem> oppLineItems = new List<OpportunityLineItem>();
				for(Integer i=0; i<o+1; i++)
				{
					Product2 product = new Product2();
					product.Name = opp.Name + ' : Product : ' + i;
					products.add(product);
					PricebookEntry pbe = new PricebookEntry();
					pbe.UnitPrice = 10;
					pbe.IsActive = true;
					pbe.UseStandardPrice = false;
					pbe.Pricebook2Id = pb.Id;
					pricebookEntries.add(pbe);
					OpportunityLineItem oppLineItem = new OpportunityLineItem();
					oppLineItem.Quantity = 1;
					oppLineItem.TotalPrice = 10;
					oppLineItems.add(oppLineItem);
				}
				productsByOpp.add(products);
				pricebookEntriesByOpp.add(pricebookEntries);
				oppLinesByOpp.add(oppLineItems);
			}
			// Insert Opportunities
			insert opps;
			// Insert Products
			List<Product2> allProducts = new List<Product2>();
			for(List<Product2> products : productsByOpp)
			{
				allProducts.addAll(products);
			}
			insert allProducts;
			// Insert Pricebook Entries
			Integer oppIdx = 0;
			List<PricebookEntry> allPricebookEntries = new List<PricebookEntry>();
			for(List<PricebookEntry> pricebookEntries : pricebookEntriesByOpp)
			{
				List<Product2> products = productsByOpp[oppIdx++];
				Integer lineIdx = 0;
				for(PricebookEntry pricebookEntry : pricebookEntries)
				{
					pricebookEntry.Product2Id = products[lineIdx++].Id;
				}
				allPricebookEntries.addAll(pricebookEntries);
			}
			insert allPricebookEntries;
			// Insert Opportunity Lines
			oppIdx = 0;
			List<OpportunityLineItem> allOppLineItems = new List<OpportunityLineItem>();
			for(List<OpportunityLineItem> oppLines : oppLinesByOpp)
			{
				List<PricebookEntry> pricebookEntries = pricebookEntriesByOpp[oppIdx];
				Integer lineIdx = 0;
				for(OpportunityLineItem oppLine : oppLines)
				{
					oppLine.OpportunityId = opps[oppIdx].Id;
					oppLine.PricebookEntryId = pricebookEntries[lineIdx++].Id;
				}
				allOppLineItems.addAll(oppLines);
				oppIdx++;
			}
			insert allOppLineItems;

Lists and Maps (if you're linking existing data) are important tools in this process; much like SOQL, it's bad news to do DML in loops, as you only get 150 DML operations per request before the governors blow. So we must index and list items within the various loops to ensure we are following best practice for bulkification of DML as well. If you're using External Id fields you can avoid some of this, but too much use of those comes at a cost as well, and you're not always able to add them to all objects, so traditionally the above is pretty much the most bulkified way of inserting Opportunities.

Same again, but with a Unit Of Work…

Now let's take a look at the same sample using the Unit Of Work approach, capturing the work and committing it all to the database in one operation. In this example, notice first of all that it's a lot smaller, and hopefully easier to see what the core purpose of the logic is. Most notable is that there are no Maps at all, and also no direct DML operations, such as insert.

Instead, the code registers the need for an insert with the unit of work, for it to perform later, via the registerNew method calls. The unit of work keeps track of the lists of objects and also provides a kind of 'stitching' service for the code; see the registerNew and registerRelationship calls for the PricebookEntry and OpportunityLineItem records. Because it is given a list of object types when it's constructed (via MY_SOBJECTS) and these are in dependency order, it knows to insert the records it is given in that order and then follow up populating the indicated relationship fields as it goes. The result, I think, both makes the code more readable and keeps it focused on the task at hand.


			SObjectUnitOfWork uow = new SObjectUnitOfWork(MY_SOBJECTS);
			for(Integer o=0; o<10; o++)
			{
				Opportunity opp = new Opportunity();
				opp.Name = 'UoW Test Name ' + o;
				opp.StageName = 'Open';
				opp.CloseDate = System.today();
				uow.registerNew(opp);
				for(Integer i=0; i<o+1; i++)
				{
					Product2 product = new Product2();
					product.Name = opp.Name + ' : Product : ' + i;
					uow.registerNew(product);
					PricebookEntry pbe = new PricebookEntry();
					pbe.UnitPrice = 10;
					pbe.IsActive = true;
					pbe.UseStandardPrice = false;
					pbe.Pricebook2Id = pb.Id;
					uow.registerNew(pbe, PricebookEntry.Product2Id, product);
					OpportunityLineItem oppLineItem = new OpportunityLineItem();
					oppLineItem.Quantity = 1;
					oppLineItem.TotalPrice = 10;
					uow.registerRelationship(oppLineItem, OpportunityLineItem.PricebookEntryId, pbe);
					uow.registerNew(oppLineItem, OpportunityLineItem.OpportunityId, opp);
				}
			}
			uow.commitWork();

The MY_SOBJECTS variable is set up as follows; typically you would probably just have one of these for your whole app.

	// SObjects (in order of dependency)
	private static List<Schema.SObjectType> MY_SOBJECTS =
		new Schema.SObjectType[] {
			Product2.SObjectType,
			PricebookEntry.SObjectType,
			Opportunity.SObjectType,
			OpportunityLineItem.SObjectType };

Looking into the Future with registerNew and registerRelationship methods

These two methods on the SObjectUnitOfWork class allow you to see into the future, by letting you register relationships without knowing the Ids of the records you're inserting (also via the unit of work). As you can see in the above example, it's a matter of providing the relationship field and the related record. Even if the related record does not yet have an Id, by the time the unit of work has completed inserting the dependent records for you, it will. At that point, it will set the Id on the indicated field for you, before inserting the record.

Delegating this type of logic to the unit of work avoids you having to manage lists and maps to associate related records together, and thus keeps the focus on the core goal of the logic.

Note: If you have cyclic dependencies in your schema, you will have to either use two separate unit of work instances or simply handle those relationships directly with DML.
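For example, a two-pass approach might look roughly like this. Parent__c, Child__c and the two lookup fields here are hypothetical, purely to show the shape of the workaround.

			// Pass 1: insert both records, leaving the cyclic lookup on the parent blank
			Parent__c parent = new Parent__c(Name = 'Parent');
			Child__c child = new Child__c(Name = 'Child');
			SObjectUnitOfWork uow = new SObjectUnitOfWork(MY_SOBJECTS);
			uow.registerNew(parent);
			uow.registerNew(child, Child__c.Parent__c, parent);
			uow.commitWork();

			// Pass 2: both Ids now exist, so a second unit of work can complete the cycle
			SObjectUnitOfWork uow2 = new SObjectUnitOfWork(MY_SOBJECTS);
			parent.FavouriteChild__c = child.Id;
			uow2.registerDirty(parent);
			uow2.commitWork();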

Deleting and Updating Records with a Unit Of Work…

This next example shows how the unit of work can be used in an editing scenario. Suppose that some logic has taken a bunch of OpportunityLineItems and grouped them; you need to delete the line items no longer required, insert the new grouped line and also update the Opportunity to indicate the process has taken place.

			// Consolidate Products on the Opportunities
			SObjectUnitOfWork uow = new SObjectUnitOfWork(MY_SOBJECTS);
			for(Opportunity opportunity : opportunities)
			{
				// Group the lines
				Map<Id, List<OpportunityLineItem>> linesByGroup = new Map<Id, List<OpportunityLineItem>>();
				// Grouping logic
				// ...
				// For groups with more than one line, delete those lines and create a new consolidated one
				for(List<OpportunityLineItem> linesForGroup : linesByGroup.values())
				{
					// More than one line with this product?
					if(linesForGroup.size()>1)
					{
						// Delete the duplicate product lines and calculate new quantity total
						Decimal consolidatedQuantity = 0;
						for(OpportunityLineItem lineForProduct : linesForGroup)
						{
							consolidatedQuantity += lineForProduct.Quantity;
							uow.registerDeleted(lineForProduct);
						}
						// Create new consolidated line
						OpportunityLineItem consolidatedLine = new OpportunityLineItem();
						consolidatedLine.Quantity = consolidatedQuantity;
						consolidatedLine.UnitPrice = linesForGroup[0].UnitPrice;
						consolidatedLine.PricebookEntryId = linesForGroup[0].PricebookEntry.Id;
						uow.registerNew(consolidatedLine, OpportunityLineItem.OpportunityId, opportunity);
						// Note the last consolidation date
						opportunity.Description = 'Consolidated on ' + System.today();
						uow.registerDirty(opportunity);
					}
				}
			}
			uow.commitWork();

Transaction management and the commitWork method

Database transactions are something you rarely have to concern yourself with in Apex… or are they? Consider the sample code below, in which there is a deliberate bug (the divide by zero near the end). When the user presses the button associated with this controller method, the error occurs, is caught, and is displayed to the user via the apex:pageMessages component. If the developer did not do this, the error would be unhandled and the standard Salesforce white page with the error text would be shown to the user, hardly a great user experience.

	public PageReference doSomeWork()
	{
		try
		{
			Opportunity opp = new Opportunity();
			opp.Name = 'My New Opportunity';
			opp.StageName = 'Open';
			opp.CloseDate = System.today();
			insert opp;
			Product2 product = new Product2();
			product.Name = 'My New Product';
			insert product;
			// Insert pricebook
			PricebookEntry pbe = new PricebookEntry();
			pbe.UnitPrice = 10;
			pbe.IsActive = true;
			pbe.UseStandardPrice = false;
			pbe.Pricebook2Id = [select Id from Pricebook2 where IsStandard = true].Id;
			pbe.Product2Id = product.Id;
			insert pbe;
			// Fake an error
			Integer x = 42 / 0;
			// Insert opportunity lines...
			OpportunityLineItem oppLineItem = new OpportunityLineItem();
			oppLineItem.Quantity = 1;
			oppLineItem.TotalPrice = 10;
			oppLineItem.PricebookEntryId = pbe.Id;
			insert oppLineItem;
		}
		catch (Exception e)
		{
			ApexPages.addMessages(e);
		}
		return null;
	}

However, using try/catch circumvents the standard Apex transaction rollback during error conditions: “Only when all the Apex code has finished running and the Visualforce page has finished running, are the changes committed to the database. If the request does not complete successfully, all database changes are rolled back.” Catching the exception therefore allows the request to complete successfully, so the Apex runtime commits the records created in the lead-up to the error. This results in the above code leaving an Opportunity with no lines on the database.

The solution to this problem is to utilise a Savepoint, as described in the standard Salesforce documentation. To avoid the developer having to remember this, the SObjectUnitOfWork commitWork method creates a Savepoint and manages the rollback to it should any errors occur. After doing so, it rethrows the error so that the caller can perform its own error handling and reporting. This gives consistent behaviour to database updates regardless of how errors are handled by the controlling logic.
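Internally, this amounts to the standard Savepoint idiom, roughly as follows (a behavioural sketch, not the exact library code):

	Savepoint sp = Database.setSavepoint();
	try
	{
		// Insert, update and delete the registered records in dependency order
		// ...
	}
	catch (Exception e)
	{
		// Undo any partially applied work, then rethrow for the caller to handle
		Database.rollback(sp);
		throw e;
	}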

Note: Regardless of whether you use the commitWork method or manually code your Savepoint logic, review the statement in the Salesforce documentation regarding Ids.

Summary

As you can see between the two samples in this blog, there is a significant reduction, of over half the source lines, when using the unit of work. Of course the SObjectUnitOfWork class does have its own processing to perform; because it is a generic library, it's never going to be as optimal as code written by hand specifically for the use case, as per the first example.


When writing Apex code, optimisation of statements is a consideration (consider Batch Apex with large volumes). However, so are other things, such as queries, database manipulation and balancing overall code complexity, since smaller and simpler code bases generally contain fewer bugs. Ultimately, using any utility class needs to be considered on balance with a number of concerns, not just in isolation of one. Hopefully you now have enough information to decide for yourself if the benefit is worth it in respect to the complexity of the code you're writing.

Unit Of Work is an Enterprise Application Architecture pattern by Martin Fowler.



Apex Enterprise Patterns – Domain Layer

In the previous article, the Service Layer was discussed as a means to encapsulate your application's programmatic processes, focusing on how services are exposed in a consistent, meaningful and supportive way to other parts of your application, such as Visualforce Controllers, Batch Apex and the public facing APIs you provide. This next article deals with a layer in your application known as the Domain Layer.

Domain (software engineering): “a set of common requirements, terminology, and functionality for any software program constructed to solve a problem in that field”

Read more at developer.force.com!




Preview : Apex Enterprise Patterns – Domain Layer

I've been busy writing the next article in my series on Apex Enterprise Patterns. This weekend I completed the draft for review before submitting to Force.com. It's the biggest article yet, at over three and a half thousand words, with an update to the Dreamforce 2012 sample code to support some additional OO and test aspects I wanted to highlight.

The following is a sneak preview of the upcoming article. Also, if you're interested in taking a look at some of the updated sample code, you'll find it in the GitHub repo here. Enjoy!

Domain Model, “An object model of the domain that incorporates both behavior and data.”, “At its worst business logic can be very complex. Rules and logic describe many different cases and slants of behavior, and it’s this complexity that objects were designed to work with…” Martin Fowler, EAA Patterns

Who uses the Domain layer?

There are two ways in which “Domain layer logic” is invoked.

  • Database Manipulation. CRUD operations, or more specifically Create, Update and Delete operations, occur on your Custom Objects as users or tools interact via the standard Salesforce UI or one of the platform's APIs. These can be routed to the appropriate Domain class code corresponding to that object and operation (see the sketch after this list).

  • Service Operations. The Service layer implementations should easily be able to identify and reuse code relating to one or more of the objects each of their operations interact with, via Domain classes. This also helps keep the code in the Service layer focused on orchestrating whatever business process or task it is exposing.
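To make the first of these routes concrete, a Domain class wired into the trigger handler one-liner shown earlier in this document follows a shape roughly like this. This is a sketch following the fflib_SObjectDomain conventions from the unit testing post above; the validation rule itself is illustrative.

public with sharing class Opportunities extends fflib_SObjectDomain
{
	public Opportunities(List<Opportunity> records)
	{
		super(records);
	}

	public override void onValidate()
	{
		// Validation behaviour lives here, close to the object it concerns
		for(Opportunity opp : (List<Opportunity>) Records)
			if(opp.Type != null && opp.Type.startsWith('Existing') && opp.AccountId == null)
				opp.AccountId.addError('You must provide an Account for Opportunities for existing Customers.');
	}

	// Allows fflib_SObjectDomain.triggerHandler to construct instances of this class
	public class Constructor implements fflib_SObjectDomain.IConstructable
	{
		public fflib_SObjectDomain construct(List<SObject> records)
		{
			return new Opportunities(records);
		}
	}
}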

This diagram brings together the Service Layer and Domain Layer patterns and how they interact with each other, as well as with other aspects of the application architecture.

[Diagram: Service Layer and Domain Layer interactions]

To be continued on developer.force.com….



Apex Enterprise Patterns – Service Layer

In the previous blog post, SOC (Separation of Concerns) was discussed as a means to focus software architects on thinking about layering application logic. This next article in the series focuses on arguably the most important layer of them all: the Service layer. This is because it is the entry point to the most important development investment and asset of all, your application's business logic layer, its very heart!

Service Layer, “Defines an application’s boundary with a layer of services that establishes a set of available operations and coordinates the application’s response in each operation.” Martin Fowler / Randy Stafford, EAA Patterns

Continue reading more at developer.force.com
