Andy in the Cloud

From BBC Basic to and beyond…


Salesforce StackExchange : Addiction Warning

Those of you following my new blog, please don't worry: I have a number of posts in various states, ready to go soon! In the meantime I thought I would share another outlet I've been using to help and share ideas in the ever-growing community that is Salesforce: Salesforce StackExchange.

It is becoming a more popular place to ask questions and get answers (in some cases within minutes!) than the usual Salesforce developer forums. If you have not given it a try, I recommend you do! There are many experts, as well as Salesforce employees, watching the site and eager to help. It features the ability to rank questions and answers, so unlike typical forums you get a better feel for the quality of both!

Here is a list of a few questions I've been helping with over the last couple of weeks. Of course, if you find any of these answers useful, feel free to give them what's called an upvote via the little arrows shown to the left of the answer. Or even contribute to improve or comment on what you see!

But be warned, if you start contributing, it gets addictive! So don't leave me any comments saying it's taken over your life and your wife or partner is no longer speaking to you! You have been warned!

I’ve added my StackExchange profile to the links in the sidebar.




Dreamforce 2013, bring it on!

Dreamforce 2012 was my 3rd and also my best ever! Not only was it bigger and better in almost every aspect, which in itself was great, but witnessing such significant incremental growth in just 4 short years, in such a tangible and visual way, was an amazing and inspiring feeling!

My personal involvement grew from 1 shared session to 2 owned sessions this year. Looking at it with respect to R&D contributions, that figure rises to 6 sessions! I am very proud to be part of such a great industry event, and find myself already thinking about what contributions could be even bigger and bolder for Dreamforce 2013!

I’ve uploaded my slide decks below, and of course you can visit the GitHub repos for all the sample code and updates from the event. Finally, if you fancy reading my ramblings from the plane home, check out my ‘Implementing Dreamforce’ blog post on here!





Salesforce Winter’13 Chaining Batch Jobs… With great power…

… comes great responsibility!

Ok so I am a Spiderman fan, you rumbled me!

But seriously, that’s the first phrase that popped into my head when I read the Winter’13 release notes regarding this ability. Previously you could not start one Batch Apex job from another; now you can! So the next question is why would you want to? Closely followed by: should you? Before answering those, let’s take a step back and consider what it takes to design and implement a single Batch Apex job…

As you probably already know, the transaction semantics of Batch Apex mean that you have to give careful consideration to what happens if one or more of the scopes or ‘chunks’ of the job fail while others succeed, since only the changes to the database made by the failing chunk of the job are rolled back.
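To make that concrete, here is a minimal sketch of a Batch Apex job (the `Cleaned__c` checkbox field is a made-up example): each call to `execute` receives one chunk of the dataset and runs in its own transaction, so an unhandled exception rolls back only that chunk.

```apex
// Minimal sketch of a Batch Apex job. Cleaned__c is a hypothetical
// custom field used for illustration only.
public class AccountCleanupBatch implements Database.Batchable<SObject> {

    public Database.QueryLocator start(Database.BatchableContext bc) {
        // Defines the full dataset; the platform splits it into chunks
        return Database.getQueryLocator(
            [SELECT Id, Name FROM Account WHERE Cleaned__c = false]);
    }

    public void execute(Database.BatchableContext bc, List<Account> scope) {
        // Runs once per chunk, in its own transaction
        for (Account record : scope) {
            record.Cleaned__c = true;
        }
        // An unhandled exception here rolls back this chunk only;
        // previously committed chunks stay committed
        update scope;
    }

    public void finish(Database.BatchableContext bc) {
        // Runs once after all chunks have been processed
    }
}
```

A job like this is started with `Database.executeBatch(new AccountCleanupBatch());`.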

A Batch Apex Design and Implementation Checklist

So here is my Batch Apex design checklist for developers and business analysts to discuss before a single line of code is written…

  1. How do users start the job? VF Page, Custom Buttons, Scheduler?
  2. How do you deliver feedback about the job? When is it queued, in progress and completed? Under high service load, the fact that a job is queued can be quite important information from a usability and end user satisfaction perspective. Another interesting System Admin vs End User consideration is whether to permit the platform’s default error handling to occur, or to catch your exceptions and feed them back some other way.
  3. Can multiple jobs of the same type be run at the same time? If yes, how do you manage the dataset each job consumes to avoid overlap with other jobs in a concurrent user environment? If no, how do you plan to stop users running multiple jobs?
  4. How do users get informed about failures? Chatter posts? Email? Custom Objects? Status fields? Does the information have enough context about the job, the user and the record that failed?
  5. How do end users recover and retry from failures? Can they rerun selective parts of the job and/or cancel others? Consider combining point 4’s implementation, so that they can review, address and restart all from one place.
  6. Can or should they just restart the job again? If yes, how does the job sense parts of the dataset a previous run has failed to process? If no, how does the system clean up likely volumes of data to allow the whole job to restart again? Has another job processed those records in the meantime?
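As one illustration of points 4 and 6, here is a sketch of an `execute` method that uses partial-success DML so one bad record does not roll back the whole chunk, and records failures to a custom object for users to review and retry from. `BatchError__c` and its fields are assumptions for the sketch, not a standard object.

```apex
// Sketch only: BatchError__c (with Record__c, Job__c, Message__c fields)
// is a hypothetical custom object for surfacing failures to users.
public void execute(Database.BatchableContext bc, List<Account> scope) {
    // allOrNone = false: failed rows are reported, successful rows commit
    Database.SaveResult[] results = Database.update(scope, false);

    List<BatchError__c> errors = new List<BatchError__c>();
    for (Integer i = 0; i < results.size(); i++) {
        if (!results[i].isSuccess()) {
            errors.add(new BatchError__c(
                Record__c  = scope[i].Id,
                Job__c     = bc.getJobId(),
                Message__c = results[i].getErrors()[0].getMessage()));
        }
    }
    insert errors;
}
```

With the failures captured per record and per job, a simple Visualforce page or report over `BatchError__c` gives users one place to review, address and restart, as suggested in point 5.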

These are all great design questions that developers and business analysts need to work through when planning even a single Batch Apex job implementation, and each is worthy in its own right of more technical discussion than I have time to share in this blog entry right now. The main thing is to ensure these questions are asked and that your test engineers assert the answers match the software!

To Chain or Not to Chain?

So where does this leave us if we expand the already considerable scope of the above design by adding the complexity of linking multiple jobs together? Well, first of all, why would you do this?

  • Maybe a second job’s dataset does not exist yet or is incomplete, and as such needs to wait for the first job to create or update the data it needs.
  • Maybe you want to have one job spawn sub-jobs to process information in parallel into different buckets or outputs.
  • Maybe you want to kick off a clean-up job following the partial failure of a job that cannot support incremental running.

So if you do plan to utilise this, keep in mind that the above questions get quite a lot trickier to answer, design and implement once you start chaining jobs.
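Mechanically, the Winter’13 chaining itself is simple: the `finish` method of one job can now call `Database.executeBatch` to queue the next. A sketch covering the third bullet above (`SecondPhaseBatch` and `CleanupBatch` are hypothetical follow-on jobs):

```apex
// Sketch: as of Winter'13, finish() may start another batch job.
public void finish(Database.BatchableContext bc) {
    // Inspect how this job went before deciding what to chain
    AsyncApexJob job = [SELECT Status, NumberOfErrors
                        FROM AsyncApexJob
                        WHERE Id = :bc.getJobId()];

    if (job.NumberOfErrors == 0) {
        // Happy path: queue the next phase (hypothetical job class)
        Database.executeBatch(new SecondPhaseBatch());
    } else {
        // Partial failure: queue a clean-up job instead (hypothetical)
        Database.executeBatch(new CleanupBatch());
    }
}
```

Note that the chained job is queued asynchronously, so all the checklist questions about feedback, concurrency and recovery now apply across the whole chain, not just one job.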

One final thought: if you’re planning on utilising daisy chaining because you want to process a dataset of mixed record types / custom objects, that’s certainly a use case, though I would personally first consider using a custom Iterator for the job. This would allow you to construct a logical dataset that drives a single job, which can span multiple physical datasets. That’s, after all, why Salesforce added this often overlooked feature of Batch Apex.
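For instance, `start` can return an `Iterable<SObject>` instead of a `Database.QueryLocator`, letting one job walk a logical dataset built from more than one object. A sketch, assuming a hypothetical `Processed__c` field on both objects (and bearing in mind iterable-based jobs are subject to stricter query limits than a QueryLocator):

```apex
// Sketch: one batch job over two physical datasets via an Iterable.
// Processed__c is a hypothetical custom field on both objects.
public class MixedRecordBatch implements Database.Batchable<SObject> {

    public Iterable<SObject> start(Database.BatchableContext bc) {
        // Build one logical dataset from two physical ones;
        // List<SObject> itself implements Iterable<SObject>
        List<SObject> records = new List<SObject>();
        records.addAll([SELECT Id FROM Account WHERE Processed__c = false]);
        records.addAll([SELECT Id FROM Contact WHERE Processed__c = false]);
        return records;
    }

    public void execute(Database.BatchableContext bc, List<SObject> scope) {
        // Branch on the concrete type of each record in the chunk
        for (SObject record : scope) {
            if (record.getSObjectType() == Account.SObjectType) {
                // handle Account records
            } else if (record.getSObjectType() == Contact.SObjectType) {
                // handle Contact records
            }
        }
    }

    public void finish(Database.BatchableContext bc) { }
}
```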

Thanks for reading and please leave comments regarding other implementation considerations you have come across to add to the checklist above!