A Pattern for Apex Test Code So You Can Forget About Required Fields

I recently had the opportunity to start writing some code in a brand new org, which got me thinking about the best way to do test object creation. You know, that annoying problem where you need to generate an Account in a test class about 77 times. Here’s a pattern I came up with.

This approach involves some overhead, but it has two advantages: (1) it lets you set defaults for certain objects so you don't have to worry about them (for some objects, one line of code creates a record with all defaults), and (2) it gives you a generic, repeatable pattern.

It’s based on an abstract class that implements the base operations using some generic sObject magic outlined here.

There's a lot of code in this post, so bear with me.

public abstract class TestObjectFactory {

    public class MyException extends Exception {}

    public abstract Schema.SObjectType getSObjectType();
    public abstract String getSObjectAPIName();

    List<Test_Class_Default__mdt> defaults = new List<Test_Class_Default__mdt>();
    private Map<SObjectField, Object> values = new Map<SObjectField, Object>();

    public void setFieldValue(SObjectField field, Object value) {
        values.put(field, value);
    }

    public void clearFieldValues() {
        values = new Map<SObjectField, Object>();
    }

    public SObject createObject(Map<SObjectField, Object> valuesByField, Boolean doInsert) {
        // Initialize the object to return
        SObject record = this.getSObjectType().newSObject(null, true);

        // Fill defaults from custom metadata
        record = fillDefaults(record, false);

        // Apply any values set via setFieldValue()
        for (SObjectField eachField : values.keySet()) {
            record.put(eachField, values.get(eachField));
        }

        // Populate the record with values passed to the method
        if (valuesByField != null) {
            for (SObjectField eachField : valuesByField.keySet()) {
                record.put(eachField, valuesByField.get(eachField));
            }
        }
        if (doInsert) {
            insert record;
        }
        return record;
    }

    // Overload that uses just the defaults plus any setFieldValue() values
    public SObject createObject(Boolean doInsert) {
        return createObject(null, doInsert);
    }

    public void updateObject(SObject record, Map<SObjectField, Object> valuesByField) {
        if (record.Id == null) {
            throw new MyException('Test Object Factory: Attempt to Update a Record with a Null ID');
        }
        for (SObjectField eachField : valuesByField.keySet()) {
            record.put(eachField, valuesByField.get(eachField));
        }
        update record;
    }

    // Overload that applies the values set via setFieldValue()
    public void updateObject(SObject record) {
        updateObject(record, values);
    }

    public SObject upsertObject(SObject record, Map<SObjectField, Object> valuesByField) {
        for (SObjectField eachField : valuesByField.keySet()) {
            record.put(eachField, valuesByField.get(eachField));
        }
        upsert record;
        return record;
    }

    // Overload that applies the values set via setFieldValue()
    public SObject upsertObject(SObject record) {
        return upsertObject(record, values);
    }

    public void deleteObject(SObject record) {
        Database.delete(record.Id);
    }
}

For each object you will use, you need to create a class that extends the abstract factory class.

public class TestObjectFactoryAccount extends TestObjectFactory {

    public override Schema.SObjectType getSObjectType() {
        return Account.SObjectType;
    }

    public override String getSObjectAPIName() {
        return 'Account';
    }

    public Account createRecord(Map<SObjectField, Object> valuesByField, Boolean doInsert) {
        return (Account) createObject(valuesByField, doInsert);
    }

    public Account createRecord(Boolean doInsert) {
        return (Account) createObject(doInsert);
    }

    public void updateRecord(SObject record, Map<SObjectField, Object> valuesByField) {
        updateObject(record, valuesByField);
    }

    // Applies the values set via setFieldValue()
    public void updateRecord(SObject record) {
        updateObject(record);
    }

    public Account upsertRecord(SObject record, Map<SObjectField, Object> valuesByField) {
        return (Account) upsertObject(record, valuesByField);
    }

    // Applies the values set via setFieldValue()
    public Account upsertRecord(SObject record) {
        return (Account) upsertObject(record);
    }

    public void deleteRecord(SObject record) {
        deleteObject(record);
    }
}

Then, in your test code, you instantiate the factory for each object and set values using setFieldValue().

TestObjectFactoryContact contactFactory = new TestObjectFactoryContact();
contactFactory.setFieldValue(Contact.Role__c, contactRole);
contactFactory.setFieldValue(Contact.AccountId, childAccount.Id);
contactFactory.setFieldValue(Contact.LastName, 'child Contact');
contactFactory.setFieldValue(Contact.Job_Level__c, contactLevel);
Contact childContact = contactFactory.createRecord(true);

NB: You will need to be careful with the setFieldValue() calls, since the values class variable retains its contents across subsequent calls. That can cause logic issues if fields you set earlier carry over into records where you didn't intend them. The abstract class contains a clearFieldValues() method to clear it out when necessary.
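For instance, a minimal illustration of the carry-over, using the standard Account.Rating field:

```apex
TestObjectFactoryAccount accountFactory = new TestObjectFactoryAccount();
accountFactory.setFieldValue(Account.Rating, 'Hot');
Account first = accountFactory.createRecord(true);

// Without clearing, the next record would also get Rating = 'Hot'
accountFactory.clearFieldValues();
accountFactory.setFieldValue(Account.Name, 'Second Account');
Account second = accountFactory.createRecord(true); // Rating is no longer set
```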

Seems like.. That’s a Lot of Work For Not a Lot of Benefit?

No doubt, it’s wordy. But here’s why it’s worth it. The big benefit is that all your defaults/required fields are taken care of – if you had a required field on your Contact, the code would pre-fill it for you. Or, you can override the default by setting the value in the Map.

I created a custom metadata type, Test_Class_Default__mdt, that allows you to fill in defaults; if you look in the abstract class above, it calls a fillDefaults() method. This queries the custom metadata and populates the required fields. For example, you can set the default Name for Accounts.

The fillDefaults() method looks at this and, if you don't override a field in the map, sets it (the Account Name, say) automatically on every record the test generator makes.

Now, in your abstract class, add this method:

public SObject fillDefaults(SObject record, Boolean isTest) {
    // Public so TestObjectFactoryTest can see it.
    String objectName;
    if (isTest) {
        objectName = 'TestObjectFactoryTest';
    } else {
        objectName = getSObjectAPIName();
    }
    SObjectType objectType = getSObjectType();
    String testName;

    // Query the defaults only once per factory instance
    if (defaults.size() == 0) {
        defaults = [SELECT Checkbox_Value__c, DateTime_Value__c, Date_Value__c,
                        Email_Value__c, Field__c, Number_Value__c, Percent_Value__c,
                        Phone_Value__c, Picklist_Value__c, Text_Area_Value__c,
                        Text_Value__c, Type__c, URL_Value__c
                    FROM Test_Class_Default__mdt
                    WHERE Object__c = :objectName
                    AND Field__c != null];
    }

    // Get all the fields from the object as SObjectField tokens.
    // Do this once, outside the loop, to avoid repeated describe calls.
    Map<String, Schema.SObjectField> mFields = objectType.getDescribe().fields.getMap();

    for (Test_Class_Default__mdt d : defaults) {
        Object value = null;
        if (d.Type__c == 'Checkbox') value = d.Checkbox_Value__c;
        else if (d.Type__c == 'Date') value = d.Date_Value__c;
        else if (d.Type__c == 'DateTime') value = d.DateTime_Value__c;
        else if (d.Type__c == 'Email') value = d.Email_Value__c;
        else if (d.Type__c == 'Number') value = d.Number_Value__c;
        else if (d.Type__c == 'Percent') value = d.Percent_Value__c;
        else if (d.Type__c == 'Phone') value = d.Phone_Value__c;
        else if (d.Type__c == 'Picklist') value = d.Picklist_Value__c;
        else if (d.Type__c == 'Text') value = d.Text_Value__c;
        else if (d.Type__c == 'Text Area') value = d.Text_Area_Value__c;
        else if (d.Type__c == 'URL') value = d.URL_Value__c;

        try {
            if (value != null && !isTest) {
                // Set the SObjectField value as a plain Object
                record.put(mFields.get(d.Field__c.toLowerCase()), value);
            } else {
                // In test mode, just capture the text value for verification
                testName = d.Text_Value__c;
            }
        } catch (System.Exception e) {
            String error = 'TestObjectFactory: Unable to set default field ' + d.Field__c
                + ' to value ' + value + ' on object ' + objectName
                + '. Error: ' + e.getMessage();
            System.debug(error);
            throw new MyException(error);
        }
    }
    if (isTest) {
        return new Account(Name = testName);
    }
    return record;
}

The non-insert use cases are probably less valuable, but at least they provide a consistent interface. For objects that don't depend on other objects (like a parent Account), you can just call factory.createObject(true), which will build the record with all defaults, insert it, and return it to you.
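So, assuming your Test_Class_Default__mdt records supply any required Account fields, creating a fully valid Account really is one line:

```apex
Account defaultAccount = new TestObjectFactoryAccount().createRecord(true);
```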

Test Code Implications

Since this is an ordinary non-test class (it has to be, since abstract is not available when @isTest), you need to create some test code to verify it actually works and to maintain coverage requirements. My test class has a short, quick way to exercise each piece for coverage purposes. If you write it carefully, you can find/replace the object name (i.e., change 'Account' to 'Contact'), which can get you 80% of the way there.

@isTest
private class TestObjectFactoryTest {

    @isTest static void accountTest() {
        List<Account> accounts = new List<Account>();

        Test.startTest();
        TestObjectFactoryAccount factory = new TestObjectFactoryAccount();
        System.assertEquals('Account', factory.getSObjectAPIName());
        System.assertEquals(Account.SObjectType, factory.getSObjectType());

        // Create
        factory.clearFieldValues();
        factory.setFieldValue(Account.Name, 'Test');
        Account record = factory.createRecord(true);
        accounts = [SELECT Id, Name FROM Account WHERE Id = :record.Id];
        System.assertEquals(1, accounts.size());
        System.assertEquals('Test', accounts[0].Name);

        // Overloaded create (defaults plus any values already set)
        record = factory.createRecord(true);
        accounts = [SELECT Id, Name FROM Account WHERE Id = :record.Id];
        System.assertEquals(1, accounts.size());

        // Update
        factory.setFieldValue(Account.Name, 'Update Test');
        factory.updateRecord(record);
        accounts = [SELECT Id, Name FROM Account WHERE Id = :record.Id];
        System.assertEquals(1, accounts.size());
        System.assertEquals('Update Test', accounts[0].Name);

        // Upsert with insert
        Account upsertInsert = factory.createRecord(true);
        factory.setFieldValue(Account.Name, 'Upsert Insert Test');
        record = factory.upsertRecord(upsertInsert);
        accounts = [SELECT Id, Name FROM Account WHERE Id = :record.Id];
        System.assertEquals(1, accounts.size());
        System.assertEquals('Upsert Insert Test', accounts[0].Name);

        // Upsert with update
        factory.setFieldValue(Account.Name, 'Upsert Update Test');
        record = factory.upsertRecord(record);
        accounts = [SELECT Id, Name FROM Account WHERE Id = :record.Id];
        System.assertEquals(1, accounts.size());
        System.assertEquals('Upsert Update Test', accounts[0].Name);

        // Double-check that we have 3 records now
        accounts = [SELECT Id, Name FROM Account];
        System.assertEquals(3, accounts.size());

        // Delete
        factory.deleteRecord(record);
        accounts = [SELECT Id, Name FROM Account];
        System.assertEquals(2, accounts.size());

        Test.stopTest();
    }
}

Like it? Hate it? Like I said, it's wordy, but it's consistent, repeatable, and eliminates the need to think about default values. Feedback welcome below, or at @BobHatcher.

Salesforce: A Practical Approach to Queueables

Salesforce triggers are great, but they run synchronously by default. What if you want to speed up the user experience and run noncritical tasks in the background? That’s what Queueable is for. But it’s not as simple as it’s made out to be.

I think most organizations have a simple process where they have a Trigger that calls a Trigger Handler. With a Queueable, you might think you can do something like this.

Trigger:

    if (Trigger.isBefore && Trigger.isUpdate) {
        ContactTriggerHandler.fireMyProcess(Trigger.oldMap, Trigger.newMap);
    } 

Trigger Handler:

public static void fireMyProcess(Map<Id,Contact> oldMap, Map<Id,Contact> newMap)
    {
        List<Contact> contacts = new List<Contact>();
        for (Contact c : newMap.values())
            if (/* should be processed asynchronously */)
                contacts.add(c);

        Id jobId = System.enqueueJob(new ContactTriggerHandlerQueueable(contacts));
    }

Then a Queueable to implement your business logic.

    public class ContactTriggerHandlerQueueable implements Queueable {

        private List<Contact> contacts;

        public ContactTriggerHandlerQueueable(List<Contact> contacts)
        {
            // Save off my Contacts into a class variable
            this.contacts = contacts;
        }

        public void execute(QueueableContext context) {
            // My business logic
        }
    }

The problem with this approach is, as Brian Fear (aka sfdcfox) said in his outstanding response to my StackExchange Question: “The rule is that if you’re synchronous, you get 50 jobs for that transaction. Once you go asynchronous, you get only one child allowed.”

Brian went on to provide an example, which works great. Basically it detects if you’ve “Gone Synchronous” and runs the Queueable appropriately.
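The gist of that guard, sketched minimally below as the QueueableUtilities.enqueueJob helper referenced later in this post (the class name and fallback behavior are assumptions; the detection calls are standard System and Limits methods):

```apex
public class QueueableUtilities {
    public static Id enqueueJob(Queueable job) {
        // Once you are already asynchronous, only one chained child job is allowed,
        // so only enqueue if we still have queueable headroom in this transaction.
        Boolean isAsync = System.isQueueable() || System.isFuture() || System.isBatch();
        if (!isAsync || Limits.getQueueableJobs() < Limits.getLimitQueueableJobs()) {
            return System.enqueueJob(job);
        }
        // Out of headroom: run the logic synchronously instead
        job.execute(null);
        return null;
    }
}
```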

Every time I hear “Gone Synchronous” it makes me think of this.

This Works Great.. But

The issue you’ll soon run into is that you can’t chain Queueables in test classes. This is maddening, really. They give you this cool way to do asynchronous processing, but cut you off at the knees in mandatory test code.

The workaround is to detect if you are working in test, and run the Queueable synchronously instead.

if (!Test.isRunningTest() || isBulkTest)
{
    QueueableUtilities.enqueueJob(new ContactTriggerHandlerQueueable(a, b, c, d));
}
else
{
    ContactTriggerHandlerQueueable ctq = new ContactTriggerHandlerQueueable(a, b, c, d);
    ctq.execute(null);
}

The only issue with this is that synchronous processing gets lower limits than asynchronous/Queueable processing, which I found out when my bulk test code ran afoul of a CPU time limit. So in this case I created a public static Boolean variable in my trigger handler (isBulkTest, above) that I can set explicitly from the test class to force the test to run as a Queueable. You just need to be careful that downstream chaining doesn't do the same thing from the same test method.
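A minimal sketch of that flag, with names assumed to match the snippet above:

```apex
public class ContactTriggerHandler {
    // Set from a bulk test method to force real enqueueing,
    // so heavy processing runs under asynchronous limits.
    public static Boolean isBulkTest = false;
}
```

In the bulk test, set `ContactTriggerHandler.isBulkTest = true;` before the DML that fires the trigger.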

Please let me know if this is helpful or if you have any questions. You can reach me at @BobHatcher on your friendly Twitter machine.

Salesforce: Check if Record Has Been Manually Edited

We recently had a somewhat unique requirement in Salesforce. An ongoing automated process would update the Contact associated with a record; however, they didn't want the automated process to do the update if the field had been edited by a human. Makes sense: if someone takes the time to manually associate a Contact with a record, it's probably better information than the automatic matching algorithm's.

As far as I can tell it’s impossible at runtime to know if the edit is due to a UI edit or is coming from code, so that’s out.

Here's how we made it happen. The idea is to have a "last updated by code" checkbox that is set by a workflow. But in order for the workflow to know whether the update came from code, you need a second checkbox that is only ever set in Apex.

First, make two checkboxes on your record:
Contact_Change_Trailing_Lock__c
Contact_Last_Changed_By_Code__c

Set the latter to true by default when a new record is created.

When your code updates the record, set Contact_Change_Trailing_Lock__c = true.

Then, make two workflow rules.

Rule 1: if(and(Contact_Change_Trailing_Lock__c = false, ischanged(Contact__c)), true, false)
-> Field Update: Contact_Last_Changed_By_Code__c = false

Rule 2: if(and(ischanged(Contact_Change_Trailing_Lock__c), Contact_Change_Trailing_Lock__c = true), true, false)
-> Field Update: Contact_Last_Changed_By_Code__c = true
-> Field Update: Contact_Change_Trailing_Lock__c = false

This results in you being able to use the Contact_Last_Changed_By_Code__c checkbox downstream in your logic to know not to overwrite the manual edit.
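Putting it together, the automated process itself looks roughly like this sketch (My_Object__c and matchedContactId are hypothetical placeholders):

```apex
// Only let the matcher overwrite the Contact if a human hasn't edited it
if (record.Contact_Last_Changed_By_Code__c) {
    record.Contact__c = matchedContactId;
    // Signal the workflow that this change came from code
    record.Contact_Change_Trailing_Lock__c = true;
    update record;
}
```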

MavensMate: TLS 1.0 Deprecation Error (UNSUPPORTED_CLIENT)

If you use MavensMate for SublimeText (on Mac) for your Salesforce development, you might have encountered an issue after your org was updated to Summer ’16. Apparently Salesforce deprecated TLS 1.0 and MavensMate doesn’t support it in versions less than 7.0.

There is a lot of information out there but it’s all over the place. Also the MavensMate community has a maddening tendency to assume everyone’s an uber-developer and understands all the shortcuts and terminology.

Here’s how to nix the problem.

  1. Download and install the MavensMate.app from GitHub.
  2. In Sublime Text, edit your Package Control settings for your user. (You can't edit the global ones.)
  3. Add the following lines to your user settings:

    "install_prereleases":
    [
    "MavensMate"
    ],

  4. Quit and restart Sublime Text.
  5. In Sublime, press COMMAND+SHIFT+P and type "Package Control". Run "Upgrade Package".
  6. Now, in the MavensMate app you installed, open Settings in the hamburger menu at the top left. Then add the path to your source files – in my case, /users/bhatcher/documents/source.


May the force be with you. Send me a tweet – @BobHatcher – if you have any questions.

Salesforce: Who Has Permission? (And what Profiles/Permission Sets Grant It?)

“Can you run me a report to see who has edit permissions on Cases?”

It’s the kind of thing that makes you cringe since it’s such a simple question and not remotely easy to achieve since the config screens are a disaster and it could be in any number of Profiles or Permission Sets.

Here’s how to find out. The following query will give you a quick list. (H/T Adam Torman)

SELECT Assignee.Name
FROM PermissionSetAssignment
WHERE PermissionSetId
IN (SELECT ParentId
FROM ObjectPermissions
WHERE SObjectType = 'My_Object__c' AND
(PermissionsCreate = true OR PermissionsEdit = true))
and Assignee.isActive = true

You can run this in the Developer Console, under the Query Editor tab.

Note that you have to change “My_Object__c” to your object name, and this filters by active users and Create or Edit permissions. You can run this using Workbench, dump it to Excel and deduplicate it. Problem solved.

Great. So What Profiles/Permission Sets Grant Access?

So the next question is going to be, OK, what profiles or permission sets grant that access?

This gets a little tricky because Profiles actually use Permission Sets under the hood. So when you use the above query, you are looking at Permission Set Assignments related to Users, including those assigned via a Profile. This is a little confusing. It’s driven by the isOwnedByProfile flag on PermissionSet – if the flag is true, it’s a hidden Permission Set that underpins a Profile.

So basically you need to do this twice: first, find the Profiles that include an underlying Permission Set (isOwnedByProfile = true). Then find the regular Permission Sets.

You can run the following Apex in the Anonymous Window. (If you’re not technical, don’t be scared, it’s just Your Name -> Developer Console -> Debug -> Execute Anonymous.)


List<PermissionSetAssignment> pa = [SELECT Assignee.ProfileId, PermissionSet.ProfileId, Assignee.Profile.Name
    FROM PermissionSetAssignment
    WHERE PermissionSetId IN (
        SELECT ParentId
        FROM ObjectPermissions
        WHERE SObjectType = 'My_Object__c'
        AND (PermissionsCreate = true OR PermissionsEdit = true))
    AND PermissionSet.isOwnedByProfile = true];

Set<Id> paIds = new Set<Id>();
for (PermissionSetAssignment thisPa : pa)
    paIds.add(thisPa.Assignee.ProfileId);

List<Profile> profiles = [SELECT Name, Description FROM Profile WHERE Id IN :paIds];
for (Profile thisProfile : profiles)
    System.debug('Profile: ' + thisProfile.Name);

This will (inelegantly) output all the Profiles that grant the permission you’re looking for. (To see the output, go to Logs tab in the Dev Console, open the topmost Log, and check the box that says “Debug Only.”)

Finally, you can check the regular Permission Sets that contain the privilege by using this query:

SELECT PermissionSet.Name
FROM PermissionSetAssignment
WHERE PermissionSetId
IN (SELECT ParentId
FROM ObjectPermissions
WHERE SObjectType = 'My_Object__c' AND
(PermissionsCreate = true OR PermissionsEdit = true))
and Assignee.isActive = true
and PermissionSet.isOwnedByProfile = false

Dump to Excel and dedup.

It’s an ugly solution but it should be enough to answer the question when it’s asked.

Salesforce vs Dynamics CRM: Security Model

To follow up on my popular Dynamics vs Salesforce: The War From the Trenches, I thought I’d dig a bit deeper into the security models. The models are considerably different and have their own strengths and weaknesses.

When deciding between Salesforce and Microsoft, the security model is perhaps the most important difference. When implementing CRM, who can see what is an enormous time suck and causes a lot of trouble. It’s the bad side of the 80-20 rule – this is the thing that’s 20% of your project but will consume 80% of your time.

Dynamics – Simple & Consistent, But Susceptible to Exceptions

In Dynamics, you have user roles, and hierarchy governed by something called Business Units. This means you can implement a clean org-chart security model very easily. The East Region VP sees everything in the New York and Atlanta branches, but nothing in San Francisco. And Johnny Sales Rep in New York can see only his own stuff but not the guy in the next cube’s.

The model starts and ends with ownership of a given record. It’s pretty straightforward: Johnny Sales Rep can edit Opportunities he owns but only view Opportunities he doesn’t, within his level in the hierarchy or below him.

To manage this you have a user role concept; a user can have many roles, and they are only additive. So if Johnny Sales Rep has Role A that allows him to edit a record, and Role B that wouldn't allow him to edit it, he can edit it. You usually end up with a nice layered approach: a base role, then a VP role that adds some capabilities, for example.

Dynamics Security Role Example

Dynamics’ security model has recently been improved to include a managerial/organizational hierarchy that reduces the need for Business Units.

The major weakness of this model is that it is focused on what’s at and below a given user in the tree, which makes it extraordinarily hard to operate cross-functionally. Say your customer GlobalCo is based in New York and is owned by Joe Sales Rep, so it rolls up to the North America unit. But GlobalCo has a branch in Johannesburg owned by Sally SuperSales, so that branch rolls up to your EMEA unit. The fact that they are in different branches makes it very hard for Sally to see what’s going on in New York. And if you have a strategic account manager that needs to see all branches of GlobalCo? Forget it. You’re left with workflows implementing automatic sharing, which isn’t reportable or traceable in any meaningful way, or even achievable using out-of-the-box tools. Addressing this stuff will consume your project budget.

This diagram illustrates what I mean.

It’s easy to create visibility within or below a branch of the hierarchy (shaded area). But crossing (the red line) is a disaster.

The good news is that sharing is applicable to almost anything – you can share individual Dashboards, Views, Reports, etc., in addition to records.

There is a Team concept that is marginally helpful here, but seems to always have one limitation that prevents you from using it.

This model is very clear-cut and it applies everywhere. Run a report, it’s relative to what you’re allowed to see, period.

Salesforce – Complicated, But More Crossfunctional

Salesforce’s model is conceptually complex but in the end is vastly more practical.

First, you need to decide if everyone should see everything. If so, you’re done. Otherwise, it includes three main components:

  • Profiles, of which everyone has exactly one. This governs the base of what you can see. By default, everyone has permission to do whatever they want to records they own. If you want to prevent deleting Accounts, you need to take that away in the Profile.
  • Permission Sets – like a Profile but users can have more than one; you can use this to grant permission in individual cases.
  • Roles – this is a hierarchical role. Marketing Specialist rolls up to Marketing Director rolls up to Marketing VP.

On top of this you have automatic sharing. In my mind, this is the #1 advantage Salesforce has over Microsoft. You can do stuff like grant a Sales Operations team control over certain opportunities.

Autoshare to Sales Ops Example

You could think of other examples and applications – like maybe, grant an arbitrary group of product managers read-only access to any Opportunities lost to a certain competitor over features.

But this is messy as well. You have to know and keep track of:

  • When there’s a conflict between a Profile and a Permission Set, the highest permission wins.
  • When there’s a conflict between Sharing and a Profile, the lowest permission wins.
  • And so on.

There are no consistent, global rules like Microsoft's always-additive user roles. For example, you could create a Dashboard that shows company-wide data and publish it to the whole company; you may or may not want that to be possible. And Dashboards are editable only by the person who created them.

Also, SFDC formula fields, workflow and custom (Apex) code ignore field-level security. Microsoft always runs in the context of a given user and applies their security settings.

Like I said in my first blog – everything in Salesforce has an exception.

Feedback? Leave a comment or find me on Twitter: @BobHatcher.

Dynamics vs Salesforce: The War From the Trenches

I’ve been pretty deep in Salesforce.com (SFDC) the last few weeks. There are a lot of articles out there that compare the two solutions at a CIO level, so I thought I’d take a moment to outline some of the more nitty-gritty pros and cons.

Here are some of the key differences in my mind.

Where Salesforce Wins

  • VisualForce makes for a much more customizable interface. You’re not limited to dropping fields on a form – you can make a custom webpage part of your process. Custom buttons too. So what you see on the screen is not limited to drag and drop fields.
  • Salesforce’s implementation of Apex unit testing is outstanding. You can create test classes and you must run them before you can promote code into an instance. It’s brilliant and makes you a lot more confident deploying code.
  • Salesforce has a lot of native functionality that Microsoft still hasn’t gotten to. Lead assignment rules, approval processes, autonumbering, rich text text areas, dependent picklists, and multiselect picklists come to mind. Formula fields in SFDC make life a lot easier too.
  • Login as. OMG, login as. Microsoft, if you can hear me, you must allow admins to login as other users and see what they’re seeing.
  • Salesforce’s autosharing rules by role are very helpful. It is very hard and time-consuming to share across the organizational hierarchy in Dynamics. If you have crossfunctional roles like strategic account managers, this is huge. See my follow-up post that addresses this in detail.

Where Microsoft Wins

  • Salesforce has no equivalent to Advanced Find and how I miss it so.
  • Dynamics’ workflow concept is much more robust, and the equivalent things in Salesforce are kind of scattershot. SFDC’s workflow rules only do a small set of things. For example, if you want to copy a Lookup value from a parent record onto a child record it’s a 60 second workflow in Dynamics. In Salesforce, it’s code. A lot of code. (And I needed to copy the value because Salesforce email templates don’t let you access related records as recipients.)
    • Edit: Salesforce has a tool called Process Builder, which is a visual workflow tool that on the surface is like Microsoft’s. However, it’s been out for a while now and has very low adoption. As I understand it, this is due to weak “bulkification” – the need in Salesforce to do everything in a batch process (i.e., update 100 records with one statement rather than execute 100 statements). Bulkification is a persistent concern with everything in Salesforce. So the processes tend to fail when performing high volume processing such as imports.
  • I miss Microsoft’s idiot-proof data import function.
  • It feels like everything in Salesforce is an exception. Want to require a field at the database level? No problem.. unless it’s a picklist. Want to map a field to a child record? No problem.. unless it’s a Lookup. Role hierarchy can be disabled but only for custom objects/entities. Dynamics is much more of a UI on top of a relational database, which means it’s a lot less restrictive.
  • Microsoft has a more complete vision for first-in, first-out queues. There is an intersection entity that enables users to view and claim work. Salesforce’s Queue is essentially a Dynamics Team – basically the record is owned by more than one person.

Other Thoughts

  • Settings/Config search in Salesforce is great, but it's hard to switch between two things, like the config page for an object (entity) and a child object (entity). Microsoft's tree structure makes things harder to find but easier to toggle between.
  • Microsoft's picklist implementation is far superior, using a database value and a text label that's swapped out at runtime. Salesforce picklist values are stored as text, and the only way to refer to picklist values in code is by their labels, so you basically can't change picklist values once they're set. (There is a global replace function for picklist values, but that won't help your code.)
  • Salesforce's config screens are sometimes maddening; lists are often not sortable or searchable.
  • Microsoft’s browser compatibility is not great. I’ve tried it on Windows with Chrome, IE, Edge, and Firefox and the one that works best is… Firefox? And on Mac only Safari is supported.. poorly.
  • Microsoft’s cascading rules are more flexible than Salesforce’s Parent-Child and Lookup relationships. Salesforce doesn’t natively support N:N either.
  • Salesforce has a lot of development tools, and so far I’ve tried Sublime/MavensMate, Cloud9, and the native Developer Console. None of them are simultaneously reliable and a good development experience. Visual Studio isn’t great either, and it’s heavy and expensive, but it’s reliable, and you can actually set breakpoints and look into objects at runtime. And you don’t get [expletive]blocked by multitenant hiccups like “Admin Operation Already in Progress.”
  • Salesforce’s Custom Settings make it much easier to manage things like the email address your custom code sends to. In Microsoft you can do it with a custom entity, but why should you have to?
  • I’ll say this again: how Microsoft doesn’t have multi-select and dependent picklists by now is beyond me.
  • Support for client-side scripting on Salesforce screens (layouts) is poor.
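As an example of the Custom Settings point above, reading a config value in Apex is a couple of lines; the setting name and field here are hypothetical:

```apex
// Hierarchy custom setting App_Config__c with a field Notification_Email__c (hypothetical)
App_Config__c config = App_Config__c.getOrgDefaults();
String sendTo = config.Notification_Email__c;
```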

I’ll post again when I have more to ramble about. Feel free to leave a comment to clarify or correct me.

Getting Started with Dynamics Marketing – Some Lessons Learned

The connector from CRM to Dynamics Marketing (MDM) can be tricky. The definitive guide for setting it up is Jon Birnbaum’s blog found here.

Since comments on his blog are disabled, here are some things to keep in mind as you go through it:

  • You do not need to bother with the Azure Service Bus pieces unless you want to use the SDK. The writeup goes into a lot of detail about setting up your own queues but managed queues will suffice for simple use cases. Even if you need your own queues, I recommend you start with the managed ones just to keep it simple.
    • Note: If you do set up your own queues, you have to do it via PowerShell; I tried the UI without luck.
  • The CRM integration user you create must have the MDM user role, even if it is an admin.
  • The install files for the CRM package are in the MDM installer. You have to download the on-premise connector and install it, even if you are not connecting to on-premise, just to get the ZIP file.
  • The initial sync can take a long time, even in a demo environment without a lot of data.

Another thing that’s important, and buried in a paragraph, is that you need to add the “Sync Enabled” field to the CRM form and toggle it to “yes” for any records you want to go over to MDM.

How-To: Overcome Rollup Field Limitations with Rolling Batch Processing.. Even In the Cloud

Published by:

Rollup fields are great. But their filters are limited. The most common use case I can imagine is something like year-to-date sales. But you can’t do that with rollup fields, because the filters don’t support relative dates. Want quarter-to-date sales? Sorry, folks.

Here’s how to make it happen. This works even in the cloud. It should also work in CRM 2011, although I have not tested it there.

In this scenario, we are using a Connection between a Product and an Account to hold quarter-to-date sales numbers. I want to regularly calculate the sum of Invoices against the Account for the given Product and save it on the Connection. I’m calling the Connection my “target entity.”

Overcoming the Depth Property and Timeouts

Normally, you could create a recursive workflow that processes something, then calls itself. But you run into a problem with the Depth property: CRM will stop the process after so many iterations and consider it a runaway. If your workflow calls itself, the second time it runs the Depth property will be 2, and once Depth reaches the limit (I think 15 by default), CRM will kill the process.

The secret comes from Gonzalo Ruiz: the Depth property is cleared after 60 minutes.

The other issue is CRM’s timeouts; you need to make sure the update process doesn’t croak because it’s chugging through too many records.

So we’re going to chunk our data into 1000 batches and run each batch asynchronously every 61 minutes: a lot of processes, each doing a little bit of work. I don’t recommend doing this with synchronous processing.

The Approach

Here’s the process we’re going to create.

  1. Upon create, assign your target entity a random batch number. I’m using 1000 batches.
  2. An instance of a custom entity contains a batch number (1000 batch controller records for 1000 batches). A workflow fires on this custom entity record every 61 minutes.
  3. The workflow contains a custom workflow activity that updates all target records in its batch with a random number in a “trigger” field.
  4. A plugin against your target entity will listen for the trigger and fire the recalc.

(Diagram: the rolling batch recalculation process)

First, create a custom entity. I’ve called mine rh_rollingcalculationtrigger. All you need on it is one integer field.

Now, on your Connection (or whatever entity you want to store the rolling calculated fields), create two fields: rh_systemfieldrecalculationtrigger, and rh_systemfieldrecalculationbatch.

Now create a simple plugin to set the batch number to a value between 0 and 999 when the record is created. If you have existing records, you can export them to Excel and reimport them with a random batch assignment; the Excel formula RANDBETWEEN() is great for this.

protected void ExecutePostConnectionCreate(LocalPluginContext localContext)
{
    if (localContext == null)
    {
        throw new ArgumentNullException("localContext");
    }
    Random rand = new Random();
    IPluginExecutionContext context = localContext.PluginExecutionContext;
    Entity postImageEntity = (context.PostEntityImages != null && context.PostEntityImages.Contains(this.postImageAlias)) ? context.PostEntityImages[this.postImageAlias] : null;
    ITracingService trace = localContext.TracingService;
    IOrganizationService service = localContext.OrganizationService;

    // Super-simple update: assign a random batch number between 0 and 999
    // (the upper bound of Random.Next is exclusive).
    Entity newConnection = new Entity("connection");
    newConnection["rh_systemfieldrecalculationbatch"] = rand.Next(0, 1000);
    newConnection.Id = postImageEntity.Id;
    service.Update(newConnection);
}

(Side note: In C#, Random.Next() returns a value exclusive of the upper bound. So each record will get a value 0-999 inclusive.)
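That bound behavior is easy to verify outside CRM. Here’s a tiny stand-alone snippet (no SDK required; the class name is just illustrative) that simulates the plugin’s batch assignment and confirms every value lands in 0-999:

```csharp
using System;
using System.Linq;

class BatchAssignmentDemo
{
    static void Main()
    {
        var rand = new Random();

        // Simulate assigning a batch number to 100,000 records,
        // exactly as the plugin does with rand.Next(0, 1000).
        int[] batches = Enumerable.Range(0, 100000)
                                  .Select(_ => rand.Next(0, 1000))
                                  .ToArray();

        // The upper bound is exclusive, so no record ever gets 1000.
        Console.WriteLine(batches.All(b => b >= 0 && b <= 999)); // True
    }
}
```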

Now we create a custom workflow activity. It takes the batch number as an input and calls a method named FireBatch. So when this workflow runs on an rh_rollingcalculationtrigger record with Batch ID = 5, it will call FireBatch against all target records with batch ID = 5.

In my case, I like to assemble the service, context, and tracing service in the workflow activity and call my own class.

public sealed class WorkflowActivities : CodeActivity
    {
        /// <summary>
        /// Executes the workflow activity.
        /// </summary>
        /// <param name="executionContext">The execution context.</param>
        protected override void Execute(CodeActivityContext executionContext)
        {
            // Create the tracing service
            ITracingService tracingService = executionContext.GetExtension<ITracingService>();

            if (tracingService == null)
            {
                throw new InvalidPluginExecutionException("Failed to retrieve tracing service.");
            }

            tracingService.Trace("Entered WorkflowActivities.Execute(), Activity Instance Id: {0}, Workflow Instance Id: {1}",
                executionContext.ActivityInstanceId,
                executionContext.WorkflowInstanceId);

            // Create the context
            IWorkflowContext context = executionContext.GetExtension<IWorkflowContext>();

            if (context == null)
            {
                throw new InvalidPluginExecutionException("Failed to retrieve workflow context.");
            }

            tracingService.Trace("WorkflowActivities.Execute(), Correlation Id: {0}, Initiating User: {1}",
                context.CorrelationId,
                context.InitiatingUserId);

            IOrganizationServiceFactory serviceFactory = executionContext.GetExtension<IOrganizationServiceFactory>();
            IOrganizationService service = serviceFactory.CreateOrganizationService(context.UserId);

            try
            {
                int batchId = this.BatchNumber.Get(executionContext);
                RollupCalculations.FireBatch(service, tracingService, batchId);
            }
            catch (FaultException<OrganizationServiceFault> e)
            {
                tracingService.Trace("Exception: {0}", e.ToString());

                // Handle the exception.
                throw;
            }

            tracingService.Trace("Exiting WorkflowActivities.Execute(), Correlation Id: {0}", context.CorrelationId);
        }
        [Input("Batch Number")]
        public InArgument<int> BatchNumber { get; set; }
        [Input("This Config Entity")]
        [ReferenceTarget("rh_rollingcalculationtrigger")]
        public InArgument<EntityReference> ThisEntity { get; set; }
    }

FireBatch: query all records with batchID = x and update them with a random number.

// In this case, the processingSequenceNumber is going to be the value from the CRM batch controller entity.
public static void FireBatch(IOrganizationService service, ITracingService trace, int processingSequenceNumber)
        {
            // First, get all Connections with that batch ID.
            String fetch = @"
            <fetch version='1.0' output-format='xml-platform' mapping='logical' distinct='false'>
              <entity name='connection'>
                <attribute name='connectionid' />
                <order attribute='description' descending='false' />
                <filter type='and'>
                  <condition attribute='rh_systemfieldrecalculationbatch' operator='eq' value='" + processingSequenceNumber + @"' />
                </filter>
              </entity>
            </fetch>";

            // Now do a bulk update of them. 
            EntityCollection result = service.RetrieveMultiple(new FetchExpression(fetch));
            trace.Trace("Processing " + result.Entities.Count + " records on batch " + processingSequenceNumber);

            ExecuteMultipleRequest multipleRequest = new ExecuteMultipleRequest()
            {
                Settings = new ExecuteMultipleSettings()
                {
                    ContinueOnError = false,
                    ReturnResponses = false
                },
                Requests = new OrganizationRequestCollection()
            };

            Random rand = new Random();
            
            int testLimit = 0;
            if (result != null && result.Entities.Count > 0)
            {
                // In this section _entity is the returned one
                foreach (Entity _entity in result.Entities)
                {
                    Guid thisGUID = ((Guid)_entity.Attributes["connectionid"]);
                    var newConnection = new Entity("connection");
                    newConnection.Id = thisGUID;
                    // Note here that we're just dropping a random number in the field. We don't care what the number is, since all it's doing is triggering the subsequent plugin.
                    newConnection["rh_systemfieldrecalculationtrigger"] = rand.Next(-2147483647, 2147483647);

                    UpdateRequest updateRequest = new UpdateRequest { Target = newConnection };
                    multipleRequest.Requests.Add(updateRequest);
                    //trace.Trace("Completed record #" + testLimit);
                    testLimit++;
                }
                service.Execute(multipleRequest);

            }
        }

Warning: use ExecuteMultiple to avoid timeouts.
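One related caveat: ExecuteMultiple accepts at most 1,000 requests per call, so if any batch could ever contain more records than that, page the requests first. Here’s a minimal, SDK-free sketch of the paging logic (BatchPager and Chunk are my own illustrative names, not part of the SDK):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class BatchPager
{
    // Split any list of requests into pages of at most pageSize items,
    // so each page can be sent as one ExecuteMultipleRequest.
    public static IEnumerable<List<T>> Chunk<T>(IList<T> items, int pageSize)
    {
        for (int i = 0; i < items.Count; i += pageSize)
            yield return items.Skip(i).Take(pageSize).ToList();
    }

    static void Main()
    {
        // 2,500 pending updates -> pages of 1000, 1000, and 500.
        var requests = Enumerable.Range(0, 2500).ToList();
        var pages = Chunk(requests, 1000).ToList();
        Console.WriteLine(pages.Count);        // 3
        Console.WriteLine(pages.Last().Count); // 500
    }
}
```

In the real FireBatch you would build one ExecuteMultipleRequest per page and call service.Execute on each in turn.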

Now, create a plugin against the Connection to do whatever it is you want. It should fire on change of the rh_systemfieldrecalculationtrigger field.

        protected void ExecutePostConnectionUpdate(LocalPluginContext localContext)
        {
            if (localContext == null)
            {
                throw new ArgumentNullException("localContext");
            }

            IPluginExecutionContext context = localContext.PluginExecutionContext;
            Entity postImageEntity = (context.PostEntityImages != null && context.PostEntityImages.Contains(this.postImageAlias)) ? context.PostEntityImages[this.postImageAlias] : null;
            ITracingService trace = localContext.TracingService;
            IOrganizationService service = localContext.OrganizationService;

            // In this method I do the logic I want to do against the specific record.
            RollupCalculations.SalesToDate(service, trace, postImageEntity);
        }

The final piece is to, well, get it rolling. First, create 1000 instances of rh_rollingcalculationtrigger, setting Batch IDs 0-999.

Remember, we can create a recursive workflow with the 60 minute workaround. I’m setting it to 61 just to be safe.

(Screenshot: the recursive workflow definition with its 61-minute wait)

Manually fire the workflow once on each of your 1000 recalculation entities. Congratulations, you have perpetual recalculation.

I recommend setting up bulk deletion jobs to remove the system job records this creates. It can be a lot.

Outlook Plugin: Server-Side Sync vs CRM For Outlook

Published by:

In Microsoft Dynamics CRM 2015 Online with Exchange Online, you have the option for the Outlook client to work over “Server-Side Synchronization” or “CRM for Outlook.”

Which one should you choose?

Server-Side Sync uses server-to-server communication between CRM and Exchange, while the Outlook client basically runs everything through the user’s desktop computer. Things to keep in mind:

  1. If you use the Outlook client and a tracked event occurs, such as sending an email from the web client, it won’t be processed unless, or until, Outlook is open and connected on the user’s desktop.
  2. If you use Server-Side Sync, tracking activities in Outlook can take some time. You can adjust the sync interval in Settings -> Administration -> System Settings -> Outlook to as little as one minute. This may not be a problem, but if you are planning to use the “Convert To” or “Add Connection” functions in any meaningful fashion, you’re going to have to wait for the sync to occur before you can use those buttons. They even take away the Synchronize button! Your pilot users will undoubtedly yell “why is Convert To greyed out?!?” or “why is Add Connection greyed out??”
  3. Server-Side Sync, with CRM Online 2015 Update 1, can synchronize with Exchange based on e-mail folders.

Microsoft also says Server-Side Sync can improve Outlook performance, which makes sense, but I haven’t seen such a huge difference.

Fret Not: You can have the best of both worlds.

I’ve found that if I set outbound e-mail to process via Server-Side Sync, and inbound emails and other activities via the Outlook client, then items tracked in Outlook go up to the server instantaneously, enabling the Convert To and Add Connection buttons; you can also still send email from the mobile or web apps even if your laptop is closed in your bag.

You can set this globally in Settings -> Email Configuration -> Email Configuration Settings.

(Screenshot: Email Configuration settings in Microsoft Dynamics CRM)