“What’s the worst that could happen?”
A question we ask on quite a regular basis! It helps our project teams identify key risks inherent to any implementation and forms part of our risks, assumptions, issues and dependencies (RAID) log.
When we put this question to our customers, the answer is simple: ‘Data loss’.
Our customers are deeply attached to their data – rightly so, as it represents their time, effort and, in many cases, intellectual property. One of the realities of working with cloud technology is that, beyond basic tools such as undelete or the recycle bin, there aren’t many options for restoring data.
If you have an old-school on-premises computer system, you’re responsible for managing absolutely everything – computers, servers, automated backups, disaster recovery and so on.
Cloud technology removes most of these requirements – instead, they are handled by the cloud provider itself. No tapes, no server stacks, no backups running at midnight.
However, in passing control of this data to the cloud provider, users lose the ability to take physical backups of the system.
With cloud technologies, it is unrealistic to constantly preserve and back up data throughout regular development.
With this in mind, we work to minimise the risk of data loss. There are a number of basic measures we put in place before making significant changes to a production system.
- Ensure we understand what it is our customers want to achieve
- Ensure our clients understand the impacts (if any) of the work required to achieve this goal
- Make all changes in a protected system (a sandbox) which contains the same data as a live system but is completely separate from it
- Ensure our clients carry out extensive testing in the sandbox before any changes are made to the live (‘production’) environment
- If required, create an offline backup of key data (which can be costly and time-consuming)
- Replicate any changes we have made to the configuration of the sandbox into production
- Test and release
This is, of course, a simplified view – there are in fact many more steps in the process than these – but by following them, we can avoid significant data loss. 2cloudnine have completed over 100 implementations and major projects for customers on Salesforce.com with no data loss to date.
Advice for Clients
When making bulk changes to data, Salesforce.com provides four basic operations – insert, update, upsert and delete.
- Insert = create something new
- Update = update something existing
- Upsert = check if it exists; if it does, update it. If it doesn’t, create it.
- Delete = remove it
Note that there’s no undelete, no restore, no backup!
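The semantics of the four operations can be sketched with a toy in-memory store. This is illustrative Python only, not Salesforce code – the record structure, IDs and function names are invented for the example:

```python
# A toy in-memory "object store" illustrating the semantics of the four
# bulk operations. Record IDs and field names are invented for illustration.

records = {}   # record id -> dict of fields
next_id = 1    # simple auto-incrementing id counter

def insert(fields):
    """Insert = create something new; returns the new record's id."""
    global next_id
    rec_id = next_id
    next_id += 1
    records[rec_id] = dict(fields)
    return rec_id

def update(rec_id, fields):
    """Update = update something existing; raises KeyError if it's missing."""
    records[rec_id].update(fields)

def upsert(rec_id, fields):
    """Upsert = check if it exists; update it if so, otherwise create it."""
    if rec_id in records:
        records[rec_id].update(fields)
    else:
        records[rec_id] = dict(fields)
    return rec_id

def delete(rec_id):
    """Delete = remove it. Note: no undelete, no restore, no backup."""
    del records[rec_id]
```

Once `delete` runs, the record is simply gone from the store – which is exactly the point of the warning above.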
It’s up to the user or the consultant to make sure the data is backed up before it is modified, because anything the user does in a live (production) system happens immediately and irrevocably.
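A minimal sketch of that back-up step, assuming your records have already been exported as a list of Python dictionaries (the field names and the timestamped filename are illustrative, not a Salesforce API):

```python
import csv
import datetime

def backup_to_csv(records, path=None):
    """Snapshot a list of record dicts to a CSV file before any bulk change.

    records: list of dicts, one per record (e.g. exported from a report).
    path:    optional output filename; defaults to a timestamped name.
    Returns the path the backup was written to.
    """
    if not records:
        raise ValueError("nothing to back up - refusing to write an empty file")
    if path is None:
        stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
        path = f"backup-{stamp}.csv"
    # Use the first record's keys as the column headers.
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=sorted(records[0]))
        writer.writeheader()
        writer.writerows(records)
    return path
```

The key design point is simply ordering: the snapshot is written and verified before the bulk update runs, so there is always something to restore from.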
If you are a Salesforce.com or Jobscience user, there are tools available to manage bulk updates.
Our advice is simple: DON’T USE BULK UPDATE TOOLS IN A PRODUCTION SYSTEM.
Instead, get in contact with our support team if you intend to make changes to your data – we can assist in ensuring that you have backups of your data before making any changes.