Enterprise Mobility – Fulfil the Promise, Avoid the Pitfalls

We see the pattern time and again.  “Everyone” agrees that a new technology will transform business and that you must be part of it or risk being left behind.  Businesses caught up in the hype rush to implement optimistic and poorly thought out projects.  Something goes wrong, resulting in massive costs and reputational damage.  Finally, we take a more cautious and realistic approach to building the new technology into our business models, and the technology starts to meet its early promise.

Such is enterprise mobility.  The notebook, the smartphone, and broadband wireless are enabling technologies that let us break away from the office, and they have accelerated a transformation of how we think about the workplace.  Benefits of anywhere access to data and tools include a boost to productivity, improved customer service, and flexibility for employees.  The concept appears to be a clear win/win, with evangelists spruiking the undeniable benefits but often ignoring the security implications.  We are a long way down the road to mobile maturity, but we are not quite there yet.

Early mistakes were made, and the record shows it takes time for an industry to adapt and learn.  In 2006, millions of US health records were exposed from a stolen laptop, resulting in a class action that cost tens of millions on top of the privacy and identity theft issues.  Lesson learned?  Perhaps not: try googling breaches from lost and unencrypted notebooks and smartphones and you will find the same mistake made time and again.

A variety of risks and mistakes continue to be documented.  Just this month a Chinese firm admitted to installing hidden software that sends users’ text messages, call logs, contact lists, location history, and app data back to Chinese servers – software that may have been preinstalled on as many as 700 million phones!  What happens when such a phone is brought inside your corporate network as a BYOD device?

So how to reduce these risks?  Any solution must take into account the diverse range of devices, technologies, and levels of user awareness present across an organisation, as well as the trade-off between security and ease of access and use.

Attempting to implement a specific solution for each disparate device, scenario, and individual is prone to failure – akin to whack-a-mole.  Instead, a multilayered approach can work, with a fundamental focus on data, authorisation, and compliance rather than on the device or specific risks.  Apply broad strategies that cover unforeseen risks as well as known ones – make the system as intrinsically safe as practical.  Build a consistent, secure environment across devices and applications, and quarantine and protect that environment from unregulated parts of the system.

The most successful solutions will allow a company to maintain control of its data while not getting in the way of work.

Elements of a Mobile Security Strategy

To develop a robust mobile security strategy, consider a wide range of technologies and techniques, then pull them together to meet your security objectives and implement a consistent strategy.

Manage the Human Factor

The greatest vulnerability in any corporate security system is its people.  People want to get their job done, not fight with the tools and access they need to do that job.  Where security gets in the way, they will work around it and introduce new risks.

Staff will use weak passwords that are easy to remember.  They will click on random email attachments with no thought that they may carry a virus.  They will help the nice man, purportedly from Microsoft, remotely take over their PC to fix the “computer problems” he generously rang them about.  They will enter their credentials into a fake website, just because.  They will jailbreak their phone.  They will let little Jonny install a game that comes with a special payload of malware.  They will not do these things to harm their company, boss, or IT staff, but rather because their focus is on their work and because they don’t have the knowledge or awareness to know better.

People don’t like to feel needlessly constrained in what they can do with their tools, or even in which tools they are allowed to use – doubly so when they are using personal devices for work.  Security policies will be more effective if they take into account user expectations and behaviour.  Enforce password policies, but perhaps also support alternative, easier authorisation methods such as fingerprint access.  For file sharing, the standard corporate fileserver may not cut it for staff used to Dropbox or OneDrive, so look at cloud options that can be implemented securely.  Solicit requests from staff about current pain points and any tools or functions they feel are missing, and work out a way to help them – with security integrated.

Work with staff to meet their needs rather than try to dictate from on high what staff must use.

Source: Microsoft Enterprise Mobility and Security Blog


Redefine “The Workplace”

In the world of enterprise mobility, the “workplace” is now a collection of locations, devices, data, and communication channels.  Not all of these elements are under the direct control of the business, and the edges of the corporate environment are necessarily blurred.

Defining a mobile security environment therefore means defining and monitoring the flows and storage of information, identifying where boundaries are set, and controlling the movement of data across those boundaries.

Set and Enforce Mobility Security Policies

To limit risks of unauthorised access, a strict mobile security policy is essential.

The basics include enforcing a lock policy on devices, and device encryption.  You can also set compliance requirements, such as ensuring patches and antivirus are up to date, and check that the device is not jailbroken and has no risky software installed.

To implement such policies you need some control over the device, and that can cause issues with BYOD, where policies may conflict with personal use of the device or where enforcing compliance on the device may not be realistic.


Application Control

Application control aims to reduce the risk posed by security flaws in particular applications.  At a basic level, a whitelist or blacklist of approved applications and versions might be enforced alongside centralised provisioning and management.  More advanced methods that have emerged in recent years include security and management protocols baked into applications.  Again, where staff are using personal devices, enforcing application control can be a point of conflict.


Protect Data in Transit, Layer Security

Mobile devices may access corporate resources across a changing variety of network infrastructure including public and unsecured wireless hotspots.  Ensuring traffic that transits across such networks is secured by appropriate encryption protocols is essential.

Some small businesses allow remote users to log in to work machines directly with the Windows RDP protocol.  Don’t.  While RDP is generally secure, you only need one bug or weak password and you have a breach.  Require a VPN to carry your RDP traffic (remember CVE-2012-0002, which allowed RDP servers exposed to the internet to be compromised – you don’t want that).  A VPN may itself have bugs or other vulnerabilities, but two reasonably independent layers are much less likely to be penetrated than one.
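The value of layering is simple probability arithmetic: if the layers fail independently, an attacker must beat both, so the probabilities multiply.  A toy illustration (the 1% figures are purely hypothetical):

```python
# Illustrative only: if two independent layers (VPN + RDP) each have a
# hypothetical 1-in-100 chance of being compromised over some period, the
# chance of both failing together is the product of the two probabilities.
p_rdp = 0.01   # assumed chance RDP alone is breached (made-up figure)
p_vpn = 0.01   # assumed chance the VPN layer is breached (made-up figure)

p_single_layer = p_rdp
p_both_layers = p_rdp * p_vpn  # both independent layers must fail

print(f"RDP alone:    {p_single_layer:.4%}")   # 1.0000%
print(f"VPN then RDP: {p_both_layers:.4%}")    # 0.0100%
```

The real-world probabilities are unknowable, and the layers are never perfectly independent, but the multiplying effect is why layering works.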


In some environments, Choose Your Own Device (CYOD) rather than Bring Your Own Device is a popular trade-off: policy allows staff to choose from a wide range of acceptable devices that are owned by the company, rather than allowing an open slather approach.  This reduces the range of potential vulnerabilities, and keeping hardware ownership within the company reduces conflict over acceptable use of the device.

Protect Documents at the File Level

Rights Management technologies can be used to secure access to company documents by default, and to restrict movement of those documents outside a secured environment.  At a basic level, that means encrypting all documents and only unlocking them after appropriate authentication.  If a document is accidentally emailed, or a device holding documents is stolen, the documents remain inaccessible.  It also means that if authorisation is revoked for a user, they lose access to corporate information, even if that information is still on their personal devices.

Restrict Printing, Emailing, or Copy/Paste of Corporate information

Once documents are encrypted, decryption can be restricted to whitelisted applications, and the approved application can in turn restrict the ability to copy or print sensitive documents.

Encrypt Everything

Whole-device encryption is slowly becoming standard on smartphones (much to the highly publicised concern of some government authorities) and is a must to ensure data on a device cannot be read, even if an unauthorised person gains direct access to the device’s file storage.

Technology such as BitLocker has been available for some time and is underused on notebooks and desktops.  Trusted Platform Modules (TPMs) are now quite common on business-focused laptops and allow for simple access with BitLocker enabled on a notebook.

File-level encryption may be more appropriate where personal devices are in use, and to better protect documents that may be transmitted to other users or to remote file servers or cloud storage.  Using both technologies is reasonable and largely invisible to the user.

Use Multi Factor Authentication

Typical authentication requires knowledge of, or access to, a single authentication key, such as a password or a physical device.  The problem is that when that key is discovered by, or becomes accessible to, an unauthorised person, the attacker is straight in.

Two factor authentication requires access to two different categories of authentication keys, selected such that if one authentication method becomes exposed, it remains unlikely that the second method is also exposed so the attacker still cannot gain access.  For example, an online portal might be secured with a password but also requires access to a separate security fob that generates a changing one time password.  If the set password is exposed, an attacker still cannot log in without physical access to the security token.  For highly sensitive information, additional authentication requirements might be added.

The main drawback of multi-factor authentication is the additional time and nuisance of entering two or more authentication keys every time data is accessed.  Manage this by considering the value of the protected content and applying realistic policies to find a reasonable balance.  For example, when accessing data at an online portal from a particular device, policy may require the password on every access (or after a short timeout) but the changing security token only once per day, when access can be verified to come from a previously authorised device.
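As a concrete aside, the changing one-time passwords generated by security fobs and authenticator apps typically follow the TOTP standard (RFC 6238).  A minimal sketch using only the Python standard library (for illustration only – use a vetted library in production):

```python
import hmac
import struct
from hashlib import sha1

def totp(secret: bytes, for_time: int, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 time-based one-time password (SHA-1 variant)."""
    counter = for_time // step                      # 30-second time window
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" at Unix time 59
print(totp(b"12345678901234567890", 59))  # -> 287082
```

Because the code depends only on a shared secret and the current time window, server and token stay in sync without any network link between them.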

Push Notification for Microsoft Authenticator app on iOS



Device Access Control

Maintaining a registered list of approved devices (corporate- and personally-owned) allows access to be restricted to those devices, reducing the issues of an open slather approach.

Partitioning Personal and Corporate Data

When accessing corporate data and systems on personal devices, isolating corporate data from personal data and usage helps maintain privacy for the user and secures corporate data from unsanctioned access or copying.  Access to corporate data can then be restricted to approved applications, allowing a remote wipe of corporate data without touching personal data.

Use Data Analytics and Context – Conditional Access

Increasingly intelligent authorisation systems can detect and block unusual activity, and can be tailored to the complementary systems in use.

Fred might log into company cloud storage in the evening for an hour or two from his home internet connection, originating from an IP address in Brisbane.  He might access the same information the following day from a wireless hotspot at lunch, also in Brisbane.  An hour later, he tries to access the information from an IP address registered in Melbourne, on a different device.  That may raise a flag, and an advanced authorisation system might block that attempt and lock his account in case it is an attack using leaked credentials.
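A toy version of such an “impossible travel” rule can be sketched in a few lines (the coordinates, timestamps, and 900 km/h threshold are illustrative assumptions; real conditional-access systems weigh many more signals):

```python
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

@dataclass
class Login:
    when: datetime
    lat: float
    lon: float

def km_between(a: Login, b: Login) -> float:
    """Great-circle (haversine) distance between two login locations."""
    dlat, dlon = radians(b.lat - a.lat), radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))   # Earth radius ~6371 km

def suspicious(prev: Login, curr: Login, max_kmh: float = 900.0) -> bool:
    """Flag the new login if reaching it implies an impossible travel speed."""
    hours = (curr.when - prev.when).total_seconds() / 3600
    return hours > 0 and km_between(prev, curr) / hours > max_kmh

brisbane = Login(datetime(2016, 11, 22, 12, 0), -27.47, 153.03)
melbourne = Login(datetime(2016, 11, 22, 13, 0), -37.81, 144.96)
print(suspicious(brisbane, melbourne))  # Brisbane -> Melbourne (~1,370 km) in 1 h -> True
```

The same skeleton extends naturally to other signals – unrecognised device, unusual time of day, anonymising proxies – each adding weight to a risk score rather than a hard block.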


Use an Enterprise Mobility Solution

A range of enterprise mobility solutions are available from major IT corporates and are under rapid development.  A number of packages have reached a level of maturity and include many of the technologies discussed in this article along with excellent reporting tools and risk management systems.  They are worth considering as an excellent starting point and core component of your mobile strategy.

Enterprise mobility solutions can be assessed by features, including the ability to:

  • support a wide range of devices, environments, and applications;
  • detect threats based on known attacks and vulnerabilities, and on abnormal behaviour;
  • wipe all corporate data from a device when an employee leaves the organisation;
  • set policy restrictions – for example, restricting the ability to cut and paste content to unprotected files;
  • prevent access from devices or environments that do not comply with security policies, such as jailbroken phones, and lock or remove data on devices that become non-compliant;
  • provide a self-service portal for end users to enrol their own devices;
  • provide single sign-on so that, once authenticated, multiple applications and sites are accessible;
  • support bulk deployment tools to enrol devices, change rules, and install applications at scale.

Bringing it all Together

Enterprise mobile security requires wide-ranging integration of technologies, procedures, and policies, and is one of the toughest and most important systems to get right in your organisation.  It requires good knowledge of your business, but also of the technologies available.

My advice is to keep your eye on the big picture and continuously weigh risk against productivity, review the system’s effectiveness, and feed those reviews back into incremental improvements.  The more traditional rigid approach of ticking boxes and believing you are secure is a sure path to failure.

For smaller organisations, draw on the experience of external experts, but don’t buy into a prepackaged “standard” solution (there is no such thing).  Work with consultants to help them understand your business, and work with them to tailor the technology and policies to your needs.

Further Reading

Cyber Security Report

Pre-installed Backdoor On 700 Million Android Phones Sending Users’ Data To China

Why stolen laptops still cause data breaches, and what’s being done to stop them

Microsoft EMS Blog


Protect your Business and Family with OpenDNS

OpenDNS offers a suite of Internet filtering and security features targeted at levels from the home environment to corporates.  The service works by replacing the Internet’s standard address book system (DNS) with a custom service that lets you limit, track, and manage access to the Internet and helps protect your staff and family from malware, phishing, and inappropriate content.

OpenDNS was bought by Cisco in late 2015 and is now being integrated with Cisco’s extensive product and service portfolio.  With Cisco’s backing, the service will likely continue to grow in the corporate market, and at this stage it appears that Cisco will continue to support the home and small business markets.  Basic filtering and lookup services remain available at no cost, with more sophisticated services available for a fee.

The nature of the service means that security can be set up and managed at a single point where all new devices plugging into the network are automatically included, simplifying administration and reducing or removing the need to load custom software onto each device.

Many of our clients have migrated away from device specific software lockdown tools to OpenDNS in an effort to reduce the impact of device specific tools on performance, and to reduce administration costs.  The service is not perfect and is best used as part of a layered strategy, but it is a low impact, high value service that we recommend for most sites.

How can OpenDNS help your Family or Business?

The OpenDNS service can help you:

  • Protect families and employees from adult content on the internet by blocking these sites.
  • Improve productivity and conserve bandwidth by blocking time-wasting sites, or by blocking all sites except a group selected by you.
  • Reduce the risk of malware infection by blocking sites known to contain malicious software and exploits.
  • Block “phishing” sites that try to trick you into giving up your identity and login information.
  • Improve the responsiveness of web browsing where ISP DNS servers have poor response times.
  • Improve the confidentiality, integrity, and availability of the DNS service through improvements to standard DNS protocols, such as support for DNSCurve and DNSCrypt.
  • Gain visibility of your internet traffic with access to logs of sites visited by your users and sites that were blocked.

How it Works – Take control of the Domain Name Service (DNS)

When you enter the name of a web site like www.computeralliance.com.au into your browser, you are asking something like “Please show me the Computer Alliance web site”.  Easy you might think, but one little problem crops up: your browser and computer may have no idea where the Computer Alliance web site is located.  Oops.

The web “address” you type into a browser is more akin to a name than a street address.  A comparable request to a friend might be “Please head down to Computer Alliance and check out their stock and pricing for me”.  If your friend knows our physical address, all good; otherwise they have no idea how to reach us.

So what does the friend do?  Well, they ask!  You are the closest source who might know the address so they ask you first, and if you know, then problem solved, but if you don’t you might then look it up for them using a more comprehensive address book.

Internet addressing works in a similar way, using what we call the Domain Name System (DNS).  DNS is a protocol that lets computers look up domain names like www.computeralliance.com.au and find their numerical Internet address (an IP address) – the one more like the street address, which you must have in order to contact the site.

Similar to the idea of your friend finding the address by checking their own memory, and then asking you, DNS works on a hierarchy where your computer will check to see if it has recently looked up the address and already knows it, and if not will then ask its closest DNS service, probably your modem/router.  If that service doesn’t know either, it will know about a more comprehensive server further up the chain to ask.  Eventually the address will come back down the chain to be passed back to your browser.
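For the curious, the “question” your computer sends up the chain is a small binary packet.  This sketch (Python, standard library only) shows how a minimal DNS query for an A record is encoded, with the domain name split into length-prefixed labels:

```python
import struct

def build_dns_query(hostname: str, query_id: int = 0x1234) -> bytes:
    """Encode a minimal DNS query (A record, recursion desired) in wire format."""
    # 12-byte header: ID, flags (RD set), 1 question, no answer/authority/extra records
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # QNAME: each dot-separated label prefixed by its length, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

packet = build_dns_query("www.computeralliance.com.au")
print(len(packet))  # 12-byte header + 29-byte name + 4-byte footer -> 45
```

A resolver such as your router – or OpenDNS – receives this packet over UDP port 53 and answers with the matching IP address (or, in OpenDNS’s case, with a block page address if the site is filtered).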


The address of your local DNS server is normally assigned to your computers automatically and in most cases will point to a DNS service on your router or local server.  That server will then pass queries on to your internet provider’s DNS servers.

With OpenDNS as your DNS provider, replacing your ISP’s DNS servers, you still receive the full range of usual address lookups, but you also take some control over the responses to manage access to those addresses.  In this way you can make use of OpenDNS systems that identify and categorise sites by content and by security risk, and then automatically block sites that you don’t want accessed.  You can also select specific sites to block, or block everything and allow only specific sites.

How to Set up – Manually Set DNS

You can change which DNS server your devices use when they don’t already know an address.  One method is to change the DNS server entries on each individual client device, such as a PC, but it is normally easier and more secure to change only the local DNS server on your network that all other devices rely on for their initial DNS query.

Your router may use DHCP to automatically give out local addresses, and its own address as a DNS server, to all devices attached to the network (the standard setup for most environments).  Then all you need do is change the router’s external DNS settings so that it in turn looks to the OpenDNS servers for the queries it passes back to your devices.

Set DNS on Router

Limitations – Bypassing OpenDNS

You may see an evident flaw in this method.  If a user manually changes their computer’s DNS server settings to some alternative (for example, Google’s DNS servers) rather than the router, they may be able to bypass filtering.  This can be a problem, and it is one reason OpenDNS should be used as one element in a multilayered security strategy.

In environments that are heavily locked down using best-practice security measures, users (and any malware that takes control of a user account) will not be able to change local DNS settings, and attempts to query other servers may be blocked.  Even in those environments there may be ways to bypass OpenDNS, and a range of other systems can be used to detect or block those bypasses.

The potential for bypassing OpenDNS depends on the intent and knowledge of users and on the security setup of the environment.  In most environments the risk of inadvertent or intentional bypassing is minimal.

Setting up a Blocking Policy

The level of control available varies with the level of service chosen, but all levels allow some broad blocking options that are generally appropriate for most sites.  You can select a typical collection of settings with a single button, or take more control and manually set which categories to allow or disallow – even down to blocking specific sites.


Various reports on Internet activity are available to help you keep an eye on sites that users are attempting to access.  This can sometimes show up possible problems on your network where a certain application may continually attempt to access a particular site (such as malware trying to talk back to its servers or to infect other computers).


Blocking Malicious Sites

OpenDNS receives reports of malicious activity related to web sites and adds those sites to its blocking lists.  A default security policy is applied in addition to the general blocking policy specifically to deal with these threats.


If a user attempts to go to one of those sites – or, as is often the case, if a computer is directed there by malware – the request will be blocked, protecting the user from malware that may be present on the site.

In cases where OpenDNS gets it wrong, or where a user feels they have good reason to access a site, the interface allows users to submit a request to the administrator to review the site and approve access.


Attempts to access that site are also logged so you can monitor if there may be an ongoing problem of malware infection with one of your computers.


OpenDNS Active Directory Integration

Most business sites use Active Directory to manage users, devices, and their access permissions.  If your business has a server based environment, you probably use Active Directory.

By integrating OpenDNS with your Active Directory environment, you can set up group- and user-based settings for OpenDNS.  That might mean, for example, that senior managers have less restrictive access to various web sites, while staff with less need to access the Internet are more heavily locked down.

AD Integration requires two components:

1. Virtual Appliance

  • Runs in a virtualised server environment (Hyper-V, VMware)
  • Forwards local DNS queries to your existing DNS servers
  • Forwards external DNS queries, with non-sensitive metadata, to the OpenDNS service

2. Connector

  • Runs in your Active Directory environment
  • Securely communicates non-sensitive user and computer login info to the Virtual Appliances
  • Securely communicates non-sensitive user and computer group info to the OpenDNS service

The Process of Setting up OpenDNS

Given that the basic OpenDNS service is free and straightforward to set up at the router level, I usually suggest simply signing up and giving it a go.  If it causes issues for you, it’s quick to revert to your prior DNS settings.

Once you get a feel for its possibilities, it’s time to look at the more advanced options and see whether a paid plan is worthwhile.  To take it further in a business environment, you should also review your site security (usually a good idea anyway) and consider how OpenDNS is best integrated.

Much of this work can be done by tech savvy senior staff or home users, or for advanced setup options you might want to consult with our technical staff at ABT (Alliance Business Technologies).

Further Reading


Active Directory Integration


DNSCurve Protocol

Guide to Personal and Small Business Backups – Storage and Tools

This article will examine options for backup storage and tools, provide advice on how to choose between them, explain how they can be effectively employed, and give examples of common implementation pitfalls.

Prior articles have worked through the high level conceptual framework and technical concepts that relate to backup systems.


Backup Storage Infrastructure

Your backup system will copy all your important electronic stuff from one or more storage locations to some other storage location.  It is all about storage, so it naturally follows that the choice of storage used for backups has a major impact on the effectiveness of your system.

To help select the type of storage that best suits, you might review the desirable attributes of a backup system that I outlined in our first backup article and consider how selecting storage types will influence attributes of the final backup system.  As a reminder, the attributes were: simple, visible, automated, independent, timely, cost effective, and secure.

Hard Disk Drive (HDD) Storage

HDDs have been the mainstay of IT storage for decades.  The technology is slowly being replaced by SSDs, but HDDs are still the most common primary storage and one of the best options for backups.

HDDs allow random access and are fast, reliable, and cheap – they generally have much going for them.

Internal Hard Disk Drives

Internal HDDs are the drives built into PCs and devices.  There are a number of options for incorporating internal drives into your backup system, though for the most part they play an incomplete role.

Where your PC has a single internal HDD, it will be of limited use as a backup drive.  In general, you don’t want to send a backup to the same physical device as the source files, as that fails the test of independence – if the drive dies, you lose source and backup files at the same time.

There are minor exceptions to this rule.  You could maintain copies of older and deleted files on the drive to offer limited protection against accidentally deleting or overwriting files.  You could direct an image backup to the same drive in circumstances where external storage may not always be available but you want frequent, regular, automated snapshots (remember to exclude the image location or recursion will run you out of space!).  In that case you would move the files to external storage when availability allows.
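The recursion pitfall and its fix can be sketched in a few lines: copy files into a timestamped folder while explicitly excluding the backup destination from its own walk (Python; the folder layout and names are illustrative):

```python
import shutil
from datetime import datetime
from pathlib import Path

def snapshot(source: Path, backup_dir: Path) -> Path:
    """Copy source into a timestamped folder under backup_dir, skipping the
    backup folder itself so the copy never recurses into its own output."""
    dest = backup_dir / datetime.now().strftime("%Y%m%d-%H%M%S")

    def skip_backup_dir(folder, names):
        # Exclude the backup destination wherever the walk encounters it.
        return [n for n in names if (Path(folder) / n).resolve() == backup_dir.resolve()]

    shutil.copytree(source, dest, ignore=skip_backup_dir)
    return dest
```

A snapshot on the same drive only guards against accidental deletion or overwriting; move the snapshots to external storage when you can.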

Some PCs have more than one internal storage device; for example, you might use one fast HDD or an SSD for Windows and program files (for the speed) and a cheap, perhaps slower, mechanical HDD with large capacity for other files or as a dedicated backup drive.  A second drive adds options, and given the minimal cost, I suggest adding a second internal HDD to a PC specifically for use in backups.

With a second internal HDD you could create a system image backup of the primary HDD onto the second HDD; if the primary HDD fails, your backup will (hopefully) still be available on the second drive.  That design lets you schedule and run the image backup with certainty that the destination will be available (reliable automation), and provides some independence between source files and backup, but it is still not great.

I have seen clients use this technique alone as their backup system, only to lose their data when a power surge, virus, theft, or other event destroys the data on both drives.  The design is vulnerable to these significant risks because it still largely fails our test of independence, where the backup destination should be as far removed from the source files as possible.

To improve the independence of the destination drive, you might use an internal HDD in a computer on the same network rather than in the same machine, by setting up a network share.  That’s a little better in terms of independence, but again not ideal, as certain events can still destroy the data on both drives.

RAID stands for Redundant Array of Independent Disks and is another way to use multiple internal drives to reduce your risk of data loss.  One of the simplest types of RAID is the mirror.  A mirror uses two drives in an array, with the system automatically writing to both disks in real time.  In operation it looks like you are working with a single disk, but any time you save a file it is written to both disks.  If one disk fails, you won’t lose any data – in fact, the PC will keep working as normal.

There are many other types of RAID, some protecting against one or more drives failing, but also some where, if any drive fails, all your data is lost.  You can set up a RAID array on a PC, but I rarely suggest it, as its cost/benefit tends to be marginal against other options.  The technology is most commonly used in server systems and storage arrays built on hardware best suited to supporting RAID, and in those environments I consider RAID essential.

Never confuse RAID with a complete backup solution; I have come across spruikers who convince people that RAID is some magic technology that fully protects your data.  It is not.

The most common way to achieve a high level of independence for a backup system in a home or SME environment is to use multiple external HDDs.

External Hard Disk Drives

External HDDs are the bread and butter of home and SME backups.  They are awesome, and you should buy some!

There are two basic types.  The physically larger drives, often called desktop HDDs, are 3.5” in size and need a separate power supply (until USB-C drives become common).  The physically smaller drives, often called portable HDDs, are 2.5” in size and can be powered from USB ports.  For value for money and large capacity, the 3.5” drives are better, while the smaller drives are easier to cart around.  Either is fine for backups.

External drives come with various connectors, the most common being USB.  Be aware that USB 2 drives are limited to about 35 MB/s transfer rates by the interface.  Practically all current drives are USB 3.1, which allows faster transfers limited by the physical speed of the disk – typically 100+ MB/s, so backups take much less time.  In terms of our preferred characteristics, “timely” means go with USB 3, though an old USB 2 drive is fine as long as backups can still finish in a timely fashion.
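A back-of-envelope calculation shows what the interface means for timeliness (decimal units and sustained transfer rates assumed; real-world figures will vary):

```python
def backup_hours(size_gb: float, rate_mb_s: float) -> float:
    """Rough time to copy size_gb at a sustained rate of rate_mb_s (decimal units)."""
    return size_gb * 1000 / rate_mb_s / 3600

# A hypothetical 500 GB backup set:
print(f"USB 2 (~35 MB/s):  {backup_hours(500, 35):.1f} h")   # ~4.0 h
print(f"USB 3 (~100 MB/s): {backup_hours(500, 100):.1f} h")  # ~1.4 h
```

An overnight window absorbs either figure, but the difference matters if backups must run during working hours.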

You can get away with backing up to a single external drive, but your risk of losing data will be much higher than with two or more drives.  If you leave a single external HDD attached so it can take backups at any time, it may as well be an internal drive with the same vulnerabilities.  A single drive that you plug in only when backing up means you need to plug it in manually every time you want to back up.  If you get lazy and leave the single drive plugged in, you will find a virus, power spike, etc. can kill your backup and source files at the same time, and you are stuffed.  If you don’t get around to the hassle of plugging it in for a long time, then your PC HDD will die with all recent files lost.

Allocate at least two external drives to your backup system, and preferably three or more.  One can stay attached so scheduled backups work without thinking about it, and every now and then you should swap the attached drive with one stored elsewhere.  If you can afford a third drive or more, don’t swap them in a strictly sequential cycle: keep one drive that is swapped in much less frequently, so some backups are retained for a longer period and you reduce the risk that damaged files are overwritten across all backups.

USB Pen Drives

Small, light, reliable, and increasingly large and fast, USB pen drives can be used as an alternative to external HDDs for backups.  At time of writing, they tend to be slower and smaller at a given price compared to a HDD, but where your backup needs are modest, a pen drive may do the job nicely.  Use them the same way you would an external HDD.

Solid State Disk Drives (SSD)

SSDs are slowly replacing mechanical HDDs in computing devices for their speed and (potentially) reliability advantages.  At time of writing they are still expensive for bulk storage and not generally recommended for backup solutions.  There are rare exceptions where their raw speed, shortening the time needed to run a backup, makes them worthwhile, but for home and SME users, don’t buy SSDs for backups unless you have some special reason.

Network Attached Storage (NAS)

A NAS box is essentially a mini PC dedicated to file storage.  Most run a Linux OS with a web based GUI for setup and management and other features in the form of “apps”.  Their file storage can be accessed across your network, and even from outside your network.

You will normally buy a NAS without HDDs, then populate the unit with the size and brand of drives you need.  It is important to match the unit with drives listed on the manufacturer’s compatibility list to ensure no glitches with the operation of the unit.  Drive manufacturers now make HDDs specifically for NAS units, like the WD Red range, and drives designed for NAS devices are normally your best choice over cheaper options.

RAID is a standard feature of NAS units, where all files are stored on at least two physical disks.  With this protection, if a drive fails, you won’t lose any data.  Remember that when you add HDDs to a NAS with RAID redundancy enabled, you will lose some capacity to allow the data to be replicated.

For home use, you might store some less important bulky files on the NAS given you have some protection from the RAID alone (e.g. movies for media streaming), and additionally use the NAS as your primary backup device for image backups and/or file mirrors of your critical data.

If you buy a two bay NAS and add 2 x 4TB HDDs, you will only have total space of about 4TB available (a mirror); with three drives of 4TB you would have about 8TB, and similarly with 4 x 4TB about 12TB.  Also remember that drive manufacturers use a generous way to calculate capacity, so the NAS will report a little lower capacity than you might expect.
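The capacity arithmetic above can be sketched as a quick check (a simplification assuming identical drives and single-drive redundancy; the function name is just for illustration):

```python
# Rough usable capacity for common single-redundancy NAS layouts.
# Real units report a little less due to filesystem overhead and
# the decimal gigabytes used by drive manufacturers.

def usable_tb(num_drives: int, drive_tb: float) -> float:
    if num_drives < 2:
        raise ValueError("need at least two drives for redundancy")
    if num_drives == 2:
        return drive_tb                   # mirror: half the raw space
    return (num_drives - 1) * drive_tb    # RAID 5 style: one drive lost to parity

print(usable_tb(2, 4))   # 4  -> 2 x 4TB mirror
print(usable_tb(3, 4))   # 8
print(usable_tb(4, 4))   # 12
```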

Some brands, such as Netgear, allow you to add drives as you need them and have the available capacity automatically increased, without the need to wipe and recreate the array.  You can start with a 4 bay unit with 2 HDDs, then add a third and fourth as needed.

NAS units are attached to your network with a standard network cable and can be located in another room, or building, from your main devices.  They can be powered up and down remotely using wake on LAN commands.  They are excellent for automated backups and can act as a central backup location for all your devices.  For NAS units containing critical data, adding a small UPS and/or a surge protector is a good idea.

The main drawback of using a NAS as your only backup is while it is not physically attached to your devices, it is still prone to some of the events that could destroy its data and that of the originating device at the same time.  Power surges, theft, and some viruses are common risks.  One way around that issue is to rotate external HDDs attached to the NAS to take data from the NAS offsite.  You can also reduce the risks by using certain techniques, such as network passwords to prevent a virus that has access to your other PCs from accessing the device.

There are various limits and risks to using a NAS in your backup system, but a NAS can be a useful element in almost any backup system and I recommend one for most designs.


Cloud Storage

Let us all pause for a moment, and be thankful that our government vastly accelerated the rollout of massive bandwidth services by building an awesome NBN so we now lead the world in connectivity.  We can now easily work from home, backup everything into the cloud with a click, and offer our professional skills to a world market.

Oh, wait, sorry, delusion setting in again.  Happens when you spend too much time in this industry.  This is, after all, the Australian Government.  Let’s instead spend billions on roads so we can allow more people to move from A to B while producing nothing except pollution.  That’s productivity for you, Australia style.

Back to reality.  Cloud Storage refers to storage capacity you can access through the internet, normally third party storage but sometimes your own.  It’s a big deal nowadays as industry behemoths fight to get you on their cloud.  In theory, it’s a great way to back up your stuff.  Unfortunately, there is a big gotcha, the bottleneck that is your internet access.

You most likely have a low speed ADSL connection with upload speeds of under 100KB/s (uploads are much slower than downloads with ADSL).  That means it takes at least a few hours to upload a single gigabyte of data, while clogging up your internet connection so it’s barely usable for anything else.  Cloud backups are viable with slow connections, but limited and must be managed carefully.
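To put numbers on that bottleneck, here is a back-of-envelope calculation (the 100KB/s figure is the illustrative ADSL upload speed mentioned above):

```python
# How long a cloud upload takes on a slow uplink.

def upload_hours(size_gb: float, upload_kb_per_s: float) -> float:
    kilobytes = size_gb * 1_000_000          # decimal GB -> KB
    return kilobytes / upload_kb_per_s / 3600

print(round(upload_hours(1, 100), 1))    # 2.8 hours for a single gigabyte
print(round(upload_hours(50, 100)))      # 139 hours for a 50GB image backup
```

At those speeds a full image backup to the cloud is simply not practical, which is why targeted file backups make more sense on slow connections.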

So what is a cloud backup?  Nothing fancy, it just means that instead of using local storage, like an external HDD, you use Cloud Storage to save your stuff.  It’s a great idea because the instant you have completed the backup, those files are offsite, and depending on the service used, protected across multiple sites managed by professionals who are probably less likely to lose your stuff than you are!

If you are one of the lucky people who enjoy a 100Mb/s or more upload service, great, then you are probably able to backup everything to the cloud.  For the rest of us with a low bandwidth internet connection, cloud backups are best used in a targeted way.  In other words, back up your small and important files rather than everything and use more traditional means alongside cloud backups.

“The Cloud” is a relatively new phenomenon and service providers are still working out viable business models.  New services appear and disappear on a monthly basis.  For the most part, I suggest looking at services provided by the big guys such as Microsoft, Amazon, EMC, Google, and similar.  I expect most of the small players to be absorbed or disappear.

All we need to send backups to the cloud is available capacity.  It is not essential to sign up to a service that is specifically targeted at backups (though there are advantages with some designs).  The most common service available, and one you may already have access to without realising, is OneDrive, Microsoft’s cloud storage service.  If you have an Office 365 subscription, you will have access to a practically unlimited storage capacity on Microsoft’s servers that you can use to move files around, share stuff, and back stuff up.  OneDrive is not designed as a backup solution, but it can be used as part of a backup system: it sets up a folder on your PC and all files saved there are automatically uploaded to your cloud service.  Great for documents, not so viable for large files such as video or image snapshots.

Cloud storage services specifically developed for backups are also available and are more appropriate in a business environment.  Some, like Mozy (EMC), have been around a while, and most recently the other majors have been aggressively moving into this market, with Azure (Microsoft) and AWS offering various solutions.

Cloud backup probably should form part of your backup system, and in some cases can form the core of your design.

Other Storage Options

Tape drives were, for many years, the go to backup option for business.  Tapes were cheap and relatively reliable but needed to be written to in a linear way.  I won’t go into the details of tape drives, other than to simply say: don’t use them.  On a small scale, tape drives are messy and unreliable compared to other options.

SAN arrays are like NAS units but further up the food chain.  For medium and larger businesses, a SAN in your backup system makes sense, often including replication to offsite SANs at a datacentre or a site dedicated to disaster recovery.  If you need this sort of system, you probably have your own IT people who can set it up and manage it, and such systems are a bit beyond the scope of this article.

Others?  Yes there are even more options, but I think that about covers the most common options.

Backup and Archiving Longevity

I once found a decade old stack of floppy disks, my primary backup store during my Uni days.  I went through all of them to make copies of old documents and photos and was surprised to find almost half of them still had retrievable data.  At that age I expected them to all be dead (Verbatim, quality FDDs!).  There was nothing critical on them, but it’s an interesting lesson: you can’t afford to set and forget any data.

Remember when writable CDs emerged?  The media were reporting how this awesome optical technology would allow data to be archived for at least 100 years.  Only a few years later we had clients bringing in disks that were physically falling apart, unable to retrieve their “archived” data.

Will your data be there when you need it?  The failure rates of modern storage hardware are low, but physical stuff never lasts forever and a realistic lifespan can be difficult to predict.  It is likely that the external HDD that has been sitting in the cupboard for the last five years will power up when plugged in, but the longer you leave it, the more chance that the device or the data on it will be gone.

Keep any data you may need on newish devices, replicated across multiple devices.  When that old external HDD is just too small to fit all your backups, by all means keep it with that old set of data on it and chuck it in a cupboard, but copy at least the critical files to a new, larger device as well.  Cloud based storage may be an option for long term storage, but trusting others to look after your stuff also introduces risk, so ensure you manage that risk.  Hint: free is bad, and companies (especially start-ups) and the data they hold can disappear with little notice.

If you produce too much data to cost effectively maintain it all on new devices, give careful thought to how best to store “archived” data and weigh the risk of data loss against the cost of storage.

Backup (Software) Tools

There are a large number of software tools that you can use to build a backup system.  Do not fall into the trap of assuming that throwing money at a product will lead to a desirable result, though at the same time don’t rule out a high cost commercial option where it’s a good fit.

Google is your friend.  Look around online and check what the professionals use.  Making use of unpopular, emerging, or niche products is sometimes OK, but only adopt such tools where you see a substantive advantage in your environment.  In general, go with what everyone else uses to get a particular job done.  This will reduce your risk.

Consider the attributes of a backup system that I outlined in our first backup article and relate them to outcomes possible with the various tools: simple, visible, automated, independent, timely, cost effective, and secure.

Block Level (Image) Backup Tools

A block level backup tool is able to copy all data on a storage device, including open and system files, so you can be sure to get all your files stored on a partition or disk.

Windows has a basic imaging tool built in, though I’m not a fan of its interface and limited features.  There are some better free tools available, such as AOMEI Backupper, and a wide range of paid tools such as Acronis and ShadowProtect.  The free tools such as Backupper are adequate in many situations, though their features tend to be more limited and you may need to use supplementary tools for related functions such as retention and cleanup.

With any block level tool you intend to use, look for features including:

  • Support for full and incremental backups (and differential if you need it, but you probably don’t)
  • Automated scheduled backups
  • Options to encrypt and/or compress backups
  • A process to verify the condition of the backup archive (test whether files are damaged)
  • Fast mounting of image files
  • Replication (copy images to additional locations)
  • Retention (automatically clean up older backups to manage space on an age and/or size basis)
  • Ability to exclude specific files or folders – this is very handy, and not offered by all image tools, so pay particular attention to this one
  • Bare metal restore to different hardware
  • Support for “continuous” backups and related consolidation and retention (an advanced feature where frequent incrementals are merged into the archive and older files stripped out to manage space – excellent when uploading images offsite via the Internet)
  • Deduplication (useful for larger sites – e.g. if you back up a dozen Windows desktops, store only one copy of each system file instead of 12 to save a lot of space)
  • Central management (manage backups across multiple devices from a single interface – important for large sites)
  • Ability to mount and run a backup image in a VM

You probably don’t need all of these features, and some can be implemented outside the program.  For example, you could use Robocopy and the Windows Task Scheduler for replication.  Don’t just tick off features; go with a product that does what you need reliably.

There are many implementation tricks that may not be obvious.  A common issue: if you create images directly on an attached HDD and then swap the drive, you will at best end up with a different set of backups on each drive, which may be acceptable but is not ideal.  It is often better to create the archive in a location that’s always available, such as a NAS or internal HDD, then replicate the entire set to the external drives.  Think through what you need to achieve and make sure the tool you select can support those outcomes.

A block level backup tool should be used in nearly all backup systems.

File Level Backup Tools

A file level backup tool can be any software that lets you copy files.  The Windows File Explorer could be considered a file backup tool.  To be more useful, however, we need to look at additional features such as:

  • automation,
  • compression,
  • encryption,
  • incremental backups,
  • retention,
  • and others depending on your needs.

File level backups can be very simple, quite transparent, and very reliable.  This type of backup process is excellent to backup discrete files where you are not concerned about backing up locked files or keeping old versions and deleted files.  They can also be used as a replication tool to copy image backups.

My favourite file level backup tool is one you probably already have: a program called Robocopy that is built into Windows and accessed from the command line.  It’s quite a powerful utility that can be automated with a batch file and the Task Scheduler.  If you are not comfortable with the command line or creating *.bat files, a better option may be one of the many free graphical utilities, or a GUI shell for Robocopy.  Rather than list the many options, I suggest using Google to find recommendations from reputable sources (try searching for “Robocopy GUI”).  There are many other similar tools; FastCopy is another we occasionally find useful.

File level tools may be adequate for a very basic backup system, where you don’t care about backing up Windows, applications, or locked files, but for the most part they should be used alongside block level image backup tools.

Batch Files and the Task Scheduler

A batch file is a simple text file you can create in Notepad, saved with the .bat extension in place of .txt.  If you double click the file, Windows will read it one line at a time and run the command listed on each line in order.

A batch file can be used to automate file level backups or replication when you set it to run on a schedule with the Windows Task Scheduler.  For example, if you put a line like robocopy d:\MyFiles f:\MyFiles /e /purge in a batch file and ran it, you would mirror your files to a different drive.

If you get a bit more creative, you can use the technique for many useful functions, including backup systems that retain older and deleted files, and to manage the file retention of image backups.  You can also look at PowerShell and other scripting options to implement more advanced backup designs.
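As a sketch of what a simple retention scheme might look like in a script (shown in Python rather than batch purely for readability; the 30 day cut-off and the rule of always keeping the oldest archive are arbitrary example choices):

```python
# Minimal retention sketch: flag backups older than a cut-off for
# cleanup, but always keep the single oldest one as a long-term copy.
from datetime import date, timedelta

def to_delete(backup_dates, today, keep_days=30):
    cutoff = today - timedelta(days=keep_days)
    old = sorted(d for d in backup_dates if d < cutoff)
    return old[1:]   # keep the oldest, delete the rest of the old ones

backups = [date(2016, 9, 1), date(2016, 10, 15), date(2016, 11, 20)]
print(to_delete(backups, today=date(2016, 11, 25)))
# [datetime.date(2016, 10, 15)] - the Sept and recent Nov backups are kept
```

The same logic can be implemented in a batch file or PowerShell script; the point is that retention is simple enough to automate yourself when your imaging tool doesn’t provide it.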

Emergency Measures

Designing a backup system is all well and good, but if it’s too late or your backup system has failed, is there anything you can do?


Deleted files on a mechanical hard disk can often be retrieved by file recovery tools such as Recuva.  On an SSD you may be out of luck, as modern SSDs actively scrub old files shortly after they are deleted.

Copies of files may be located in places you would not expect, such as cached files and online services.

A failed mechanical HDD will usually contain data that can be retrieved.  Data recovery experts may be able to help, however costs are often in the $1000s.

If you look to have lost important files, leave the device powered down and ring us.

Bringing it all Together

This third part of our Guide to Personal and Small Business Backups outlined the storage and tools commonly used by backup systems.  Prior articles covered the high level conceptual framework around which you can build an efficacious backup system, and many of the technical concepts you need to develop and assess an appropriate backup design.

Our final article in this series will get to the nitty gritty by presenting and explaining solutions in detail as they relate to common home and small business environments.

USB goes Full Circle with USB-C

Universal Serial Bus (USB) was developed in the late 90s to replace a mess of slow PC connections with a high speed, one size fits all plug and data transfer protocol.  All newish devices had the plug, it was good, and there were no real decisions or gotchas to look out for when buying new devices.

A decade or two later, things are again a mess.  Incremental changes to USB have offered progressive technical improvements, but at the cost of modified plugs and sometimes questionable backwards compatibility.  Mobile devices using the small variants of USB ports, or worse still proprietary plugs, have diversified the cables in use and ensured X won’t plug into Y.  New connection standards such as DVI, FireWire, HDMI, and DisplayPort emerged to meet specific needs for what, at a basic level, is a demand for local bandwidth that could in theory be carried by one cable.

An effort is again underway to resist the evil forces of cable proliferation and focus on the holy grail, one cable to carry them all, resulting in USB-C (also called USB Type C).  This new type of connector is intended to replace the mishmash of USB plugs, and potentially other existing data plugs, to become the standard connector for attaching any peripheral to a computer or mobile device, and to provide enough electrical power to run many of them.

USB-C is now available.

Physical Characteristics of USB-C

USB-C on the left compared to Type A on the right


The original USB connector was a spade shaped plug on your PC end (Type-A plug), and a squarish plug on the peripheral end (Type-B Plug).  Smaller variants are available for mobile devices.

They would only plug in one way round.  When taking random stabs at plugging a USB cable into the back of a PC, under zero visibility, and while fighting off the tech eating spiders and poisonous dust clouds, it would be impossible to get the cable around the right way.

Problem solved.  The new USB Type-C is a small, reversible connector about the same size as a micro-USB plug but a little thicker, and pretty easy to get the cable into.

The USB-C plug was developed by the same industry group that defines and controls the USB data transfer protocols, but should not be confused with those protocols.  USB-C can carry USB 3.1 and other USB standards but is not limited to them.

USB Protocols and how USB-C fits in

The last major USB standard release was USB 3.1 in 2013, a relatively minor upgrade from USB 3.0, which itself dates back to 2008.  USB 3.1’s main claim to fame was doubling theoretical transfer speeds from 5Gb/s to 10Gb/s.

For reasons that I fail to understand, the body that handles USB recently decided to make a naming change:

  • USB 3.0 changed to USB 3.1 Gen 1 (5Gb/s) also known as SuperSpeed USB
  • USB 3.1 changed to USB 3.1 Gen 2 (10Gb/s) also known as SuperSpeed USB 10Gbps

For comparison’s sake, the older and still very common USB 2.0 standard supports 480Mb/s of raw data.

The new USB-C plug is able to carry data complying with either USB 3.1 protocol or USB 2 (with adapters), but is not limited to USB transfer protocols.  USB-C can handle other useful high bandwidth data streams including video and network traffic, and will be able to handle future high speed protocols. This is done through the use of alternate modes, where the system can hand over control of certain pins within the connector to carry traffic defined by protocols unrelated to USB.

Where faster Data Transfer Speeds Matter

USB transfer protocols indicate a theoretical maximum throughput in bits per second of raw data.  There are 8 bits to a byte, so for USB 2.0, 480Mb/s = 60MB/s, but we then need to reduce this figure to take into account “overhead”, essentially the part of the data stream used to manage the transfer rather than carry our information.  To test this, try transferring a large file from a USB 2 HDD to your PC and watch the speed you get.  It will be no higher than 35MB/s, not 60, limited by the USB 2 standard less its significant overhead.


USB 2 HDD transfer speed is capped by the USB Protocol
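The arithmetic behind those numbers is straightforward (the ~35MB/s observed figure is taken from the discussion above):

```python
# Raw USB 2.0 signalling rate vs. real-world file copy speed.

def raw_mb_per_s(mbits_per_s: float) -> float:
    return mbits_per_s / 8       # 8 bits per byte

theoretical = raw_mb_per_s(480)            # 60.0 MB/s for USB 2.0
observed = 35                              # typical USB 2 HDD copy speed
overhead_pct = round((1 - observed / theoretical) * 100)

print(theoretical, overhead_pct)           # 60.0 42 -> about 42% lost to overhead
```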

Your mouse, keyboard, and many other devices are just fine with USB 2.0, but HDDs and the occasional other USB device will work better with the faster protocols.

Modern Hard disk drives can transfer files at a much faster rate than 35MB/s so the upgrade to USB 3.0 can significantly increase transfer speeds.


USB 3 HDD transfer speed is capped by the maximum speed of the physical disk

Marketing people confuse the issue by often quoting the USB 3.0 or 3.1 maximum speed on the box of many supported products, particularly HDDs.  This is not even close to the transfer rates you will get with the usual mechanical drives, as they will be limited by the transfer rate of the drive, not the transfer protocol.

Only the fastest of the modern SSDs now surpass USB 3.0 speeds, though given external drives are still mainly mechanical owing to cost, USB 3.0 is fine for now (or USB 3.1 Gen 1, if you prefer the new terminology!).

So what is the advantage of USB-C when transferring files using USB?  For the time taken to transfer a file, not much (well, until you go with an external SSD), but have you ever been annoyed by that bulky AC adapter powering your external HDD?  Well, USB-C can do something about that.

USB-C Power Delivery

The original USB specification allowed for up to 150mA of power at 5V, just 0.75W of power.  That was fine to power a mouse or keyboard but was never going to be adequate to power external HDDs or other more demanding devices.

USB 2 gave us a useful increase to 2.5W and USB 3.0 to 4.5W, and a power charging option was introduced at 7.5W, though not simultaneous with data transfer.  Power was supplied only at 5V.  Small steps, but it allowed 2.5” HDDs to be powered off USB and mobile devices such as phones to be charged from USB ports, albeit over a long period of time.

With USB-C connectors we are finally seeing a major increase in overall power, and this time, at varying voltages.  The new plugs can support up to 5A at 5, 12, and 20V, potentially giving 20V x 5A = 100W of power.  Simultaneous data transfer is supported even at maximum power output.
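The power figures quoted through this section reduce to simple volts-times-amps arithmetic:

```python
# USB power budgets by generation: watts = volts x amps.
profiles = {
    "USB 1.x":  5 * 0.15,   # 150mA at 5V = 0.75W
    "USB 2.0":  5 * 0.5,    # 500mA at 5V = 2.5W
    "USB 3.0":  5 * 0.9,    # 900mA at 5V = 4.5W
    "USB-C PD": 20 * 5.0,   # 5A at 20V   = 100W
}
for name, watts in profiles.items():
    print(f"{name}: {watts}W")
```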


This sort of power delivery will allow for substantial devices to be powered from the same cable carrying data.  A desktop sized monitor needing just one cable from PC to screen is a good start, getting rid of AC adapters from 3.5” external HDDs is another use.

The new power spec allows power to run in either direction.  Get home, plug your notebook into your desktop monitor by USB-C, and let your monitor charge the notebook.  Goodbye power brick.

The management of power between devices has also seen major improvement.  Where power is limited and must be shared across multiple devices, the protocol allows the power supplied to vary as demands change.  For example, if a laptop is pulling a lot of power off a desktop USB port to charge, and the monitor, also powered off USB, is turned on, the power available to the laptop can be reduced to allow the monitor to run.  How well devices play together will be interesting to see!

Alternate Mode to support DisplayPort, Thunderbolt, and more

So getting back to that file transfer speed, and more generally, moving data fast, perhaps faster than USB 3.1 allows, does USB-C give us any other options beyond the 10Gb/s of USB 3.1?

Well, yes.  The USB-C specification allows data protocols outside the USB specification to be supported through an alternate mode system where the device can negotiate control of certain pins to be reconfigured to support data streams outside the USB data transfer specifications.

When no other demands are placed on the cable, USB 3.1 can use 4 lanes for USB Superspeed data transfer, but when using alternate mode some or all of these lanes can be used for other types of data signals.  These signals are not encapsulated within a USB signal, rather the signal is supported directly on the wires with the alternate mode protocol managing the signals allowed to be sent.  Depending on the configuration, various levels of USB transfer can coexist with these alternate protocols.

The most popular protocol supported by current USB-C implementations is DisplayPort, allowing a 4K screen to be driven at 60Hz using 2 lanes while still allowing USB SuperSpeed simultaneously.  Adapters are also available to attach an older monitor with DisplayPort or HDMI to a USB-C connector.  Not all USB-C ports support DisplayPort Alt Mode, so take care to check the specs of devices where you might want to use this feature.

Thunderbolt 3 (40Gb/s) is supported in some implementations of USB-C, carrying a wide range of data types including high resolution video, audio, and high speed, 10Gb/s peer to peer Ethernet.  That last one is particularly interesting as a modern version of a Laplink cable, cutting out the middle man and providing a device to device connection ten times faster than most LANs.

USB-C Availability

At time of writing, USB-C is available across a modest range of products, and many newly announced products are supporting the technology as they hit the market.

We are seeing niche products such as portable monitors powered through USB-C, but do not expect to see widespread peripheral support until the installed base of USB-C reaches critical mass, probably less than a year away.

To help build that installed base, we have motherboards (used for desktop computers), notebooks, and tablets with USB-C support, as well as expansion cards that enable USB-C on legacy desktops.

If buying now, USB-C is not an essential purchase, but for a machine you expect to have in use for at least the next few years, it’s a desirable feature and worth considering during the hunt for your next computer.

Guide to Uninterruptible Power Supplies (UPS)

The UPS is a misunderstood beast, so we have written this guide to clear up misconceptions and provide information to help you work out if you need one.

At its core, a UPS is a battery that sits between the mains power supply and your equipment.  When the power drops out, the battery is there to keep your gear running long enough to save your work and shut it down normally.

Avoiding an abrupt power cut to a PC or server is always a good thing.  A power cut can damage your files, OS, and occasionally hardware.  Complex systems and those under load, such as servers, are at particular risk.  For example, if a database write is in progress when the power is cut then your database will likely be damaged.

A UPS with a large battery capacity can also be used to keep critical systems running for hours.  Many people assume long uptime is the main reason to buy a UPS, but it is in fact a relatively uncommon, secondary purpose.  If you need extended runtime, a UPS can be configured for it, but you might also want to look at a generator in combination with a UPS.
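If you do want to estimate runtime, a very rough back-of-envelope model looks like this (the 150Wh battery, 100W load, and 80% inverter efficiency are illustrative assumptions, not figures from any particular unit):

```python
# Back-of-envelope UPS runtime estimate. Real runtime depends heavily
# on battery age, load profile, and inverter efficiency.

def runtime_minutes(battery_wh: float, load_watts: float, efficiency: float = 0.8) -> float:
    return battery_wh * efficiency / load_watts * 60

print(round(runtime_minutes(150, 100)))   # 72 minutes under these assumptions
```

Double the load and the runtime roughly halves, which is why a UPS sized for a graceful shutdown is much cheaper than one sized to ride out a long outage.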

Another benefit provided by most UPS units is built in power protection, from basic surge protection to more advanced power conditioning.  A UPS does not necessarily replace a good surge protector, and in fact using a surge protector behind a UPS may add a better quality of power protection compared to entry level UPS protection, and will at least protect the UPS itself.

Types of UPS Technology

There are three technologies typically used on consumer and SMB level UPS devices.

Standby UPS for Backup Power

Entry level UPS units use a “standby” design where power flows directly from the mains to your devices while mains power is available.  At the same time, the battery is charged from the mains.

When mains power is cut, battery power is automatically switched in through an inverter to power your devices.  It takes a small but appreciable period of time to switch to the battery, and that delay can cause issues for some sensitive equipment.  For example, some modern active PFC power supplies may pull excessive current for a moment as they come back online if the switch takes too long, overloading the UPS.

Most of these units include basic surge protection in the circuit to protect against surges and brownouts (not shown in the diagram).

The signal output tends to be a simulated sine wave, which is fine for most gear but can cause issues with some power supplies.

Line Interactive UPS with Power Conditioning

A line interactive UPS is often assumed to be an “online” system where the battery is always feeding power directly to your equipment.  Not true.

A line interactive UPS is similar to a standby UPS, but adds a component able to regulate voltage (an autotransformer providing automatic voltage regulation, or AVR).  When the mains voltage drifts a little above or below an acceptable level, this component adjusts the voltage sent to your equipment, so the battery does not need to be drained to handle it.  The UPS will usually click when this kicks in.  If the voltage varies too much, then the battery will take over as the power source, same as for a standby UPS.

The output signal from a line interactive UPS may be a simulated or pure sine wave output, depending on the model.

Line interactive UPSs are relatively cheap and are the best value type of UPS for many circumstances.

Online Double Conversion UPS: Clean and Continuous Power

Double conversion UPS models are the high end option in the consumer and SMB market.

This technology keeps the inverter online at all times, so a power interruption requires no switching, and the output power quality is consistently clean, with a true sine wave output at all times.

Because the battery and inverter are always online, additional stress is placed on these components and there is some loss of efficiency, with associated waste heat.  It’s not a serious problem, but it is a reason to ensure the UPS is well ventilated, and personally I prefer to stick a decent surge protector behind these units.

A more recent technology, the online delta conversion UPS, is very similar to a double conversion unit but adds components to improve efficiency and reduce the issues of the traditional online UPS.  It tends to be found only in large UPS units, over 5kVA, but is a worthwhile upgrade for higher end commercial demands.

Online double conversion systems are the best choice for critical systems with modest sized installs, but do cost significantly more than line interactive or standby UPS models.

What’s with Simulated vs Pure Sine Waves?

Mains AC power is supplied in the form of a sine wave that smoothly alternates between positive and negative values.  Recreating that form from the DC source at the other end of a UPS battery can be expensive.  Very basic equipment will produce a square wave, where the voltage jumps straight from positive to negative 240V.  Harsh.  Most cheapish UPSs will do a better job with a modified square wave form, also called a simulated sine wave, that is closeish to the real thing: not a smooth curve, just a series of steps.  Better units, including all online UPS units, will produce a nice smooth sine wave that works best.

(Waveforms, left to right: square wave, modified square wave, pure sine wave)
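To make the difference concrete, here is a quick Python sketch (my own illustration, not from any UPS vendor) comparing the three shapes.  All three are tuned to deliver 240V RMS, the figure that matters for power delivery; the difference is purely in how rough the waveform is:

```python
import math

def rms(samples):
    """Root-mean-square voltage of one cycle of samples."""
    return math.sqrt(sum(v * v for v in samples) / len(samples))

N = 1000  # samples per cycle

# Pure sine wave: a peak of 240 * sqrt(2) gives exactly 240V RMS.
peak = 240 * math.sqrt(2)
sine = [peak * math.sin(2 * math.pi * i / N) for i in range(N)]

# Square wave: slams straight between +240V and -240V.
square = [240 if i < N // 2 else -240 for i in range(N)]

# Modified square ("simulated sine"): steps through 0, +peak, 0, -peak,
# spending half the cycle at the peaks so the RMS also lands on 240V.
modified = [peak if N * 0.125 <= i < N * 0.375
            else -peak if N * 0.625 <= i < N * 0.875
            else 0 for i in range(N)]

for name, wave in [("sine", sine), ("square", square), ("modified", modified)]:
    print(name, round(rms(wave)))  # all three deliver 240
```

Same RMS voltage, very different shape, which is why a power supply can be happy with one and unhappy with another.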

Modern, efficient computer power supplies (anything with the 80+ certification) feature active power factor correction (PFC) and they do not always play well with simulated sine waves, let alone the even more horrible square waves.

If you find yourself with an active PFC power supply and anything less than a pure sine wave output, you may find your PC randomly rebooting and/or your power supply struggling and dropping efficiency when the UPS switches to battery.  Many power supplies will handle it, and as long as it works for the short time you are on battery you can get away with a cheaper unit, but it’s not ideal.

Note there are simulated sine waves, and then there are simulated sine waves.  In other words, some modern UPS units do a pretty good job of producing a nearly accurate simulated curve; others, not so much.  You tend to get what you pay for in that regard, and the closer to a pure sine wave, the better.

In general, try to get a UPS with a pure sine wave output, but if the cost is excessive for your use, a simulated sine wave from a good brand name UPS will likely be OK.

The much misunderstood “VA” vs Watts

Once you decide on the UPS technology you want, you need to match it up with the battery specifications.

UPS units are usually quoted with a “VA” specification, often in their model name.  VA stands for Volts x Amps, and if you remember your high school science you will know Volts x Amps = Watts, so you would reasonably expect that VA figure to represent the output capacity in watts.  But then you might notice that a UPS will also specify a maximum output in Watts, and the number will be different, and lower, than the VA number.  So what’s up with that?


The reason for the difference is that VA represents the apparent power, a theoretical maximum output.  The real power available in watts is the apparent power multiplied by the power factor of the load, and convention assumes a power factor of about 0.6, so the watt rating will be about 60% of the VA.  A 1000VA UPS should be rated at 600W.

Some manufacturers will quote a relatively high maximum wattage, maybe 70% or more of the VA.  There may be some justification for that in some cases, though I think more often it’s the marketing people sticking their nose in.  Personally, I prefer to look at the VA and work out the wattage as 60% of that.

That convention, and the variance in how it is applied, brings up a good point.  Do not overload your UPS.  A reasonable target is to run the UPS at about half the rated load, so if your equipment uses around 300W and may burst a bit past that, then go for a UPS rated to handle about 600W, or 600W / 0.6 = 1000VA.  It’s usually OK to run it up to that 60% figure, and potentially a little past that for short periods, but lower is safer and will give you better runtime.
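That sizing rule of thumb is easy to wrap in a couple of lines.  This sketch just encodes the convention described here (0.6 power factor, aim for about half load); it is not a manufacturer’s formula:

```python
def suggested_ups_va(load_watts, power_factor=0.6, target_load=0.5):
    """Suggest a UPS VA rating so the expected load sits at about half
    the UPS's rated wattage, using the conventional 0.6 power factor."""
    required_watts = load_watts / target_load   # e.g. 300W load -> 600W rating
    return required_watts / power_factor        # e.g. 600W -> 1000VA

print(round(suggested_ups_va(300)))  # matches the example above: 1000 (VA)
```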

Another issue many fail to consider is the behaviour of active PFC power supplies with switching UPS units (ie, all new PC power supplies nowadays).  Where the switchover to battery takes a while, the power supply may try to slurp up a lot of power to catch up in the moment that power is switched back on, and that can overload the UPS.  Compare the nominal rating of the power supply, plus a bit, against the UPS rating, and aim for a UPS rated a little higher than the power supply to avoid this problem.  For example, a 750W power supply that might normally use just 150W would probably be OK with a 1000VA line interactive UPS but might fail with a 500VA unit, even though 500VA can supply much more than 150W.

Take into account that battery performance and capacity will reduce over time, until the battery needs replacing.  Also consider that runtime improves as you reduce the load on the battery, so loading it up to near maximum may not give your equipment long enough to shut down.

UPS Runtime

The VA figure relates to output at a given point in time.  Many people assume that a high VA means a long runtime.  In fact that’s not true; the two specifications are not directly related.

It is quite possible for one 1000VA UPS model to run for, say, 10 minutes at half load, and a different 1000VA model to run for an hour at half load.  The figure that matters for a battery is how much energy, in watt hours, it can store.

Most UPS units are built to allow time for shutdown but not much more than that, so if you need a long runtime after the power goes out, you need to look at long runtime batteries, add additional batteries, or run a UPS at a fraction of its maximum output.

UPS specifications normally quote expected runtime at half load.  If you run at full load, roughly halve that time.  If you run at a quarter load, double it, and so on.  Also, again, consider that those numbers will reduce as the battery ages, and always factor in some extra buffer time.
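As a sketch of that halve/double rule (a rough rule of thumb only; real batteries fade faster at high loads than this suggests):

```python
def estimated_runtime(half_load_minutes, load_fraction):
    """Scale the quoted half-load runtime inversely with load.
    A rough rule of thumb - real batteries do worse at heavy loads."""
    return half_load_minutes * 0.5 / load_fraction

print(estimated_runtime(20, 1.0))   # full load: about 10 minutes
print(estimated_runtime(20, 0.25))  # quarter load: about 40 minutes
```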

Beware of outlets with weird plugs and no battery protection

Many entry level UPS units now have standard wall power point style plugs available to make it easier to plug in any gear you want.

Be aware that some outlets may be wired up only to surge protection and not battery backup.  For example, you might have a PC plugged into one outlet and a laser printer into the other for convenience.  You don’t want the laser printer to switch to battery; just make sure you know which outlet is which!

You can plug in a double adapter or power board into a UPS outlet, but take care not to overload it with excessive demand.  It is not the number of devices that matter, but their total load.

Older and larger UPS units will have a number of three prong power leads, or “jug leads” as I call them.  That’s the standard power lead you see running into your PC.  They are fine for plugging into a PC power supply, but can be a pain if you want to plug in other gear, and you may need to buy separate cables that convert to a wall plug.  Just make sure when you buy a UPS that you end up with the cables you need to plug in your gear.

Equipment you should never plug into a UPS

Some equipment can draw a heavy load of power for a short time, and these can damage the UPS and any other equipment attached.

Laser printers are notorious for it.  Don’t plug in anything that needs a big pulse of power that spikes beyond what your UPS can handle.

Set up Automatic Shutdown

For most buyers, the purpose of a UPS is to allow a clean shutdown of your equipment during a power outage.  To meet this goal, don’t forget to install and set up compatible UPS management software on your devices so they shut down automatically when unattended.

The UPS will typically be connected to your server or PC with a serial or USB cable, with the option of a network cable on higher end models.  Those cables are essential to let the UPS talk to your gear and let it know the power just went out.

I personally prefer a simple serial cable when available, at least in small scale deployments, as they tend to have fewer issues with drivers and avoid the situation with a network cable where, if the switch fails or loses power, the signal is lost (so if you are using a network cable, make sure the switch/s are plugged into a UPS!).  USB cables will do the job if that’s your only option, but in my experience they tend to be a little less reliable.

I personally don’t trust the automatic settings in the software.  Take a look at (and test) the runtime you are getting out of your UPS, and estimate how long your machine takes to shut down once the shutdown signal is sent.  The software will let you set a delay after the power goes out before shutting down your server/PC.  Hopefully during that time the power will come back; if not, the signal is sent and your PC will start shutting down.  You may also have the option of turning the UPS itself off before the battery is entirely drained; whether to use that function is a matter of design and personal choice.

Leave plenty of leeway.  If your UPS is estimating 30 mins of runtime and your critical server might take 5-10 mins to shut down once the signal is sent, you might send the shutdown signal after just 5 mins to give it plenty of time to shut down properly.  You do not want the battery to run out, or the UPS to turn off, during the shutdown process!
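That leeway calculation can be sketched as follows.  The 15 minute buffer is my own conservative choice, not a standard figure:

```python
def shutdown_signal_delay(runtime_min, worst_shutdown_min, buffer_min=15):
    """Minutes to wait after power loss before signalling shutdown,
    keeping the worst-case shutdown time plus a buffer in reserve.
    The 15 minute default buffer is an arbitrary conservative choice."""
    return max(0, runtime_min - worst_shutdown_min - buffer_min)

print(shutdown_signal_delay(30, 10))  # the example above: signal after 5 min
```

If the answer comes out at zero, the UPS simply cannot wait at all: send the shutdown signal immediately.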

Guide to Personal and Small Business Backups – Technical Concepts

This article builds on the high level conceptual framework introduced in our previous backup article.

I will explain technical concepts and related terminology to help you design a Backup System for use in business or home.


Backup Methods Classified by Viewpoint

We can categorise backup processes into:

  • Backups that target files and folders (file level backups)
  • Backups that target full disks or partitions (block level backups)

For the most part, these types of backups are distinct in technology used, limitations, risks, and most importantly, outcomes.  I will try to clarify the differences and why they should be taken into account when developing a backup system.

File level backups aim to make a copy of discrete files, such as your photos or documents.  This type of backup focuses on each file as a discrete unit of data.  When you copy your documents across to an external HDD, you are implementing a file level backup.

Block level backups are a little different and often misunderstood, resulting in backup designs that fail to protect your data. The aim of a block level backup is to copy all the blocks of data contained on a partition.  It is important to gain an understanding of how files are stored, and what we mean by a disk, partition, and a block in order to make appropriate decisions regarding your block level backup design, so let’s cover the basics.


A disk stores data in small chunks, which can be referred to as blocks.  When you save a file, it will be cut up into small pieces that will fit in the blocks available on the disk.  The locations of blocks used by a file are recorded in the index along with the name of the file.  When a file needs to be opened, the index is used to find out which blocks need to be read and stitched back together.  In the above image, you might consider that a big file has been split across all the blue blocks with other squares representing other blocks on the disk.

A “block”, in the context of a block level backup, refers to one of those same sized portions of your disk.  A block may, or may not, contain a piece of a file.  It may in fact be blank or contain old data from a file you have deleted (this is why deleted files can sometimes be retrieved).
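To make the block and index idea concrete, here is a toy Python model (entirely illustrative; real file systems are vastly more sophisticated, and all the names here are invented):

```python
BLOCK_SIZE = 4                    # bytes per block (tiny, for illustration)
disk = [None] * 10                # a "disk" of ten blocks
index = {}                        # the index: filename -> block locations

def save(name, data, free_blocks):
    """Cut the file into block-sized pieces and record where they land."""
    index[name] = []
    for i in range(0, len(data), BLOCK_SIZE):
        block_no = free_blocks.pop(0)
        disk[block_no] = data[i:i + BLOCK_SIZE]
        index[name].append(block_no)

def load(name):
    """Use the index to find the blocks and stitch the file back together."""
    return "".join(disk[b] for b in index[name])

save("note.txt", "hello block world", free_blocks=[3, 7, 0, 5, 1])
print(index["note.txt"])  # pieces scattered across the disk: [3, 7, 0, 5, 1]
print(load("note.txt"))   # stitched back together: hello block world
```

A block level backup copies `disk` wholesale; a file level backup walks `index` and copies one file at a time.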

You will encounter the term “partition” or “disk partition” when setting up a block level backup.  A partition is the name given to the part of a physical disk that is set up with its own index and set of blocks to contain files.  It is possible to set up many partitions on a single physical disk, but often each disk will have only one visible partition, so people tend to use the terms disk and partition interchangeably.  C:, for example, is a partition, but it also might be called a drive or disk.

The below image shows two physical disks and the partitions located on each.  Note the disk with C: also has two hidden partitions, the first of which helps with the boot process into the Windows installation located on C:.  The second disk has just the one partition, represented by D:.  The critical information is in the C: and D: partitions, but it is normally best to back up the lot to make system recovery easier.


Block level backups don’t worry about the content of the blocks; they just copy all the blocks of data, and in doing so happen to capture every file on the backed up partition.  Most often the backup will skip blocks that the index says have no current files in them, to save time and space.  While this method sounds more complex, it is pretty simple from the user’s perspective and is comprehensive, where file level backups are more prone to missing important data.

Block level backups introduce some issues that can result in backup failures, but also advantages, such as the ability to back up open files using another layer of technology.  That leads us into a funky new(ish) Microsoft technology called VSS.


How to Backup Open Files including System State – VSS

Have you ever tried copying a file that was in use?  In most cases, Windows won’t let you copy an open file; it is “locked” while open to prevent potential damage to the file.  When you start Windows or any program, many files will be opened and cannot be copied safely using basic file level methods.  For this reason, most file level backups will fail to back up open files, including program and Windows files.

Block level backups support a technology that allows blocks to be copied while their related files are in use.  This means they can back up your operating system and program files and allow a complete restore of your system state and personal files.

The technology used to back up open files is implemented on Windows systems by the Volume Shadow Copy Service (VSS, also called the Volume Snapshot Service).  A snapshot refers to the state of the disk at a moment in time, and the technology maintains access to the data as it was at that moment.  Once the snapshot is made, usually taking only moments, the system can continue to read and write files, so you can keep using the computer.  The system preserves any data that would otherwise be overwritten after the snapshot, so it remains accessible to the continuing backup process, and new data is tracked and excluded from the backup, preventing any inconsistency.  This is not a trivial process and things can go wrong.


VSS incorporates a number of parts, including “writers” specific to various programs, designed to ensure their disk writes are complete and consistent at the point the snapshot is taken.  For example, a database writer ensures all transactions that might be only partly written are completed before the snapshot, removing the risk of a corrupt database on restoring the backup.  Certain types of programs need a specific writer to support them, and if the writer fails, a “successful” backup can contain damaged files.  Sometimes, part of the VSS service itself can fail.

VSS is mature and works well in most cases, but you will still find that a VSS backup is more likely to fail than a simple file copy backup.

Some backup tools that focus on backing up particular files and folders can also use VSS.  This blurs my definitions, since these use technology similar to a block level backup but with the focus of a file level backup.  This hybrid approach is worthwhile where you want the advantages of a block level backup but want to exclude some files from the backup, such as large videos you don’t care about.  Keep in mind that VSS adds complexity, but it’s OK to use VSS only backups where you have the technology available and you are careful about verifying your backups.


Backup Archives Classified by Archive Type

Now we have an idea of the technology behind block level backups, I will go over the rudiments of backup archive types.  These concepts can apply to file or block level backups, but they tend to be more relevant to block level processes.

When you set up a new backup process, the first backup you typically perform is a “full” backup, including all data present on the source.  Subsequent backups can vary.  You can back up all your files each time, or copy just those files that have changed or are new.  There are more options than you might realise.  I will address the terminology that refers to these methods and outline typical use.


Backup Set/Archive:  A set of files that, when considered as a whole, includes a complete set of backed up files over a period of time.

A backup set created by a dedicated backup program will often comprise one file per backup, containing all files or data captured during that backup.  A backup set will then normally contain a number of these files over time, but they won’t look like the original files.  It is important that you check these sets and ensure they actually contain your files.  Don’t just assume all your stuff is in there.  Size is a good hint: if they are far smaller than your files, something is wrong.

If you have set up a simple file level backup, the archive set might be contained in dedicated container files, such as zip files, or might be a simple mirror of the original files.  I like methods that result in a simple mirror of your files for home backups, as they let you quickly check what is in your backup set and are less prone to mistakes.

Full Backup:   Takes a complete, new copy of all files or data on your source and adds them to your destination storage.

A word of caution here: a “Full Backup” does not mean you have backed up all your data.  It depends on what you have selected for the process to back up.  Make sure you know where your data is before setting up your backup system, and do not trust programs that try to select your data for you; they may miss important files.  This comes back to the concepts in my first article: make sure you have visibility concerning your backups.


Incremental Backup: Backs up new and changed files since the last backup.

An incremental backup will normally be much smaller than the full backup, and commensurately faster to complete.  Using incremental backups is recommended where you add or change relatively few files over time.

When you make an incremental backup, it depends on any prior incremental backups as well as the original full backup, so if any of the files in the chain are damaged or lost, you will lose data.  In theory you could take one full backup and then nothing but incrementals for years.  Don’t: create a new full every now and then.

As a safety precaution, if a backup program tries to create a new incremental backup and can’t find the dependent full backup, it will normally try to create a new full backup.


Differential Backup:  Like an incremental backup, but backs up all new and changed files since the last full backup.

Differentials are less commonly used than incrementals.  They play a role where you have relatively large incremental backups, helping to manage space by letting you delete some older incremental backups without needing a new full backup.
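The dependency rules for full, incremental, and differential backups can be sketched in a few lines of Python (a simplified model of the behaviour described above, not any particular product’s logic):

```python
def needed_to_restore(schedule, day):
    """Return the backups (by index) that must all be intact to restore `day`."""
    needed = {day}
    i = day
    while schedule[i] != "full":
        if schedule[i] == "diff":
            # a differential reaches straight back to the most recent full
            i = max(j for j in range(i) if schedule[j] == "full")
        else:
            i -= 1  # an incremental depends on whatever came just before it
        needed.add(i)
    return sorted(needed)

week = ["full", "inc", "inc", "diff", "inc"]
print(needed_to_restore(week, 2))  # [0, 1, 2] - the whole incremental chain
print(needed_to_restore(week, 3))  # [0, 3]   - differential skips the incrementals
```

Note how the differential on day 3 breaks the chain: once it exists, the incrementals from days 1 and 2 are no longer needed to restore later days.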


Continuous Backup: A misleading term that normally refers to a chain of frequent incremental backups that are later stitched into a single backup archive.

Continuous backups are a more advanced function only available on business grade backup solutions.  Incremental backups are incorporated into the original full backup by a cleanup process, and the oldest data may be stripped out to keep the size under control.

Continuous backups add complexity with the consolidation and cleanup process, but they have significant advantages: they avoid the load placed on systems by running repeated full backups, and they are ideal for offsite backups, where small incremental backups can be transferred via the internet to give you an immediate offsite copy.

Advanced systems can go one step further and use the backup image to mount an instance of backed up systems running as a virtual machine in the cloud.  Great for continuity and disaster recovery.

Common Backup Settings and Options

Once you have decided on the type of backup archive, or more likely a combination of archive types, you need to determine how the process will operate, and when.


Backup Schedule: Set an automatic schedule for your backups

A backup schedule usually involves a combination of archive types set to appropriate frequencies.  It is important to schedule backups at times when your backup destination will be available and the computer will be on.  If you miss an automated backup, you can always trigger a manual one as needed to cover it.

There are many different interfaces used in backup programs and it is usually worth looking at the advanced or custom options to ensure your schedule is set correctly, rather than going with default settings.

A common schedule would be a daily incremental backup, with a new full backup about every month or three.


Retention and Cleanup:  Manage your backup archives to remove old backups in order to maintain space for new backups.

It is very important to consider how long you need access to backed up files.  For example, if you delete a file today, or perhaps a file is damaged and you may not notice, how many days or months do you want to keep the old version in the backup archive?  Forever is great, except you need to consider how much space that requires!

You should also consider possible problems or damage that might impact your backups.  When operating with full backups, it’s best to keep the old backup set until after a new one has been created, just in case the new one fails and you are left with no backups.  You can bet that’s when your HDD will die.

Given that a typical backup system might involve an infrequent, very large full backup and more frequent, smaller incremental backups, carefully considering your retention plan can save a lot of space.  Below is an example of how you might set retention (I suggest more time between full backups than this example; a month is probably reasonable, but it depends on your situation).


In the above example, a full backup is run on Mondays, with all other days set to incremental backups.  Disk space is limited on the backup hard disk, and let’s assume it can’t fit more than two full backups and some incrementals.  With the retention period set to six days, backups will sometimes be kept for more than six days, where backups within the six day period are dependent on older incremental or full backups.

In the above example we have 12 days of backups stored and two full backups.  If the system deleted all backups from before Sunday, then the Sunday backup would be useless.  The system will (hopefully!) be smart enough not to.  At this point, the backup disk will be near full, with inadequate space available to create a new full backup, but consider what happens just before the third full backup is due.


The idea above is that once the oldest incremental backup is no longer within the retention period, we can clean up (delete) the whole of the oldest backup set in one go.

In this way the old set is kept as long as possible, but is deleted before the next full is due, so the backup program does not run out of space on the following Monday.

See any possible issues with this retention? Any mistakes?

There are a number you should consider.  Setting a tight schedule like this may not work as expected.  How does the program interpret 6 day retention?  Is it inclusive or exclusive when it counts back?  What happens if you set it to 5 or 7 days?  What happens if the cleanup task runs before, or after, the backup task on a given day?  (That last one is particularly important and a common mistake.)

You must verify that the system works as planned by manually checking that backups clean up the way you intend, on the days you expect.  Failure to verify your system will inevitably result in a flaw you may fail to notice, leaving you vulnerable.
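You can reason about these retention questions with a small simulation.  The rules below are my reading of the example above (inclusive day counting, whole dependency chains kept); a real product may count differently, which is exactly why you test:

```python
def removable(backups, retention_days, today):
    """Indices of backups safe to delete: past retention AND not needed
    by any backup still inside the retention window."""
    keep = set()
    for i, b in enumerate(backups):
        if today - b["day"] <= retention_days:    # still within retention
            keep.add(i)
            j = i
            while backups[j]["type"] != "full":   # keep its whole chain
                j -= 1
                keep.add(j)
    return [i for i in range(len(backups)) if i not in keep]

# Fulls on Mondays (days 0 and 7), incrementals every other day.
sets = [{"day": d, "type": "full" if d % 7 == 0 else "inc"} for d in range(13)]

print(removable(sets, 6, 12))  # day 6 still needs the day-0 full: nothing removable
print(removable(sets, 6, 13))  # window now starts at the day-7 full: old set goes at once
```

Note the behaviour the article describes: nothing is deleted while any in-retention backup still depends on the old full, then the whole old set becomes removable in one go.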


Compression: A mathematical process to reduce the space used by your backups.

When setting up the most basic file level backup, you probably won’t use compression, but every other backup will typically compress your files to save space.  This is a good idea and you normally want to go with default settings.

Most photos and videos are already compressed as part of their file format, and additional compression won’t help.  For files that are stored inefficiently, where the information content is much less than the file size, various compression schemes can save a tremendous amount of space.
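A quick demonstration using Python’s built-in zlib module shows both cases: repetitive document-style data shrinks dramatically, while pseudo-random data (a stand-in for already-compressed photos and video) doesn’t shrink at all:

```python
import random
import zlib

# Highly repetitive "document" data compresses extremely well.
document = b"quarterly report, much the same as last quarter. " * 1000

# Seeded random bytes stand in for already-compressed media files.
random.seed(42)
media = bytes(random.randrange(256) for _ in range(20000))

print(len(document), "->", len(zlib.compress(document)))  # huge saving
print(len(media), "->", len(zlib.compress(media)))        # no saving at all
```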


Encryption:  A mathematical process based around a password that scrambles the file so its information is not available unless the password is used as the key to unscramble the file.

Modern encryption cannot be broken as long as a secure and appropriate algorithm and password is used.  Passwords like “abc123” can easily be guessed or “brute forced” but passwords like “Iam@aPass66^49IHate!!TypingsTuff” are not going to be broken unless the attacker can find the password through other means.
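Some back-of-envelope arithmetic shows why length and character variety matter so much.  The guessing rate here is an assumed figure for a well-resourced attacker, purely for illustration:

```python
def brute_force_years(charset_size, length, guesses_per_second=1e12):
    """Worst-case time to try every combination, in years.
    The default rate of a trillion guesses/second is an assumption."""
    combinations = charset_size ** length
    seconds = combinations / guesses_per_second
    return seconds / (60 * 60 * 24 * 365)

print(f"{brute_force_years(36, 6):.1e}")   # 'abc123' style: gone in a blink
print(f"{brute_force_years(80, 30):.1e}")  # long mixed password: effectively forever
```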

Encryption is dangerous!  If you lose the password, your backups are useless.  If part of the file is damaged, you probably won’t be able to get back any of it.  It adds another layer of things that can go wrong.  These risks are relatively small, so encryption is a good idea where your backups contain sensitive information, but if it’s just a backup of the family photo album, I suggest you don’t encrypt.


VSS:  A technology that is related to block level backups and allows files to be backed up while open and in use

You should normally enable VSS; however, if you find errors with VSS causing backups to fail, it is OK to turn it off in some situations.  Make sure you understand what you lose if you turn off VSS, eg database backups may fail.


Intelligent Sector Backup:  You may see this idea under a number of terms for partition and full disk backups.  The option prevents blocks containing deleted files or blank space from being backed up, and so saves a lot of space.  You normally want this on.


Archive splitting:  Many backup programs can split up backup archives into smaller files.

This was traditionally used where backups were split across limited disk media such as CD-ROMs, and it is not usually relevant where we are backing up to external HDDs, NAS boxes, or other storage with plenty of space.


Notifications:  Most backup programs will send you an email on success or on fail of a backup process.

It is best to have the program send you a message on each backup.  You will find them annoying and you will just delete them; that’s OK, at least you will notice after a while if the messages stop.  A failure message is handy and you are more likely to notice it, but understand that the program can always fail in ways that mean you never get that message.

Do not rely on fail messages or assume their lack means your backups are running.  Manually verify backups from time to time.


So, when do we get to the nitty gritty?

Sorry.  We are getting there!

In the next article I will outline the hardware and common tools that may form part of your backup system, then in the final article I will go through the nitty gritty and some examples of home and small business backup.

How to Choose Surge Protection

Every year, as Christmas approaches, we see an influx of PCs, modems, and other equipment killed by power surges.  It is that time of year again, so to head off some of the issues, I thought a timely reminder was in order.

When a burst of energy is dumped into the grid, a surge results and the voltage at your power point will jump up above the normal 240V.  When the voltage moves above the level that your equipment is designed to handle, damage results. 

Small surges cause cumulative damage to electronics and you won’t notice it happening.  When your computer perhaps reboots, or later just dies, you won’t know that your mains power is the cause. The most obvious impact of surges is when you get a big surge, or spike, and that may immediately kill your equipment.

A surge is just one of a range of power issues you will see described with terms like spike (same thing, but shorter duration), sag (voltage drop), transient fluctuation, interruption, line noise, and others. These are all situations where supplied power moves outside the range of “normal” and is generally called “dirty” power.

What causes surges?

The most obvious source in Brisbane is lightning strikes.  Lightning is a giant spark, triggered when the potential between two locations, normally charged clouds and the earth, grows large enough for the spark to arc across the air.  Air is a poor conductor and the lightning will travel along the easiest path, so as it reaches the ground, it will jump to any tall structure that is more conductive than the air, and then follow the path of least resistance to earth.


Lightning does not have to strike a building, or even a nearby power pole, to cause damage.  When lightning strikes, mutual induction results in a boost of up to thousands of volts in nearby cables, including phone and cable internet wiring.  Induction can result in a surge many kilometres from the strike.

The most common damage caused by lightning originates from distant strikes and results in cumulative damage through moderate surges.  As strikes get closer, damage increases, to the point that a surge might take out your PC with a single hit.  Surge protectors can still help at that level.  A strike that is very close, say on the power pole out the front, will create such a huge surge that nothing short of unplugging your power and other cables will protect you.

If a surge protector is a nice little spillway alongside a small and well behaved creek, maybe diverting a bit of water when the creek’s flow gets a bit excitable, then a nearby lightning strike is a tidal wave smashing over the creek and spillway, drowning the whole region.  It will overwhelm everything on its way to your equipment.  This is why you should unplug before a storm!

Other sources of power surges and related issues include utility causes, such as switching generators or capacitors on and off, and local issues, such as the use of heavy equipment, or even household equipment like refrigerators, air conditioners, or fluro lights.  Ever hear the click through your Hi Fi when the fridge turns on and off?  Yeah, that’s not good!

How surges reach your gear

We see a lot of people who conscientiously unplug their equipment every storm, only to find their computer and modem not working the next day.

In addition to mains power, any other cable coming into the house can carry a charge.  Phone lines, Foxtel, your roof aerial, and cable internet connections can all deliver a surge to attached equipment.

In extreme cases the surge may pass through a chain of attached electronics.  A nearby lightning strike might fry your modem through the phone line, then run down the network cable to your PCs and kill them as well.

We see this sort of issue as burned traces and components on the mainboard, originating from the network port.

When it wasn’t a lightning strike

Cumulative damage can occur when power fluctuations are not severe but are still outside the design range of the electronics.

If these types of issues happen frequently, they will cause ongoing damage to your system components until they fail.  This type of damage is called “electronic rust” and you can see the result through a microscope.

Many failed parts returned to us have not failed through normal wear and tear, but rather through excessive wear from dirty power.

Surge Protection Standards

There are some near-worthless products on the market labelled as surge protectors.  While we are somewhat protected from false product claims in Australia, it is still very easy to massage figures for surge protectors.  The best way to compare products is to stick with trusted brands and to look at specifications that reference recognised measurement standards.

The most commonly used standards for surge protectors are those developed by Underwriters Laboratories (UL) in the US, specifically UL1449.  This standard specifies the waveforms to be used in testing, defines terminology and test procedures, and categorises the types of protectors.  It's a useful reference when comparing surge protector specifications, as otherwise the same unit might quote figures that vary widely depending on how the tests were designed.  Better quality surge protectors tend to quote this standard in their specifications.

Other standards are published by the Institute of Electrical and Electronics Engineers (IEEE) and other professional bodies around the world.

It is unwise to trust specifications that don’t refer to a respected standard.  Don’t take unrealistic numbers on the box at face value!

How Surge Protectors Work

There are three basic types of surge protection: SPDs (surge protective devices), line conditioning/filtering units, and data/signal line devices.  Each provides a different type of protection, but what they have in common is that their job is to manage dirty energy.  Understand that they cannot create or destroy energy; they can only work with what comes out of the plug by modifying, diverting, or dissipating it.

Domestic surge protectors incorporate a sacrificial component, usually a metal oxide varistor (MOV), that burns away to dissipate excess energy as heat.  When the voltage rises too high, current is diverted through the MOV to ground.  The energy consumed by the MOV lets the main line return to normal voltage for your gear, but at the same time that energy burns away the MOV.  Once the sacrificial MOV is used up, the unit can no longer reduce the voltage to your gear and stops working as a surge protector, though it will probably keep working as a power board.

Some surge protectors incorporate a fuse, so that if the MOV can't handle the surge, the fuse will burn out on the line to your equipment, cutting power instead of letting the surge through.  This only works once, and afterwards the surge protector will stop working even as a power board, so fuses are not common in domestic protectors.

More advanced surge protectors incorporate components designed to massage the power signal into a clean form to keep electronic gear happy.  There are many ways to do this, and the most effective can be quite expensive to build.  Fortunately, while this feature is useful, it is less important for most household electronics than basic surge protection.  It can be of significant benefit to some equipment, but buy a high quality power supply for your PC and you can do without it in a typical Brisbane house.


Specifications that Matter

Energy Absorption Rating:  An indication of how much energy the unit can absorb before it stops working, measured in Joules.  This number represents a consumable, in the form of a metal oxide, that is used up by many small surges or potentially by one big surge.  The bigger the number, the longer the board will last and the bigger the maximum surge it can handle.

It is important to read the fine print and check the rating based on the UL1449 standard, as advertised numbers are normally much higher than the standards-based numbers and are not useful when comparing products.  You want to see a rating of over 1000 Joules based on UL1449.

Indicator Lights and Fuses:  When the sacrificial component used to dissipate energy is gone, in other words when the energy absorbed has exceeded the unit's rating, the protector no longer removes surges but may keep working as a power board.  There is no obvious way to tell that surge protection has failed, so some manufacturers add an indicator light to show when the surge protector needs replacing.

A surge protector may also or alternatively incorporate a fuse designed to burn out when a surge comes along that the sacrificial component can’t handle.  If the fuse goes, the board will stop working entirely.

Clamping Voltage:  The voltage at which the protector kicks in.  If it doesn't kick in until the voltage is too high, the surge may damage your gear before the protector starts working.  A number to aim for in Australia is 275V (mains power fluctuates around 240V).  Cheap units tend to clamp at 400V or higher.  Note that the lower the clamping voltage, the more energy is diverted to the sacrificial component over time, so the protector will tend to wear out faster; but much better that the protector wears out than the electronics it is protecting!

Response Time:  Indicates how long the protector needs to start working after the voltage goes into the red zone.  If it is too slow, your gear is damaged before it kicks in.  A good quality protector might have a response time of 1 nanosecond or less.  Cheaper units tend to be slower and may allow significant damage to occur before blocking the surge.  Don't confuse detection time with response time: detection doesn't matter, response matters.

Maximum Transient Spike:  How much current the device can handle when a large burst of energy comes through, such as with a nearby lightning strike.  Again, look for the UL1449-rated value, and you want to see big numbers; above 30,000 amps based on UL1449 testing is good.

Power Filtering / Line Conditioning:  Aims to provide clean AC power by reducing high and low voltage electrical line noise.  There are various ways to design filters, and the specs here can be misleading.  Normally more components and more cascading circuits are a good thing, and active tracking is a premium feature to look for.  Filtering is found only on high end models.  If it is cheap and says it's a filter, it is probably not a very good filter.

Circuit Isolation:  Some models in a power board configuration provide isolated circuits for separate banks of plugs.  Frequency isolation is less effective than circuit isolation.  This feature is handy when you plug electrically noisy equipment into one bank so it doesn't interfere with equipment in the other banks.  Particularly useful for Hi-Fi gear.

RJ45 Protection:  This means there is a socket for a network cable, which can stop a surge getting through from your modem/router to the PC it is connected to.

AV/TV and Cable TV Protection:   This provides a pass-through to handle surges through your aerial and/or Cable TV Coaxial cables.

Insurance:  Most better brands back their protectors with insurance, where they will pay damages if a surge gets through one of their protectors.  In fact, a close lightning strike has so much power behind it that it can punch through a normal protector, and in most cases I have heard of, the quality manufacturers still pay up.  Insurance is a nice bonus.

Warranty:  Protectors wear out over time as they intercept surges.  The boards I have tend to last years, but that's with pretty good normal power and high end surge protectors.  Other staff here tell me they get 1-2 years in more outlying areas where the power is not great.  Again, the quality manufacturers will tend to replace the product even when it stops because its capacity has been exceeded (I know Thor will; I don't have personal experience claiming with other brands).

When to use a Surge Protector?

From the Insurance Council of Australia:  "It is advisable to use surge protection units, designed to minimise the effects of power surges, on all 'big ticket' items in the home including the fridge, television, stereo and computers."

In my view, where surge protection helps best is with sensitive electronics such as modem/routers, PCs, Hi-Fi gear and so forth.  Sensitive electronics is not just limited to what you would expect nowadays…

The last washing machine I bought stopped working a couple of weeks later.  It turned out to be the control board, and while I doubt it was a surge that time, watching the tech replace the board reminded me how much electronic components are part of our general equipment nowadays.

The tech was near retirement and I had a good chat with him.  He told me that the old washing machines of the era I was replacing lasted much longer than the modern ones.  He had a bit of a conspiracy theory: he believed the boards in them were designed to last only so long before burning out.  He knows what he has seen over the years, and that was his interpretation.  Mine was a little different; it got me wondering if the issue was surges over time that would not have been a problem for older models and their simple control systems (and maybe manufacturers keeping component costs down, so just perhaps they don't handle normal surges as well as they could!).  I don't have a surge protector attached to my washing machine; instead I turn it off at the wall when not in use.  But it does get you thinking about equipment you do leave on.

On average, you will tend to increase the life of your electronic gear if you run it attached to a quality surge protector.

What about a UPS?

A UPS, or Uninterruptible Power Supply, is often confused with a surge protector.  Some people assume a UPS is a better type of surge protector; not true.

A UPS is a battery that switches in if the mains power is interrupted.  We sell them with all server equipment so the servers have enough time to shut down without damage to files and databases if there is a power outage.  They are also used to keep critical services, such as phone systems and servers, up for a while during power loss.

A UPS does not necessarily protect from surges.  Most will, but it is better to think of them as a battery backup with incorporated power protection.  Power protection in entry level UPS models is poor.

A UPS does not necessarily sit above surge protectors in a power protection hierarchy; in fact, there are cases where we have sold top quality surge protectors to clients using cheap UPSs, to prevent surges getting to, and through, the UPS.  It can also be worthwhile using a surge protector behind even a top quality UPS to help protect the UPS.

What are the best value brands?

Surge protectors from most major manufacturers will do at least a basic job.  You can pretty much rate them by cost: a $20 surge protector is possibly better than nothing, but it is not going to last long or do a very good job.  Think of it as a power board that might have some other benefits.

We have sold the Thor brand over the years in the high end with good results.  Reports from customers have indicated that they do their job well, and I know they look after people with warranty and the rare insurance claim.  Most of my protectors at home are Thor.

In the entry level, pretty much any major brand around the $50 mark will get you a decent if limited surge protector, appropriate to protect lower priced equipment.  We sell some Belkin surge protectors and various other brands which are all adequate for basic protection.

Bottom line is, get something for any equipment that you care about, and for expensive gear, get an appropriate high end board.  Remember to replace your protectors once the indicator tells you their protection has worn out, and when you hear a storm coming, unplug!

Microsoft Rolls Out Office 2016

MS Office 2016

Microsoft released the 17th major version of Office today, Office 2016.  It is 25 years since Office came from nowhere to dominate against the incomparable WordPerfect 5.1 and Lotus 1-2-3.  Office adoption was rapid and near universal, exploding alongside Windows and becoming so strongly associated with it that many believed they were part of the same product.

Over those 25 years, each new version of Office included major improvements and new features.  Each release was a big deal, though on average less so as Office matured.  Office 2016 again offers significant improvements over 2013, though these are evolutionary rather than a radical overhaul.

How does Office stack up in the age of the app?  Is it still relevant?  Some say that "apps" can replace massive traditional applications such as Office.  So far the depth and breadth of features needed for office tasks has kept Office at the forefront of content creation in business and home environments.  Yet the success of technically simple and shallow apps, that perhaps do just one thing very well, must be a lesson for the old guard.  How can an application like Office improve by taking lessons from the success of modern apps?


This time, it's different

Unlike any previous version, this time round many users will receive an upgrade to 2016 as part of their Office 365 subscription.  The Office 2016 release is a genuine rollout for the first time in history, rather than a launch.

Recent corporate focus on subscription models seemed, at first, a financially driven change aimed at developing consistent revenue and providing a one stop ecosystem of products for a single fee.  Subscription bundles shift the focus to revenue per customer rather than revenue per product, and exploit the competitive advantage of a company with a vast breadth and depth of product.  Bundling products into a single fee also encourages the adoption of new products and services that might not take off if sold individually.  Those elements are without doubt an important part of this strategy, but there is more to it.

I see subscription as a model facilitating the adoption of a rapid, agile way of developing software; a change designed to compete in the age of apps.  Continuous improvement is delivered through frequent updates, with features released one at a time rather than saved up for a single big change every few years.  Agile and iterative development.  Listening to what the customer wants, and making it happen.  Fast.  Without a subscription-based model, frequent releases can't work.

With subscription in place and growing, expect to see the end of major releases in the not too distant future.  The old model will be replaced with subscriptions, continuous development of core applications, and a growing ecosystem of supporting apps that integrate with and complement the core applications.  I see a strong future for Office at the centre of this ecosystem.



What’s Changed? – Collaboration and Intelligence is the Focus

You will notice an incremental improvement in the user interface and appearance of Office 2016, nothing major or unfamiliar, just a bit of polish.  This is no bad thing, with no Windows 8 disruption to be seen here.

Word 2016 interface

Collaboration tools and Cloud integration have received substantive improvements and are, perhaps, a compelling reason to upgrade on their own.  Microsoft has caught up with Google Docs and its own online versions of Office, with real-time collaboration now available in the desktop version of Office.

Word 2016 improves the co-authoring feature, with ongoing improvements across the suite likely to be added via patches.  Changes made by multiple authors now propagate automatically, removing the annoyance, potential conflicts, and wasted time of manually updating and refreshing changes.

Office is more tightly integrated with Skype for Business, a product under rapid development in its own right.  So far this is a quality of life improvement for people wanting to chat via Skype while collaborating on a project.  Tighter integration of Unified Communications via Skype for Business will likely roll out over time.



Machine Intelligence

Office improves its smarts in 2016, with Cortana-style help and research systems integrated into the suite.  Each application ribbon has a text box that accepts a natural language statement of what you are trying to do and tries to help you achieve that intent.  It is sometimes helpful, and expect the feature to improve over time.  Similarly, a new "Smart Lookup" feature runs a web search against a selected term in a document, similar to using Google or Bing, but with better results as the search considers the context of the document content.

Updates to Excel include new chart types, including a waterfall chart.  Charting is also faster and more accessible with a new feature where Excel suggests the type of chart you need based on the data you select.  I'm not sure dumbing down like this is always helpful, but you can always reject a poor suggestion.  With a similar approach, and similar dangers, a new trend forecasting feature has Excel extrapolating historical data into a likely range with customisable confidence intervals.  This could save time for an expert who understands the nature of their data and the algorithm behind the extrapolation, but it's another tool I can see being abused by people who don't understand the limitations of this type of extrapolation.  Delegating your thinking to a machine is always a danger, so use it with caution.



Finally, a fix for attachments in Outlook 2016

Outlook has a new way to attach documents to an email.  Documents stored on OneDrive that would normally be attached and sent with an email are instead attached as a link, giving the recipient access to view, download, or edit them.

For those of us who manage mail servers and have clients asking why they can't send some 50MB file by email to the guy at the next desk, it's a great feature, encouraging users to look at productive ways to communicate and collaborate and reducing the mess large attachments make of our Exchange servers and backups!

Other changes to Outlook include smarts to automatically organise your email into a new folder called Clutter (I am not a fan), and a new Groups feature to organise teams for collaboration.



And a new app built around Machine Intelligence – Sway

A new app called Sway has been available for a while and is part of the Office 2016 line-up.  It is something a bit different, intended to help you build and present a story; unlike PowerPoint, it helps you construct the presentation by starting with a topic and adding content from various online sources.

This is the first Office component with core functions dependent on machine learning algorithms.  As such, it may be mainly a toy for now as the technology develops, but it has the potential to become something quite impressive, and to spin off features and ideas to other parts of Office or seed ideas for new applications.



It is still Office, and that’s just fine

At its core, Office does what it has always done.  There is no single app, or even group of apps, that can touch it for general business productivity tasks.  The collaboration tools, machine smarts, and growing ecosystem of related apps maintain Office as an essential tool for creators.


Subscription now the best model

When subscriptions to Office were first released by Microsoft, in almost all cases we suggested people steer clear.  They offered very poor value.  Microsoft took that feedback on board (along with a lack of subscription sales!) and fixed what we hated about the subscription offers with Office 365.

At this time, most people are better off with Office 365 over other ways to buy Office.  If you own an older boxed version of Office and are considering an upgrade to 2016, I strongly suggest you review Office 365 offerings and compare to the boxed 2016 versions.

Home users should take a look at Office 365 Home Premium, with its licence for 5 users and multiple installs of the entire Office suite, plus extras such as unlimited OneDrive storage and free calls to phones.  Comparable boxed 2016 products limit access to some Office programs and usually allow just a single install, so you need one copy per machine.  The traditional product does not include the extras or free upgrades.

Office 365 Home Premium

For business clients, there is a range of Office 365 options, many of which you won't see advertised on retail web sites (ask to talk to our BDMs for options).  Many Office 365 for business plans include major cloud service extras such as Exchange for email and SharePoint, as well as more generous licensing terms.

Guide to Personal and Small Business Backups – Conceptual Framework

Too often I see our techs consoling a despondent customer, in tears, having irretrievably lost precious files.  Family photos.  Business records.  Blog articles (!).  All gone.  Yet some of those people have been "backing up".

A simple definition of "Backing Up" is a process that makes a copy of data onto a second device, which can be used to restore that data if your primary copy is deleted or damaged.  A broader definition is any process that reduces your risk of losing data (files) or your system state (Windows, settings).  I prefer to use a more global term, Backup System: a collection of backup processes and other elements working together to reduce the risk of data loss and related harm.

You might reasonably believe that backing up is a simple process.  Before you run this process, your files are at risk of being lost, and afterwards, they are safe.  Run a backup, and it’s all good.  This type of binary thinking is prevalent even among IT professionals – Black and White, True and False, Risky and Safe.  Unfortunately, applying a binary worldview to backups will only get you into trouble by giving you a false sense of security.  Backups are not Black and White, they are Grey.

This article will disabuse you of false assumptions relating to backups, and introduce a conceptual framework you can use to design a Backup System and to protect your precious data.

Developing a Backup System is easy and effective if you use the right approach.  Clicking a button that says "backup" and hoping for the best is only good for gamblers!


Backup Systems are about Risk Management

The key concept here is risk.  Most people have a decent, if subconscious, understanding of risk.  The subconscious mind has a habit of simplifying complex concepts and can mislead if you don't consciously interrogate them.  So let's consider what we mean when we refer to risk.  Risk relates to:

  • the Harm you will suffer if you lose some or all of your files or system state, and
  • the Probability of losing some or all files or system state.

In a business context, you might add other “harm” that can relate to backups, such as downtime, or files finding their way to unauthorised people.

So Risk = Harm * Probability.  That seems simple.
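To make that arithmetic concrete, here is a tiny sketch of the equation in Python.  The dollar values and yearly loss probabilities below are invented purely for illustration; they are not real statistics.

```python
# Hypothetical illustration of Risk = Harm * Probability.
# All dollar values and probabilities below are made-up assumptions.

def risk(harm_dollars: float, probability_per_year: float) -> float:
    """Expected loss per year, in dollars."""
    return harm_dollars * probability_per_year

# A tender costing $500 to rewrite, with an assumed 5% yearly chance of loss:
print(risk(500, 0.05))     # expected loss of $25/year

# "Priceless" family photos: pick a rough stand-in value, say $20,000,
# with an assumed 2% yearly chance of loss:
print(risk(20_000, 0.02))  # expected loss of $400/year
```

Even with rough inputs, the sums make the trade-off visible: a backup that costs well under the expected yearly loss is clearly worth having.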

But how do you quantify Harm?  Say you look at a tender you are working on, perhaps you know it will cost $500 to rewrite it, so you can assign a cost of losing the file with some accuracy.  What about the family photo album?  Hard to assign a $ amount to that.  You can probably make some rough estimate, but it is not possible to assign an exact value.  Priceless, perhaps.

What about the second element in the equation, the Probability (chance) of loss?  Probability can be very difficult to quantify.  What is the chance of your HDD failing, of being infected by a virus that wipes your drive, of throwing the whole thing out the nearest window when it's misbehaving, and, tougher still, of disasters you have not even thought of?  Again, you can only apply a ballpark figure to the likelihood of data loss.

The difficulty of determining the Risk Level you are exposed to leads to another concept that is implicit in backups, but not often addressed explicitly: Uncertainty.  Uncertainty, inherent in assessing risk, means you can't quantify your level of risk with accuracy; it necessitates a fudge factor, some safety margin to make sure you are not taking on too much risk.

Risk Level and Uncertainty lead us to our final concept, Acceptable Risk.

No backup system can reduce your risk of losing data to zero.  No such system is possible in our world.  Beware of anyone who tells you that their system is 100%!  Instead of aiming for zero risk, you should consider what your level of Acceptable Risk is, and weigh that against the cost to reduce your actual Risk Level.

Finally to the good news.  It is usually possible, with a little thought and attention, to vastly reduce your Risk Level inexpensively.  Developing an effective Backup System for a home or SME environment is about using available tools intelligently rather than spending a fortune.

Before we go into the How, we need to cover more abstract concepts that you can use to assess the backup methods you choose.  Again, without applying these concepts to critique your Backup System, it's likely you will run into trouble and find your backups are not doing their job, inevitably when it is too late.


Develop your Backup System with Desirable Attributes

Certain attributes of a backup system tend to increase the likelihood that it will perform as desired.  When developing or assessing the quality of a backup system, consider the following attributes.

To make life that little bit more difficult (this is about computers, after all), some of these characteristics contradict one another, so you must apply some common sense where a trade-off is necessary.

  1. Simple – Never add complexity for marginal benefit.

Convoluted backup systems fail more often than simple systems because, by their nature, there is more to go wrong and less visibility into how the system works.  Simplicity leads to our second attribute.

  2. Visible – Know where your stuff is and how the backup system works.

The first step is knowing where your important files are.  The second is knowing what process is used to backup those files.  The third step is being able to locate your files at your backup locations and verify that they are complete and viable.

  3. Automated – Make it work without human intervention.

Most data loss I encounter where there are no backups is followed by the line "I used to do it, just haven't got around to it recently".  The best systems work even if you neglect them, but a word of warning: automated does not mean you can skip manually verifying that the system works.

  4. Independent – Multiple backup processes and data locations should be unrelated.

Processes that depend on different factors are less likely to fail at the same time.  You might use an image backup and a simple file copy backup on the same data, since a failure with one method will not necessarily mean the other fails too.  A backup located in another room is not as good as a backup located in a different building, and implementing both is better.

  5. Timely – Capacity to recover data fast enough to avoid damaging downtime.

For a business, downtime while you recover files can be costly.  Assess how long your system requires to restore files and systems, and reduce that time where it is unacceptable.

  6. Cost Effective – Seek balance between cost and benefit.

Aim to find a sweet spot where the cost and effort put into your backups effectively reduces risk, and then stop.  Don't fight to reduce risk just a little further when it requires massive extra cost, but also don't be cheap and stop reducing risk while the cost of doing so is minimal.

  7. Secure – Control access to sensitive data.

Consider the harm you will suffer if backed up data gets into the wrong hands.  Where the harm is significant, consider encryption and other security techniques.  Do not apply security without due consideration, as adding security techniques can, and usually will, increase the chance of your backup system failing.
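The Visible attribute above includes being able to verify that your backed-up files are complete and viable.  As a minimal sketch of what that verification might look like for a plain file-copy backup (the folder layout is an assumption for the example), you could compare checksums between the source and backup folders:

```python
# Sketch: verify a file-copy backup by comparing SHA-256 checksums
# between a source folder and a backup folder.  Paths are placeholders.
import hashlib
from pathlib import Path

def checksums(folder):
    """Map each file's relative path to its SHA-256 digest."""
    folder = Path(folder)
    return {
        str(p.relative_to(folder)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in folder.rglob("*") if p.is_file()
    }

def verify_backup(source, backup):
    """Return the set of files missing from, or different in, the backup."""
    src, bak = checksums(source), checksums(backup)
    return {name for name, digest in src.items() if bak.get(name) != digest}

# Example: an empty set means every source file exists, byte-for-byte,
# in the backup.
# problems = verify_backup("C:/MyDocs", "E:/Backups/MyDocs")
```

Running a check like this occasionally, by hand, is exactly the kind of manual verification that an automated system still needs.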


Understand Concepts, Techniques, and set Objectives before you begin

Once you are comfortable with risk management and the attributes you want to incorporate, it is time to set objectives for your Backup System and decide how to achieve them.

To develop a plan, you will need a grasp of:

  • Your data and its characteristics: size, location, live or closed files, live services etc
    • Include files and systems, e.g. an accounting data file might be critical, but the installed accounting package might also be worth backing up.
  • Importance/acceptable risk level related to identified data.
  • Related risks such as downtime and stolen data.
  • Storage devices available/desirable and capacity: external HDDs, NAS, cloud, etc
  • Backup tools available/desirable: Image creation tools, command line tools, VSS, etc
  • Techniques possible: file mirror, images, full/incremental/differential/continuous, scheduled tasks, verification, encryption, cleanup, etc
  • Contingency Plan – what can go wrong with backups and how can those risks be reduced.
  • Available budget

Finally, start designing your system.
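As one concrete example of the "file mirror" technique listed above, here is a minimal sketch of a simple, visible backup process.  The folder paths are placeholders, and scheduling it with Task Scheduler or cron would cover the automated attribute; treat it as an illustration of the idea, not a finished tool.

```python
# Sketch of a one-way file mirror: copy any new or changed file from a
# source folder into a backup folder.  Paths are placeholders.
import shutil
from pathlib import Path

def mirror(source, backup):
    """Copy files from source to backup when missing or newer in source."""
    source, backup = Path(source), Path(backup)
    for f in source.rglob("*"):
        if not f.is_file():
            continue
        target = backup / f.relative_to(source)
        # Copy only when the backup copy is missing or older.
        if not target.exists() or f.stat().st_mtime > target.stat().st_mtime:
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)  # copy2 preserves timestamps

# Example run:
# mirror("C:/MyDocs", "E:/Backups/MyDocs")
```

Note this sketch deliberately never deletes anything from the backup side; deciding how to handle deleted and renamed files is part of your contingency planning.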

This article has covered some of the high level concepts relating to backups such as risk and desirable attributes.  It has not covered the types of backups possible, storage devices, or techniques.  Follow up articles will cover these areas and provide walk through examples of backup systems for home and business.

When Your Digital Life is Held to Ransom

Ransomware is a type of malware that locks you out of your files, device, or other resources and demands a payment to regain access.  Even if you choose to pay up, most often the payment will not restore access, and your files are permanently lost.

The threat posed by ransomware is serious, with recent infections more devastating than any malware or virus we have seen in years.  The infection rate increased throughout 2015 with a range of new variants attacking customers across PC and other platforms.

We are asked to help businesses recover from a major ransomware infection about once per week.  Where the infected site has implemented a viable backup system, files and services can usually be restored with some effort and relatively few losses.  Unfortunately, many businesses and individuals that come to us for help did not have appropriate backups in place and have lost files at huge cost.  We have seen Ransomware infections threaten the ongoing viability of businesses.

A Bad Day in the Life of… John, the owner of a small manufacturing business, who has come into the office at the crack of dawn to catch up.  Coffee machine working.  Check.  Time to finally get this tender sorted.

Logging into his PC, John does a double take. There is an impressive looking red and white boxy thing on the screen: “Your personal files are encrypted! …pay us or they are gone forever.”

Files exist but nothing will open. John’s not too familiar with the backup system, but he knows they have one. Rings his accounts guy who looks after that. There are backups. They sit on a NAS in a different building from the server, so they are “offsite”.

Tries to restore. Files won’t open. They have been encrypted as well. Very Bad Day.

How does Ransomware infect and hurt my computer systems?

The most common way to get infected is by clicking on an attachment or link in a malicious email.  The email may appear to have been sent from a reputable company such as Australia Post or Telstra.

Other methods of infection include visiting infected web sites, malicious code embedded in downloadable applications, and attacks on security flaws in operating system software and services.  Any attack that can run executable code on your computer has the potential to infect you with malware.

The most common Ransomware will scan your computer for valuable files, such as office documents and photos, and encrypt them.  Ransomware will also encrypt files on any network accessible locations or attached disks.  Once done, the malware presents a message with a promise to decrypt the files if you pay up.

Encrypted files cannot be recovered without a key generated by the criminals, or from backups.  Backup files accessible to the ransomware, such as those located on an attached drive, may be encrypted alongside everything else.  Backup sets may also be silently compromised: any backup run after encryption has started will capture a mix of newly encrypted and still-unencrypted files, potentially overwriting the good originals depending on the backup system design.  At best this leaves you sifting through backups for the last good copy.
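One practical triage aid when sifting through backup sets is an entropy check: encrypted data is statistically close to random bytes, while ordinary documents are not. The following Python sketch is a hypothetical heuristic, not a forensic tool; the 7.5 bits-per-byte threshold is an illustrative assumption, and compressed formats such as zip or JPEG will also score high:

```python
import math
from collections import Counter
from pathlib import Path

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte: ~8.0 for encrypted/random data,
    noticeably lower for typical office documents."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(data).values())

def likely_encrypted(path: Path, threshold: float = 7.5) -> bool:
    """Flag a file as probably encrypted by sampling its first 64 KiB.
    Treat the result as a triage hint, not proof: compressed file
    formats legitimately have high entropy too."""
    return shannon_entropy(path.read_bytes()[:65536]) > threshold
```

Running a check like this over dated backup sets, newest to oldest, can quickly narrow down when the encryption started and which set still holds clean copies.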

By the time the damage is noticed, even if you have current backups you may find your business suffers downtime to services while you repair the damage.  Even large organisations with multiple layers of security and backups have been hurt. “There was an IT security issue this morning which affected some of the ABC’s broadcasting systems and created technical difficulties for ABC News 24, as a result, we broadcast stand-by programming from 9:30am before resuming live news broadcasts from Melbourne at 10:00am.” – ABC News, 8 October 2014



Are my other devices vulnerable?

Ransomware is most common and dangerous on a Windows machine, but this type of malware is also growing rapidly on other operating systems and devices.

Infections on Android devices are normally triggered by a malicious app.  When activated, the malware tries to lock the user out of their device and demands payment to allow access.  There are workarounds to regain access in most cases, but not always.  A particularly nasty variant reported in September 2015 tricks the user into clicking on a button that allows the app to change the device’s PIN and disable security software, permanently locking out the user.

Ransomware can potentially attack Apple iOS and OS X devices, such as iPhones.  In 2014 Apple customers, primarily in Australia, woke to a message on their phone screens: “Hacked by Oleg Pliss. For unlock YOU NEED send voucher code by 100 $/eur one of this (Moneypack/Ukash/PaySafeCard) to helplock@gmx.com I sent code 2618911226.”  This attack was probably indirect, with a vulnerability in the users’ iCloud accounts exploited to then attack the phones.  A handy reminder to set up two-factor authentication on important accounts and to set, and regularly change, a strong password on your accounts.



I’m not at risk, I have a virus scanner!?

It is a common and dangerously mistaken belief that an up-to-date virus scanner can guarantee protection from malware.  A virus scanner will protect against many known, and some newly released, malware, but no scanner can protect against all such threats.

When Cryptolocker attacked in late 2013, no virus scanner I am aware of detected it or prevented its actions.  It went straight through the scanners, as well as other security systems, and infected many thousands of computers.  Days and weeks later virus scanners caught up and could block the malware, but then a variant of Cryptolocker was released to bypass them and a second wave of infections ensued.  To rub salt into those wounds, the criminals also took the opportunity to raise the ransom amount.

Research by LastLine Labs (and others) confirms that antivirus detection is never 100%, and can never be 100%: “On Day 0, only 51% of antivirus scanners detected new malware samples … After two weeks, there was a notable bump in detection rates (up to 61%), indicating a common lag time for antivirus vendors”

Comments like the following are common after an infection, when it is too late and people realise antivirus alone is not enough: “…got a Crypt/Dorifel virus(an early version of cryptolocker) and all Symantec was doing is quarantining the modified documents….”

Antivirus will protect you at times, but it cannot be more than one element of a protection strategy.


Do I get my files back if I pay?

Sometimes.  These programs are written to make the criminals money.  It makes sense, then, that if you pay up, usually hundreds to thousands of dollars, they will decrypt your files or unlock your device, so that the next victim has some hope and will also pay them.

Sometimes paying will work.  More often the remote attacking servers will have been taken down, access will have been blocked to prevent further infections, the software simply breaks and will not decrypt correctly, or the attackers never intended to offer decryption at all.  If you get hit, seek advice on all other possible ways to restore lost files, and only try paying up as a desperate last resort.



Is there any other way to get my files back?

Restoring from your backups is the most reliable, and most often the only, method to recover from a ransomware infection.  Where backups are accessible from the infected computer, they may have been encrypted and therefore effectively destroyed.  Offline backups are not vulnerable and may be an option if your most up-to-date backups have been compromised.

The effectiveness of backups depends on their design, and many poor backup system designs are vulnerable to ransomware.  If you are reading this as a business owner and are not entirely familiar and confident about your backup system, please review it!  In our experience the majority of SMEs do not have an adequate backup system, and many who believe they have a working system in fact have no useful backups at all.

“CERT Australia was contacted by a number of organisations that had suffered significant business disruption as a result of corrupted backups.” – Australian Computer Emergency Response Team, Ransomware Advisory.


On some versions of Windows, you may be able to restore older versions of files, the ones present before the malware attacked, by accessing the shadow copy snapshots stored on your computer (if the feature is enabled – and note it is not available on the basic home editions of Windows).

In rare cases, security organisations have discovered flaws in the encryption process or intercepted encryption keys from the attackers, allowing files to be decrypted.  This is a long shot.

There may be other copies of your information cached or stored in places you are not aware of.  Think about files that may have been copied to pen drives, other devices, uploaded into the cloud, and so on.  There may be deleted files on devices that can be retrieved.

Ask for professional advice if you appear to have lost critical files.


How can I protect my business from Ransomware?

There is no single, simple solution to insure a business against ransomware or similar malware and related disasters.  Implementing a number of measures, however, can greatly reduce your risk of serious data loss or downtime.


Some of the measures you can use to reduce the risk to your business:

  • Educate staff in ways you might be attacked and what to look out for to avoid infection, such as suspicious emails.
  • Implement a planned and verified backup system including an offline copy. Ensure a senior person fully understands the design and how to verify its operation.
  • Limit account permissions – do not log into a computer with an administrator account for general use. Limit individual users’ access to only the network resources they need.
  • Where backups run across your network to a shared location, such as a NAS, set a password to limit access so that only the backup process can use that share.
  • Turn on User Account Control (it’s on by default; many people turn it off. Do not.)
  • Do not plug any PC still running Windows XP into your network (it is no longer supported by Microsoft and is highly vulnerable to attack).
  • Turn on Volume Shadow Copy where available (it keeps old versions of files automatically).
  • Ensure operating system patches are fully up to date on all computers.
  • Reduce your attack surface.
    • Use firewalls and NAT routers so internet traffic cannot reach devices that may be vulnerable. Be careful with port forwards, and do not use the DMZ feature on your router unless you know what it is!
    • Block access to suspicious web sites at network level.
    • Turn off services you don’t need.
  • Block malware delivery with services such as spam and virus checking to filter email before it hits your user mailboxes, and web filtering to block malicious or infected websites.
  • Ensure up to date antivirus systems are installed on all computers.
  • Implement software restriction policies to block execution of unknown software and other appropriate organisation wide group policies (for server based networks).
  • If you suspect an infection, physically unplug any infected machine from the network, and if unsure what to do, shut it down immediately. If encryption is not yet complete, this can save files.
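The “planned and verified” part of the backup measure above can be partly automated. As a hedged sketch only (a full verification programme would also test actual restores on separate hardware), a checksum comparison between source and backup in Python:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large files do not exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(source: Path, backup: Path) -> list[str]:
    """Return a list of problems: files missing from the backup or whose
    contents differ from the source.  An empty list means this backup
    set matched the source at the time of the check."""
    problems = []
    for src_file in source.rglob("*"):
        if not src_file.is_file():
            continue
        copy = backup / src_file.relative_to(source)
        if not copy.exists():
            problems.append(f"missing: {copy}")
        elif sha256_of(copy) != sha256_of(src_file):
            problems.append(f"differs: {copy}")
    return problems
```

Run regularly and reviewed by that senior person, a report like this catches silently failing backups long before a ransomware incident forces the question.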


Further Reading

How My Mom Got Hacked – New York Times

Why Ransomware Will Continue to Rise in 2015 – McAfee Labs

Cutting the Gordian Knot: A Look Under the Hood of Ransomware Attacks – LastLine Labs

Antivirus Isn’t Dead, It Just Can’t Keep Up

Ransomware @ Wikipedia

New Windows 10 scam will encrypt your files for ransom @ ZDNet

Hackers lock up thousands of Australian computers, demand ransom @ SMH

Ransomware Advisory @ CERT

Ransomware Explained @ Microsoft

Cryptolocker @ Krebs on Security

Crypto-ransomware attack targets Australians via fake Australia Post emails @ ABC