USB goes Full Circle with USB-C

Universal Serial Bus (USB) was developed in the late 90s to replace a mess of slow PC connections with a single high-speed, one-size-fits-all plug and data transfer protocol.  Every new device had the plug, it was good, and there were no real decisions or gotchas to look out for when buying new devices.

A decade or two later, things are again a mess.  Incremental changes to USB have offered progressive technical improvements, but at the cost of modified plugs and sometimes questionable backwards compatibility.  Mobile devices using the small variants of USB ports, or worse still proprietary plugs, have diversified the cables in use and ensured X won't plug into Y.  New connection standards such as DVI, FireWire, HDMI, and DisplayPort emerged to meet specific needs for what, at a basic level, is a demand for local bandwidth that could in theory be carried by one cable.

An effort is again underway to resist the evil forces of cable proliferation and focus on the holy grail, one cable to carry them all, resulting in USB-C (also called USB Type-C).  This new connector is intended to replace the mishmash of USB plugs, and potentially other existing data plugs, to become the standard connector for attaching any peripheral to a computer or mobile device, and to provide enough electrical power to run many of them.

USB-C is now available.

Physical Characteristics of USB-C

USB-C on the left compared to Type A on the right

The original USB connector was a spade-shaped plug on the PC end (Type-A plug) and a squarish plug on the peripheral end (Type-B plug).  Smaller variants are available for mobile devices.

They would only plug in one way round.  When taking random stabs at plugging a USB cable into the back of a PC, under zero visibility, while fighting off the tech-eating spiders and poisonous dust clouds, it always seemed impossible to get the cable around the right way.

Problem solved.  The new USB Type-C is a small, reversible connector about the same size as a micro-USB plug but a little thicker, and pretty easy to get the cable into.

The USB-C plug was developed by the same industry group that defines and controls the USB data transfer protocols, but it should not be confused with those protocols.  USB-C can carry USB 3.1 and other USB standards but is not limited to them.

USB Protocols and how USB-C fits in

The last major USB standard release was USB 3.1 in 2013, a relatively minor upgrade from USB 3.0, which itself dates back to 2008.  USB 3.1's main claim to fame was doubling theoretical transfer speeds from 5Gb/s to 10Gb/s.

For reasons that I fail to understand, the USB Implementers Forum, the body that handles USB standards, recently decided to make a naming change:

  • USB 3.0 changed to USB 3.1 Gen 1 (5Gb/s) also known as SuperSpeed USB
  • USB 3.1 changed to USB 3.1 Gen 2 (10Gb/s) also known as SuperSpeed USB 10Gbps

For comparison's sake, the older and still very common USB 2.0 standard supports 480Mb/s of raw data.

The new USB-C plug can carry data complying with either USB 3.1 generation or USB 2.0 (with adapters for the older plugs), but it is not limited to USB transfer protocols.  USB-C can handle other useful high-bandwidth data streams, including video and network traffic, and will be able to handle future high-speed protocols.  This is done through the use of alternate modes, where the system can hand over control of certain pins within the connector to carry traffic defined by protocols unrelated to USB.

Where Faster Data Transfer Speeds Matter

USB transfer protocols indicate a theoretical maximum throughput in bits per second of raw data.  There are 8 bits to a byte, so for USB 2.0, 480Mbit/s = 60MByte/s, but we then need to reduce this figure to take into account “overhead”, the part of the data stream used to manage the transfer rather than carry your information.  To test this, transfer a large file from a USB 2 HDD to your PC and look at the speed you get.  It will be no higher than about 35MB/s, not 60, being capped by the USB 2 ceiling minus its significant overhead.


USB 2 HDD transfer speed is capped by the USB Protocol
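
To put rough numbers on this, here is a quick back-of-the-envelope sketch in Python (the 35MB/s and 400MB/s figures are rough real-world ceilings of the kind discussed above, not spec values):

    # Rough USB transfer-time estimates: raw spec speed vs real-world throughput.
    GB = 1000 ** 3          # decimal gigabytes, as drive makers count them

    def transfer_minutes(file_gb, mb_per_s):
        """Minutes to move a file at a given sustained throughput (MB/s)."""
        return file_gb * GB / (mb_per_s * 1000 ** 2) / 60

    speeds = [
        ("USB 2.0, theoretical (480 Mb/s / 8)",  60),
        ("USB 2.0, real world after overhead",   35),
        ("USB 3.0, rough real world",           400),
    ]
    for label, mbps in speeds:
        print(f"{label}: 25 GB in {transfer_minutes(25, mbps):.1f} min")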

Your mouse, keyboard, and many other devices are just fine with USB 2.0, but HDDs and the occasional other USB device will work better with the faster protocols.

Modern hard disk drives can transfer files at a much faster rate than 35MB/s, so the upgrade to USB 3.0 can significantly increase transfer speeds.


USB 3 HDD transfer speed is capped by the maximum speed of the physical disk

Marketing people confuse the issue by often quoting the USB 3.0 or 3.1 maximum speed on the box of supported products, particularly HDDs.  This is not even close to the transfer rate you will get with the usual mechanical drives, as they are limited by the transfer rate of the drive, not the transfer protocol.

Only the fastest of the modern SSDs are now surpassing USB 3.0 speeds, though given external drives are still mainly mechanical owing to cost, USB 3.0 is fine for now (or USB 3.1 Gen 1, if you prefer the new terminology!).

So what is the advantage of USB-C when transferring files using USB?  For the time taken to transfer a file, not much (well, until you go with an external SSD), but have you ever been annoyed by that bulky AC adapter powering your external HDD?  Well, USB-C can do something about that.

USB-C Power Delivery

The original USB specification allowed for up to 150mA at 5V, just 0.75W of power.  That was fine to power a mouse or keyboard but was never going to be adequate for external HDDs or other more demanding devices.

USB 2 gave us a useful increase to 2.5W, and USB 3.0 then took that to 4.5W; a power charging option was introduced at 7.5W, but not simultaneous with data transfer.  Power was supplied only at 5V.  Small steps, but enough for 2.5” HDDs to be powered off USB and for mobile devices such as phones to be charged off USB ports, if over a long period of time.

With USB-C connectors we are finally seeing a major increase in overall power, and this time, at varying voltages.  The new plugs can support up to 5A at 5, 12, and 20V, potentially giving 20V x 5A = 100W of power.  Simultaneous data transfer is supported even at maximum power output.
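
The arithmetic is simple: power is volts times amps.  A quick sketch using the supply levels quoted above (treat these as nominal figures; exact power profiles vary between implementations):

    # Maximum power at each USB supply level: P = V x I.
    levels = [
        ("Original USB spec",  5.0, 0.15),
        ("USB 2.0",            5.0, 0.5),
        ("USB 3.0",            5.0, 0.9),
        ("USB-C at 5V",        5.0, 5.0),
        ("USB-C at 12V",      12.0, 5.0),
        ("USB-C at 20V",      20.0, 5.0),
    ]
    for name, volts, amps in levels:
        print(f"{name}: {volts:.0f} V x {amps:.2f} A = {volts * amps:.2f} W")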


This sort of power delivery will allow substantial devices to be powered from the same cable that carries data.  A desktop-sized monitor needing just one cable from PC to screen is a good start; getting rid of the AC adapters on 3.5” external HDDs is another use.

The new power spec allows power to run in either direction.  Get home, plug your notebook into your desktop monitor by USB-C, and let your monitor charge the notebook.  Goodbye power brick.

The management of power between devices has also seen major improvement.  Where power is limited and needs to be shared across multiple devices, the protocol allows the power supplied to each device to vary as demands change.  For example, if a laptop is pulling a lot of power off a desktop USB port to charge, and the monitor, also powered off USB, is then turned on, the power available to the laptop can be reduced to let the monitor run.  How well devices play together will be interesting to see!
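
The negotiation itself happens in hardware over the USB Power Delivery protocol, but the budgeting idea can be sketched in a few lines (device names, priorities, and wattages below are invented for illustration):

    # Toy sketch of a power source sharing a fixed budget between devices,
    # trimming lower-priority loads as new demands arrive.
    def allocate(budget_w, requests):
        """requests: (device, wanted_w, priority); a lower priority number wins."""
        grants = {}
        for device, wanted_w, _ in sorted(requests, key=lambda r: r[2]):
            grants[device] = min(wanted_w, budget_w)
            budget_w -= grants[device]
        return grants

    # A laptop charging alone, then a USB-powered monitor is switched on.
    print(allocate(100, [("laptop", 85, 2)]))
    print(allocate(100, [("laptop", 85, 2), ("monitor", 30, 1)]))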

Alternate Mode to support DisplayPort, Thunderbolt, and more

So, getting back to that file transfer speed and, more generally, moving data fast, perhaps faster than USB 3.1 allows: does USB-C give us any options beyond the 10Gb/s of USB 3.1?

Well, yes.  The USB-C specification allows data protocols outside the USB specification to be supported through an alternate mode system, where a device can negotiate for certain pins to be reconfigured to carry data streams outside the USB data transfer specifications.

When no other demands are placed on the cable, USB 3.1 can use four lanes for USB SuperSpeed data transfer, but when using alternate mode, some or all of these lanes can carry other types of data signals.  These signals are not encapsulated within a USB signal; rather, they run directly on the wires, with the alternate mode protocol managing which signals are allowed to be sent.  Depending on the configuration, various levels of USB transfer can coexist with these alternate protocols.

The most popular protocol supported by current USB-C implementations is DisplayPort, allowing a 4K screen to be driven at 60Hz using two lanes while still carrying USB SuperSpeed data simultaneously.  Adapters are also available to attach an older DisplayPort or HDMI monitor to a USB-C connector.  Not all USB-C ports support DisplayPort Alt Mode, so take care to check the specs of devices where you might want to use this feature.
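
A simplified sketch of that lane trade-off (the configurations below mirror the paragraph above; the real negotiation is handled by the connector hardware):

    # USB-C carries four high-speed lanes.  DisplayPort Alt Mode takes over
    # some of them; whatever is left keeps carrying USB data.  USB 2.0 runs
    # on its own dedicated pins, so it survives even a four-lane handover.
    configs = {
        "USB only":            (4, 0),
        "DP Alt Mode, 2-lane": (2, 2),   # 4K at 60Hz plus SuperSpeed USB
        "DP Alt Mode, 4-lane": (0, 4),   # maximum video bandwidth
    }
    for name, (usb_lanes, dp_lanes) in configs.items():
        usb = "SuperSpeed" if usb_lanes >= 2 else "USB 2.0 only"
        print(f"{name}: {dp_lanes} DisplayPort lane(s), USB data = {usb}")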

Thunderbolt 3 (40Gb/s) is supported in some implementations of USB-C, carrying a wide range of data types including high resolution video, audio, and high speed, 10Gb/s peer-to-peer Ethernet.  That last one is particularly interesting as a modern version of the old Laplink cable, cutting out the middle man and providing a high bandwidth device-to-device connection ten times faster than most LANs.

USB-C Availability

At the time of writing, USB-C is available across a modest range of products, and many newly announced products support the technology as they hit the market.

We are seeing niche products such as portable monitors powered through USB-C, but do not expect to see widespread peripheral support until the installed base of USB-C reaches critical mass, probably less than a year away.

To help build that installed base, we have motherboards (used for desktop computers), notebooks, and tablets with USB-C support, as well as expansion cards that enable USB-C on legacy desktops.

If buying now, USB-C is not an essential purchase, but for a machine you expect to have in use for at least the next few years, it's a desirable feature and worth considering during the hunt for your next computer.

Guide to Uninterruptible Power Supplies (UPS)

The UPS is a misunderstood beast, so we have written this guide to clear up misconceptions and provide information to help you work out if you need one.

At its core, a UPS is a battery that sits between the mains power supply and your equipment.  When the power drops out, the battery is there to keep your gear running long enough to save your work and shut it down normally.

Avoiding an abrupt power cut to a PC or server is always a good thing.  A power cut can damage your files, OS, and occasionally hardware.  Complex systems and those under load, such as servers, are at particular risk.  For example, if a database write is in progress when the power is cut then your database will likely be damaged.

A UPS with large battery capacity can also be used to keep critical systems running for hours.  Many people assume a long uptime is the main reason to buy a UPS, but in fact that is a relatively uncommon, secondary purpose.  If you need extended runtime, a UPS can be configured for that purpose, but you might also want to look at generators in combination with a UPS.

Another benefit provided by most UPS units is built-in power protection, from basic surge protection to more advanced power conditioning.  A UPS does not necessarily replace a good surge protector; in fact, placing a surge protector between the wall and a UPS may add a better quality of power protection than an entry level UPS provides, and will at least protect the UPS itself.

Types of UPS Technology

There are three technologies typically used on consumer and SMB level UPS devices.

Standby UPS for Backup Power

Entry level UPS units use a “standby” design, where power flows directly from the mains to your devices while mains power is available.  At the same time, the battery is charged using mains power.

When mains power is cut off, battery power is automatically switched in through an inverter to power your devices.  The switch to battery takes a small but appreciable period of time, and that delay can cause issues for some sensitive equipment.  For example, a modern active PFC power supply might pull excessive current for a moment as it comes back online if the switch takes too long, overloading the UPS.

Most of these units employ basic surge protection in the circuit to protect against surges and brownouts (not shown in the diagram).

The signal output tends to be a simulated sine wave, which is fine for most gear but can cause issues with some power supplies.

Line Interactive UPS with Power Conditioning

A line interactive UPS is often assumed to be an “online” system where the battery is always feeding power directly to your equipment.  Not true.

A line interactive UPS is similar to a standby UPS but adds a component able to regulate voltage.  When the mains voltage goes a little above or below an acceptable level, this component adjusts the voltage sent to your equipment, so the battery does not need to be drained to handle it.  The UPS will usually click when this kicks in.  If the voltage varies too much, the battery takes over as the power source, same as in a standby UPS.

The output from a line interactive UPS may be a simulated or pure sine wave, depending on the model.

Line interactive UPSs are relatively cheap and are the best value type of UPS in many circumstances.

Online Double Conversion UPS: Clean and Continuous Power

Double conversion UPS models are the high end option in the consumer and SMB market.

This technology keeps the inverter online at all times, so a power interruption does not require switching, and the output power quality is consistent and clean, with a true sine wave output at all times.

Because the battery and inverter are always online, additional stress is placed on these components, and there is some loss of efficiency with associated waste heat.  It's not a serious problem, but it is a reason to ensure the UPS is well ventilated, and personally I prefer to put a decent surge protector between the wall and these units.

A more recent technology, the online delta conversion UPS, is very similar to a double conversion unit but adds components to improve efficiency and reduce the issues of the traditional online UPS.  It tends to be found only in large UPS units, over 5kVA, but is a worthwhile upgrade for higher end commercial demands.

Online double conversion systems are the best choice for critical systems with modest sized installs, but do cost significantly more than line interactive or standby UPS models.

What's with Simulated vs Pure Sine Wave?

Mains AC power is supplied in the form of a sine wave that smoothly alternates between positive and negative values.  Recreating that form from the DC source on the other side of a UPS battery can be expensive.  Very basic equipment will produce a square wave, where the voltage jumps straight from positive to negative 240V.  Harsh.  Most cheapish UPSs will do a better job with a modified square wave form, also called a simulated sine wave, that is close-ish to the real thing: not a smooth curve, just a series of steps.  Better units, including all online UPS units, will produce a nice smooth sine wave that works best.

(Waveforms, left to right: square wave, modified square wave, pure sine wave)
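
For the curious, the smooth curve matters more than it looks: 240V mains is an RMS figure, so the real sine actually swings to about ±340V at its peaks, which a stepped approximation has to mimic.  A tiny sketch:

    import math

    V_RMS = 240.0
    PEAK = V_RMS * math.sqrt(2)    # ~339 V: the true peak of a 240 V RMS sine

    # Sample one cycle: the sine sweeps smoothly; the square wave slams
    # between +240 and -240; a "simulated" sine approximates the sweep in steps.
    for i in range(8):
        t = i / 8
        sine = PEAK * math.sin(2 * math.pi * t)
        square = V_RMS if sine >= 0 else -V_RMS
        print(f"t={t:.3f}: sine {sine:7.1f} V, square {square:6.1f} V")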

Modern, efficient computer power supplies (anything with an 80 PLUS certification) feature active power factor correction (PFC), and they do not always play well with simulated sine waves, let alone the even more horrible square waves.

If you pair an active PFC power supply with anything less than a pure sine wave output, you may find your PC randomly rebooting and/or your power supply struggling and dropping efficiency when the UPS switches to battery.  Many power supplies will handle it, and as long as it works for the short time you are on battery you can get away with a cheaper unit, but it's not ideal.

Note there are simulated sine waves, and then there are simulated sine waves.  In other words, some modern UPS units do a pretty good job of producing a nearly accurate simulated curve, others not so much.  You tend to get what you pay for in that regard, and the closer to a pure sine wave, the better.

In general, try to get a UPS with a pure sine wave output, but if the cost is excessive for your use, a simulated sine wave from a good brand name UPS will likely be OK.

The much misunderstood “VA” vs Watts

Once you decide on the UPS technology you want, you need to match it up with the battery specifications.

UPS units are usually quoted with a “VA” specification, often in their model name.  VA stands for Volts x Amps, and if you remember your high school science you will know Volts x Amps = Watts, so you would reasonably expect that VA figure to represent the output capacity in watts.  But then you might notice that a UPS will also specify a maximum output in Watts, and the number will be different, and lower, than the VA number.  So what's up with that?


The reason for the difference is that the VA figure represents the maximum theoretical output, called the apparent power, while the real power available to your equipment will be less.  Convention says we should not count on more than 60% of the apparent output, meaning the watt rating will be about 60% of the VA.  A 1000VA UPS should be rated at 600W.

Some manufacturers will quote a relatively high maximum wattage, maybe 70% or more of the VA.  There may be some justification for that in some cases, though I suspect it is more often the marketing people sticking their nose in.  Personally, I prefer to look at the VA and work out wattage at 60% of that.

That convention, and the variance it implies, brings up a good point: do not overload your UPS.  A reasonable target is to run the UPS at about half its rated load, so if your equipment uses around 300W and may burst a bit past that, go for a UPS rated to handle about 600W, or 600W / 0.6 = 1000VA.  It's usually OK to run it up to that 60% figure, and potentially a little past for short periods, but lower is safer and will give you better runtime.
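
The sizing arithmetic is worth writing down.  A minimal sketch using the 60% convention and the halve-the-load target from above:

    # Rule of thumb: usable watts ~= 60% of the VA rating, and aim to run
    # the UPS at about half of its rated load.
    def usable_watts(va, power_factor=0.6):
        return va * power_factor

    def va_to_buy(load_w, power_factor=0.6, headroom=2.0):
        """VA rating that runs load_w at roughly half the UPS's rated load."""
        return load_w * headroom / power_factor

    print(f"1000 VA unit -> about {usable_watts(1000):.0f} W usable")   # 600 W
    print(f"300 W load   -> buy about {va_to_buy(300):.0f} VA")         # 1000 VA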

Another issue many fail to consider is the behaviour of active PFC power supplies (i.e., virtually all new PC power supplies nowadays) with switching UPS units.  Where the switchover to battery takes a while, the power supply may try to slurp up a lot of power to catch up in the moment power is restored, and that can overload the UPS.  Weigh the nominal rating of the power supply, plus a bit, against the UPS rating, and aim for a UPS rated a little higher than the power supply to avoid this problem.  For example, a 750W power supply that might normally use just 150W would probably be OK with a 1000VA line interactive UPS but might fail with a 500VA unit, even though 500VA can supply much more than 150W.

Take into account that battery performance and capacity will reduce over time, until the battery needs replacing.  Also consider that runtime improves as you reduce the load on the battery, so loading it up to near maximum may not give your equipment long enough to shut down.

UPS Runtime

The VA figure relates to output at a given point in time.  Many people assume that a high VA means a long runtime.  In fact that's not true; the two specifications are not directly related.

It is quite possible for one 1000VA model to run for, say, 10 minutes at half load, and a different 1000VA model to run for an hour at half load.  The figure that matters for a battery is how much energy it can store, in watt-hours.

Most UPS units are built to allow time for shutdown but not much more than that, so if you need a long runtime after the power goes out, you need to look at long runtime batteries, add additional batteries, or run a UPS at a fraction of its maximum output.

UPS specifications normally quote expected runtime at half load.  If you run at full load, roughly halve that time.  If you run at a quarter load, double it, and so on.  Again, those numbers will tend to shrink as the battery ages, and you always want to factor in some extra buffer time.
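
That halve/double rule falls out of the battery's watt-hour capacity.  A rough sketch (the 100Wh capacity and derating factors are invented for illustration; real batteries do worse than this at high loads):

    # Runtime scales roughly inversely with load; derate for inverter
    # losses and for battery ageing.
    def runtime_minutes(battery_wh, load_w, efficiency=0.85, age_factor=1.0):
        return battery_wh * efficiency * age_factor / load_w * 60

    battery_wh = 100    # hypothetical battery capacity in watt-hours
    for load_w in (600, 300, 150):
        new = runtime_minutes(battery_wh, load_w)
        aged = runtime_minutes(battery_wh, load_w, age_factor=0.7)
        print(f"{load_w:3d} W load: ~{new:4.1f} min new, ~{aged:4.1f} min aged")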

Beware of outlets with weird plugs and no battery protection

Many entry level UPS units now have standard wall power point style plugs available to make it easier to plug in any gear you want.

Be aware that some outlets may be wired up to surge protection only, not battery backup.  That can be handy: you might have a PC plugged into one outlet and a laser printer into the other for convenience.  You don't want the laser printer switching to battery; just make sure you know which outlet is which!

You can plug a double adapter or power board into a UPS outlet, but take care not to overload it with excessive demand.  It is not the number of devices that matters, but their total load.

Older and larger UPS units will have a number of three prong power leads, or “jug leads” as I call them.  That's the standard power lead you see running into your PC.  They are fine when plugging into a PC power supply, but can be a pain if you want to plug in other gear, and you may need to buy separate cables that convert to a wall plug.  Just make sure when you buy a UPS that you end up with the cables you need to plug in your gear.

Equipment you should never plug into a UPS

Some equipment can draw a heavy load of power for a short time, and these loads can damage the UPS and any other equipment attached.

Laser printers are notorious for it.  Don't plug in anything that needs a big pulse of power spiking beyond what your UPS can handle.

Set up Automatic Shutdown

For most buyers, the purpose of a UPS is to allow a clean shutdown of your equipment during a power outage.  To meet this goal, don't forget to install and set up compatible UPS management software on your devices so they shut down automatically when unattended.

The UPS will typically be connected to your server or PC with a serial or USB cable, with the option of a network cable on higher end models.  Those cables are essential: they let the UPS talk to your gear and tell it the power just went out.

I personally prefer a simple serial cable when available and in small scale deployments, as they tend to have fewer driver issues and avoid the network-cable failure mode where, if the switch fails or loses power, the signal is lost (so if using a network cable, make sure the switch/es are plugged into a UPS!).  USB cables will do the job if that's your only option, but in my experience they tend to be a little less reliable.

I personally don't trust the automatic settings in the software.  Look at (and test) the runtime you are getting out of your UPS, and estimate how long your machine takes to shut down once the shutdown signal is sent.  The software will let you set a delay between the power going out and the shutdown starting.  Hopefully during that time the power will come back; if not, the signal is sent and your PC starts shutting down.  You may also have the option of turning the UPS itself off before the battery is entirely drained; whether to use that function is a matter of design and personal choice.

Leave plenty of leeway.  If your UPS is estimating 30 minutes of runtime and your critical server might take 5-10 minutes to shut down once the signal is sent, you might send the shutdown signal after just 5 minutes to give it plenty of time to shut down properly.  You do not want the battery to run out, or the UPS to turn off, during the shutdown process!
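
The timing arithmetic is simple enough to sketch; the figures below mirror the example above, with a deliberate distrust of the runtime estimate:

    # Conservative shutdown timing: distrust the runtime estimate and
    # leave room for a slow shutdown.
    estimated_runtime = 30    # minutes of runtime the UPS software claims
    worst_shutdown = 10       # worst-case minutes the server takes to stop
    trust_factor = 0.5        # treat the estimate as half as good as claimed

    delay = estimated_runtime * trust_factor - worst_shutdown
    print(f"Send the shutdown signal within {delay:.0f} min of power loss")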

Guide to Personal and Small Business Backups – Technical Concepts

This article builds on the high level conceptual framework introduced in our previous backup article.

I will explain technical concepts and related terminology to help you design a Backup System for use in business or home.

Related Articles:

01 GUIDE TO PERSONAL AND SMALL BUSINESS BACKUPS – CONCEPTUAL FRAMEWORK


Backup Methods Classified by Viewpoint

We can categorise backup processes into:

  • Backups that target files and folders (file level backups)
  • Backups that target full disks or partitions (block level backups)

For the most part, these types of backups are distinct in technology used, limitations, risks, and most importantly, outcomes.  I will try to clarify the differences and why they should be taken into account when developing a backup system.

File level backups aim to make a copy of discrete files, such as your photos or documents.  This type of backup focuses on each file as a discrete unit of data.  When you copy your documents across to an external HDD, you are implementing a file level backup.

Block level backups are a little different and often misunderstood, resulting in backup designs that fail to protect your data.  The aim of a block level backup is to copy all the blocks of data contained on a partition.  To make appropriate decisions regarding your block level backup design, it is important to understand how files are stored and what we mean by a disk, a partition, and a block, so let's cover the basics.

(Diagram: the blocks on a disk, with one file's blocks highlighted in blue)

A disk stores data in small chunks, which can be referred to as blocks.  When you save a file, it is cut up into small pieces that fit in the blocks available on the disk.  The locations of the blocks used by a file are recorded in an index, along with the name of the file.  When a file needs to be opened, the index is used to find out which blocks need to be read and stitched back together.  In the above image, you might consider that a big file has been split across all the blue blocks, with the other squares representing other blocks on the disk.

A “block”, in the context of a block level backup, refers to one of those same-sized portions of your disk.  A block may, or may not, contain a piece of a file.  It may in fact be blank, or contain old data from a file you have deleted (this is why deleted files can sometimes be retrieved).

You will encounter the term “partition” or “disk partition” when setting up a block level backup.  A partition is the name given to the part of a physical disk that is set up with its own index and set of blocks to contain files.  It is possible to set up many partitions on a single physical disk, but often each disk will have only one visible partition, so people tend to use the terms disk and partition interchangeably.  C:, for example, is a partition, but it might also be called a drive or disk.

The image below shows two physical disks and the partitions located on each.  Note the disk holding C: also has two hidden partitions, which help with the boot process into the Windows installation located on C:.  The second disk has just the one partition, represented by D:.  The critical information is in the C: and D: partitions, but it is normally best to back up the lot to make system recovery easier.

(Diagram: two physical disks and the partitions on each)

Block level backups don't worry about the content of the blocks; they just copy all the blocks of data and, in doing so, happen to capture all files on the backed up partition.  Most often the backup will skip blocks that the index says contain no current files, to save time and space.  While this method sounds more complex, it is pretty simple from the user's perspective and is comprehensive, where file level backups are more prone to missing important data.
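
A toy sketch of the two viewpoints, with a pretend disk of fixed-size blocks and an index (file names and block numbers are invented for illustration):

    # Toy model of a partition: fixed-size blocks plus an index recording
    # which blocks each file occupies.
    disk = [b""] * 16                      # 16 blocks, mostly blank
    index = {"photo.jpg": [2, 3, 7],       # file -> blocks it occupies
             "notes.txt": [5]}
    disk[2], disk[3], disk[7] = b"JPG1", b"JPG2", b"JPG3"
    disk[5] = b"hello"

    def file_level_backup(names):
        """Stitch each chosen file back together from its blocks."""
        return {n: b"".join(disk[b] for b in index[n]) for n in names}

    def block_level_backup():
        """Copy every block the index marks as in use; skip the blanks."""
        in_use = sorted(b for blocks in index.values() for b in blocks)
        return {b: disk[b] for b in in_use}

    print(file_level_backup(["notes.txt"]))   # {'notes.txt': b'hello'}
    print(block_level_backup())               # blocks 2, 3, 5 and 7 only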

Block level backups introduce some issues that can result in backup failures, but also advantages, such as the ability to back up open files using another layer of technology.  That leads us into a funky new(ish) Microsoft technology called VSS.

How to Backup Open Files including System State – VSS

Have you ever tried copying a file that was in use?  In most cases, Windows won't let you copy an open file; it is “locked” while open to prevent potential damage to the file.  When you start Windows or any program, many files will be opened and cannot be copied safely using basic file level methods.  For this reason, most file level backups will fail to back up open files, including program and Windows files.

Block level backups support a technology that allows blocks to be copied while their related files are in use.  This means they can back up your operating system and program files and allow a complete restore of your system state and personal files.

The technology used to back up open files under Windows is the Volume Snapshot Service (VSS, also called Volume Shadow Copy).  A snapshot refers to the state of the disk at a moment in time, and the technology maintains access to the data as it was at that moment.  Once the snapshot is made, usually taking only moments, the system can continue to read and write files, so you can keep using the computer.  The system will preserve any data that would otherwise be overwritten after the snapshot, so it remains accessible to the continuing backup process, while new data is tracked and not backed up, preventing any inconsistency.  This is not a trivial process, and things can go wrong.


VSS incorporates a number of parts, including “writers” specific to various programs, designed to ensure their disk writes are complete and consistent at the point the snapshot is taken.  For example, a database writer would ensure all transactions that might be only partly written are completed before the snapshot, removing the risk of a corrupt database on restoring the backup.  Certain types of programs need a specific writer to support them, and if a writer fails, a “successful” backup can contain damaged files.  Sometimes, part of the VSS service itself can fail.

VSS is mature and works well in most cases, but you will still find that a VSS backup is more likely to fail than a simple file copy backup.
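
On Windows you can check the health of the VSS writers before trusting a backup.  A hedged sketch shelling out to the built-in vssadmin tool (run it from an elevated prompt):

    import subprocess

    # "vssadmin list writers" reports each VSS writer and its last error.
    # Any writer not showing Stable / "No error" is a warning sign for
    # the next backup.
    result = subprocess.run(["vssadmin", "list", "writers"],
                            capture_output=True, text=True)
    for line in result.stdout.splitlines():
        line = line.strip()
        if line.startswith(("Writer name:", "State:", "Last error:")):
            print(line)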

Some backup tools that focus on backing up particular files and folders can also use VSS.  This blurs my definitions, since they use similar technology to a block level backup but with the focus of a file level backup.  This hybrid approach is worthwhile where you want the advantages of a block level backup but need to exclude some files, such as large videos you don't care about.  Keep in mind that VSS adds complexity, but it's OK to use VSS-based backups where you have the technology available and you are careful about verifying your backups.

Backup Archives Classified by Archive Type

Now that we have an idea of the technology behind block level backups, I will go over the rudiments of backup archive types.  These concepts can apply to file or block level backups, but they tend to be more relevant to block level processes.

When you set up a new backup process, the first backup you typically perform is a “full” backup, including all data present on the source.  Subsequent backups can vary.  You can back up all your files each time, or copy just those files that have changed or are new.  There are more options than you might realise.  I will address the terminology that refers to these methods and outline typical use.

Backup Set/Archive:  A set of files that, when considered as a whole, includes a complete set of backed up files over a period of time.

A backup set created by a dedicated backup program will often consist of one file per backup, containing all files or data captured during that backup.  A backup set will then normally contain a number of files accumulated over time, but they won't look like the original files.  It is important that you check these sets and ensure they actually contain your files.  Don't just assume all your stuff is in there.  Size is a good hint: if the set is far smaller than your files, something is wrong.

If you have set up a simple file level backup, the archive set might be held in dedicated container files, such as a zip file, or might be a simple mirror of the original files.  I like methods that result in a simple mirror of your files for home backups, as it lets you quickly check what is in your backup set and is less prone to mistakes.

Full Backup:   Takes a complete, new copy of all files or data on your source and adds it to your destination storage.

A word of caution here: a “Full Backup” does not mean you have backed up all your data.  It depends on what you have selected for the process to back up.  Make sure you know where your data is before setting up your backup system, and do not trust programs that try to select your data for you; they may miss important files.  This comes back to a concept from my first article: make sure you have visibility concerning your backups.

Incremental Backup: Backs up new and changed files since the last backup.

An incremental backup will normally be much smaller than the full backup, and commensurately faster to complete.  Using incremental backups is recommended where you add or change relatively few files over time.

An incremental backup depends on any prior incremental backups as well as the original full backup, so if any file in the chain is damaged or lost, you will lose data.  In theory you could take one full backup and then nothing but incrementals for years.  Don't: create a new full every now and then.

As a safety precaution, if a backup program tries to create a new incremental backup and can't find the full backup it depends on, it will normally try to create a new full backup instead.


Differential Backup:  Like an incremental backup, but backs up all new and changed files since the last full backup.

Differentials are less commonly used than incrementals.  They play a role where you have relatively large incremental backups, helping manage space by letting you delete some older incremental backups without needing a new full backup.

Continuous Backup: A somewhat misleading term that normally refers to a chain of frequent incremental backups that are later stitched into a single backup archive.

Continuous backup is a more advanced function only available in business grade backup solutions.  Incremental backups are incorporated into the original full backup by a cleanup process, and the oldest data may be stripped out to keep the size under control.

Continuous backups add complexity with the consolidation and cleanup process, but they have significant advantages: they avoid the load placed on systems by running repeated full backups, and they are ideal for immediate offsite backups, where small incremental backups can be transferred via the internet to give you an up-to-date offsite copy.

Advanced systems can go one step further and use the backup image to run an instance of the backed up system as a virtual machine in the cloud.  Great for continuity and disaster recovery.

Common Backup Settings and Options

Once you have decided on the type of backup archive, or more likely a combination of archive types, you need to determine how and when the process will operate.

Backup Schedule: Set an automatic schedule for your backups

A backup schedule usually involves a combination of archive types set to appropriate frequencies.  It is important to schedule backups at times when your backup destination will be available and the computer will be on.  If you miss an automated backup, you can always trigger a manual one as needed to cover it.

There are many different interfaces used in backup programs and it is usually worth looking at the advanced or custom options to ensure your schedule is set correctly, rather than going with default settings.

A common schedule would be a daily incremental backup, with a new full backup about every month or three.

Retention and Cleanup:  Manage your backup archives by removing old backups in order to maintain space for new ones.

It is very important to consider how long you need access to backed up files.  For example, if you delete a file today, or perhaps a file is damaged and you don't notice, how many days or months do you want the old version kept in the backup archive?  Forever is great, except you need to consider how much space that requires!

You should also consider possible problems or damage that might impact your backups.  When operating with full backups, it is best to keep the old backup set until after a new one has been created, just in case the new one fails and you are left with no backups.  You can bet that's when your HDD will die.

Given that a typical backup system might involve an infrequent, very large full backup and more frequent, smaller incremental backups, carefully considering your retention plan can save a lot of space.  Below is an example of how you might set retention (I suggest more time between full backups than this example shows; a month is probably reasonable, but it depends on your situation).

(Retention example: full backups on Mondays, daily incrementals, six day retention)

In the above example, a full backup is run on Mondays, with all other days set to incremental backups.  Disk space is limited on the backup hard disk; let's assume it can't fit more than two full backups and some incrementals.  With the retention period set to six days, backups will sometimes be kept for more than six days, because backups within the six day window depend on older incremental or full backups.

In the above example we have 12 days of backups stored and two full backups.  If the system deleted all backups made before Sunday, the Sunday backup would be useless.  The system should be smart enough not to (hopefully!).  At this point, the backup disk will be near full, with inadequate space available to create a new full backup, but consider what happens just before the third full backup is due.

(Retention example continued: the oldest backup set is removed in one go once nothing in the retention window depends on it)

The idea above is that once the oldest incremental backups are no longer within the retention period, we can clean up (delete) the whole oldest backup set in one go.

In this way the old set is kept as long as possible, but is deleted before the next full backup is due, so the backup program does not run out of space on the following Monday.

See any possible issues with this retention? Any mistakes?

There are a number you should consider.  Setting a tight schedule like this may not work as expected.  How does the program interpret six day retention?  Is it inclusive or exclusive when it counts back?  What happens if you set it to five or seven days?  What happens if the cleanup task runs before, or after, the backup task on a given day?  (That last one is particularly important and a common mistake.)
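
This is worth simulating before you trust it.  A toy sketch of the Monday-full/daily-incremental example above, showing what a sensible six day retention pass keeps on the Sunday before a new full backup (the dates are arbitrary, and real products each have their own rules):

    from datetime import date, timedelta

    RETENTION_DAYS = 6

    def backups_through(end, days=14):
        """Monday = full backup, every other day = incremental."""
        start = end - timedelta(days=days - 1)
        return [(start + timedelta(d),
                 "full" if (start + timedelta(d)).weekday() == 0 else "incr")
                for d in range(days)]

    def kept_after_cleanup(backups, today):
        cutoff = today - timedelta(days=RETENTION_DAYS)
        kept = {d for d, _ in backups if d >= cutoff}
        # An in-window incremental also pins the chain back to its full.
        for d, kind in backups:
            if d in kept and kind == "incr":
                for d2, k2 in sorted(backups, reverse=True):
                    if d2 < d:
                        kept.add(d2)
                        if k2 == "full":
                            break
        return sorted(kept)

    today = date(2015, 11, 15)    # a Sunday: the next full backup is due Monday
    for d in kept_after_cleanup(backups_through(today), today):
        print(d, "full" if d.weekday() == 0 else "incr")

Run with these dates, the whole first week's set drops out in one go while the current chain survives; change the retention to five or seven days, or run cleanup at a different point, and the behaviour changes, which is exactly why you verify.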

You must check that the system works as planned by manually verifying that backups are cleaned up the way you intend, on the days you intend.  Failure to verify your system will inevitably leave a flaw you may fail to notice, and leave you vulnerable.

Compression: A mathematical process that reduces the space used by your backups.

When setting up the most basic file level backup, you probably won't use compression, but nearly every other backup method will compress your files to save space.  This is a good idea, and you normally want to go with the default settings.

Most photos and videos are compressed as part of their file format, and additional compression won't help.  For files that are inefficiently stored, where their information content is much less than their file size, compression schemes can save a tremendous amount of space.
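
You can see both effects in a few lines (random bytes stand in for already-compressed media here):

    import os
    import zlib

    # Text compresses dramatically; already-compressed data barely shrinks.
    text = b"Backups of plain text compress extremely well. " * 200
    media_like = os.urandom(10_000)    # stand-in for JPEG/MP4 data

    for label, data in [("plain text", text), ("already compressed", media_like)]:
        packed = zlib.compress(data, 6)
        print(f"{label}: {len(data)} -> {len(packed)} bytes "
              f"({100 * len(packed) / len(data):.0f}% of original)")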


Encryption:  A mathematical process, based around a password, that scrambles a file so its information is not available unless the password is used as the key to unscramble it.

Modern encryption cannot be broken as long as a secure and appropriate algorithm and password are used.  Passwords like “abc123” can easily be guessed or “brute forced”, but passwords like “Iam@aPass66^49IHate!!TypingsTuff” are not going to be broken unless the attacker can find the password through other means.
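
Good backup tools turn your password into the actual encryption key with a deliberately slow derivation function, which is why password strength is what matters.  A minimal sketch using PBKDF2 from Python's standard library (the iteration count is an arbitrary but typical choice):

    import hashlib
    import os

    # Derive a 256-bit encryption key from a password with PBKDF2.
    # The high iteration count makes every brute-force guess expensive.
    password = b"Iam@aPass66^49IHate!!TypingsTuff"
    salt = os.urandom(16)    # stored with the archive; it is not a secret
    key = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)

    print("key:", key.hex())
    # A weak password like b"abc123" yields an equally valid key, but an
    # attacker only needs to try the obvious candidates first, so the
    # strength lives entirely in the password.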

Encryption is dangerous!  If you lose the password, your backups are useless.  If part of the file is damaged, you probably won't be able to get any of it back.  It adds another layer of things that can go wrong.  These risks are relatively small, so encryption is a good idea where your backups contain sensitive information, but if it's just a backup of the family photo album, I suggest you don't encrypt.

VSS:  A technology related to block level backups that allows files to be backed up while open and in use.

You should normally enable VSS; however, if you find errors with VSS causing backups to fail, it is OK to turn it off in some situations.  Make sure you understand what you lose if you turn off VSS, e.g. database backups may fail.

Intelligent Sector Backup:  You may see this idea under a number of terms for partition and full disk backups.  The option prevents blocks containing deleted files or blank space from being backed up, and so saves a lot of space.  You normally want this on.

Archive splitting:  Many backup programs can split up backup archives into smaller files.

This was traditionally used where backups were split across limited removable media such as CD-ROMs, and it is not usually relevant where we are backing up to external HDDs, NAS boxes, or other storage with plenty of space.

Notifications:  Most backup programs will send you an email on the success or failure of a backup process.

It is best to have the program send you a message on each backup.  You will find them annoying and just delete them, but that's OK: at least you will notice after a while if the messages stop.  A failure message is handy, and you are more likely to notice it, but understand that the program can always fail in ways that mean you never get that message.

Do not rely on fail messages or assume their lack means your backups are running.  Manually verify backups from time to time.


So, when do we get to the nitty gritty?

Sorry.  We are getting there!

In the next article I will outline the hardware and common tools that may form part of your backup system, then in the final article I will go through the nitty gritty and some examples of home and small business backup.

How to Choose Surge Protection

Every year, as Christmas approaches, we see an influx of PCs, modems, and other equipment killed by power surges.  It is that time of year again, so to head off some of the issues, I thought a timely reminder was in order.

When a burst of energy is dumped into the grid, a surge results and the voltage at your power point will jump up above the normal 240V.  When the voltage moves above the level that your equipment is designed to handle, damage results. 

Small surges cause cumulative damage to electronics, and you won't notice it happening.  When your computer reboots, or later just dies, you won't know that your mains power was the cause.  The most obvious impact is a big surge, or spike, which may immediately kill your equipment.

A surge is just one of a range of power issues described with terms like spike (same thing, but shorter duration), sag (voltage drop), transient fluctuation, interruption, line noise, and others.  These are all situations where supplied power moves outside the “normal” range and is generally called “dirty” power.

What causes surges?

The most obvious source in Brisbane is lightning.  Lightning is a big spark, triggered when the potential between two locations, normally charged clouds and the earth, grows big enough to cause a mighty spark to arc across the air.  Air is a poor conductor and the lightning will travel along the easiest path, so as it reaches the ground, it will jump to any tall structure that is more conductive than the air, and then follow the path of least resistance to the ground.


Lightning does not have to strike a building, or even a nearby power pole, to cause damage.  When lightning strikes, mutual induction produces a boost of up to thousands of volts in nearby cables, including phone and cable internet wiring.  Induction can result in a surge many kilometres from the strike.

The most common damage caused by lightning originates from distant strikes and takes the form of cumulative damage through moderate surges.  As strikes get closer, damage increases to the point that a single surge might take out your PC.  Surge protectors can still help at that level.  A strike that is very close, say on the power pole out the front, will create such a huge surge that nothing short of unplugging your power and other cables will protect you.

If a surge protector is a nice little spillway alongside a small and well behaved creek, maybe diverting a bit of water when the creek's flow gets a bit excitable, then a nearby lightning strike is a tidal wave smashing over the creek and spillway, drowning the whole region.  It will overwhelm everything to reach your equipment.  This is why you should unplug before a storm!

Other sources of power surges and related issues include utility-side causes, such as switching generators or capacitors on and off, and local issues such as the use of heavy equipment, or even household appliances like refrigerators, air conditioners, or fluoro lights.  Ever heard the click through your Hi-Fi when the fridge turns on and off?  Yeah, that's not good!

How surges reach your gear

We see a lot of people who conscientiously unplug their equipment every storm, only to find their computer and modem not working the next day.

In addition to mains power, any other cable coming into the house can also carry a charge.  Phone lines, Foxtel, your roof aerial, and cable internet connections can all damage attached equipment.

In extreme cases the surge may pass through a chain of attached electronics.  A nearby lightning strike might fry your modem through the phone line, then run through the network cable to your PCs and kill them as well.

We see this sort of issue as burned traces and components on the mainboard, originating from the network port.

When it wasn’t a lightning strike

Cumulative damage can be caused when power fluctuations are not severe but are still outside the design range of the electronics.

If these types of issues happen frequently, they will cause ongoing damage to your system components until the components fail.  This type of damage is called “electronic rust”, and you can see the results through a microscope.

Many failed parts returned to us have not failed through normal wear and tear, but rather through excessive wear from dirty power.

Surge Protection Standards

There are some near worthless products on the market labelled as surge protectors.  While we are somewhat protected from false product claims in Australia, it is still very easy to massage figures for surge protectors.  The best way to compare products is to go with trusted brands and to look at specifications that reference standards for measurement.

The most commonly used standards for surge protectors are those developed by Underwriters Laboratories (UL) in the US, specifically UL 1449.  This standard specifies the waveforms to be used in testing, defines terminology and test procedures, and categorises the types of protector.  It's a useful reference when comparing specifications of surge protectors, as otherwise the same unit might quote figures that vary widely depending on how the tests were devised.  Better quality surge protectors tend to quote this standard in their specifications.

Other standards are published by the Institute of Electrical and Electronics Engineers (IEEE) and other professional bodies around the world.

It is unwise to trust specifications that don’t refer to a respected standard.  Don’t take unrealistic numbers on the box at face value!

How Surge Protectors Work

There are three basic types of surge protection: SPDs (surge protective devices), line conditioning/filtering units, and data/signal line devices.  Each provides a certain type of protection, but what they have in common is that their job is to manage dirty energy.  Understand that they cannot create or destroy energy, only work with what comes out of the plug by modifying, diverting, or dissipating it.

Domestic surge protectors incorporate a sacrificial component, usually a metal oxide varistor (MOV), that burns away to dissipate excess energy as heat.  When the voltage rises too high, current is diverted through the MOV to ground.  The energy consumed by the MOV allows the main line to come back to normal voltage for your gear, but at the same time that energy burns away the MOV.  Once the sacrificial MOV is used up, the unit can no longer reduce the voltage to your gear; it stops working as a surge protector, but will probably keep working as a power board.

Some surge protectors incorporate a fuse, so that if the MOV can't handle the surge, the fuse will burn out on the line to your equipment, cutting power instead of letting the surge through.  This will only work once, and the surge protector then stops working even as a power board, so fuses are not common in domestic protectors.

More advanced surge protectors incorporate components designed to massage the power signal into a cleaner form to keep electronic gear happy.  There are many ways to do this, and the most effective can be quite expensive to build.  Fortunately, while this feature is useful, it is less important for most household electronics than basic surge protection.  It can be of significant benefit to some equipment, but buy a high quality power supply for your PC and you can do without it in a typical Brisbane house.


Specifications that Matter

Energy Absorption Rating:  An indication of how much energy the unit can absorb before it stops working, measured in Joules.  This number represents a consumable, the metal oxide that is used up by many small surges or potentially by one big surge.  The bigger the number, the longer the board will last and the bigger the maximum surge it can handle.

It is important to read the fine print and check the rating based on the UL 1449 standard, as advertised numbers are normally much higher than the standards-based numbers and are not useful when comparing products.  You want to see a number over 1000 Joules based on UL 1449.

Indicator Lights and Fuses:  When the sacrificial component used to dissipate energy is gone, or in other words when the energy absorbed has exceeded the unit's rating, the protector no longer removes surges but may keep working as a power board.  There is no obvious way to tell that surge protection has failed, so some manufacturers add an indicator light to show when the surge protector needs replacing.

A surge protector may also, or alternatively, incorporate a fuse designed to burn out when a surge comes along that the sacrificial component can't handle.  If the fuse goes, the board will stop working entirely.

Clamping Voltage:  The voltage at which the protector kicks in.  If it doesn't kick in until the voltage is too high, the surge may damage your gear before the surge protector starts working.  A number to aim for in Australia is 275V (mains power fluctuates around 240V).  Cheap units will tend to clamp at 400V or higher.  Note the lower the clamping voltage, the more energy will be diverted to the sacrificial component over time, so the protector will tend to wear out faster, but much better the protector wears out than the electronics it is protecting!

Response Time:  How long the protector needs to start working after the voltage goes into the red zone.  If it is too slow, your gear is damaged before it kicks in.  A good quality protector might have a response time of 1 nanosecond or less.  Cheaper units tend to be slower and may allow significant damage to occur before blocking the surge.  Don't confuse detection time with response time: detection doesn't matter, response does.

Maximum Transient Spike:  How much current the device can handle when a large burst of energy comes through, such as with a nearby lightning strike.  Again, look for the UL 1449 rated value, and you want to see big numbers: above 30,000 amps based on UL 1449 testing is good.

Power Filtering / Line Conditioning:  Aims to provide clean AC power by reducing electrical line noise.  There are various ways to design filters, and the specs here can be misleading.  Normally more components and more cascading circuits are a good thing, and active tracking is a premium feature to look for.  Filtering appears only on high end models; if it is cheap and says it's a filter, it is probably not a very good filter.

Circuit Isolation:  Some models in a power board configuration provide isolated circuits for separate banks of plugs.  (Frequency isolation is less effective than true circuit isolation.)  This feature is handy when you plug electrically noisy equipment into one bank so it doesn't interfere with equipment in the other banks.  Particularly useful for Hi-Fi gear.

RJ45 Protection:  A socket pair for a network cable, which can stop a surge getting through from your modem/router to the PC it is connected to.

AV/TV and Cable TV Protection:  A pass-through to handle surges coming through your aerial and/or cable TV coaxial cables.

Insurance:  Most better brands will back their protectors with insurance, paying damages if a surge gets through one of their protectors.  A close lightning strike has so much power behind it that it can punch through a normal protector, and in most cases I have heard of, the quality manufacturers will still pay up.  Insurance is a nice bonus.

Warranty:  Protectors wear out over time as they intercept surges.  The boards I have tend to last years, but that's with pretty good mains power and high end surge protectors.  Other staff here tell me they get 1-2 years in more outlying areas where the power is not great.  Again, the quality manufacturers will tend to replace the product even when it stops because its capacity has been exceeded (I know Thor will; I don't have personal experience claiming with other brands).
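
If you want to compare boxes on the shelf, here is a quick sketch that pulls the numeric thresholds from the specs above into one checklist (the two example units are invented):

    # Sanity-check a protector's UL 1449 based specs against the rough
    # thresholds suggested above.
    def check_specs(name, joules, clamp_v, response_ns, spike_a):
        checks = [
            ("energy rating >= 1000 J", joules >= 1000),
            ("clamping voltage <= 275 V", clamp_v <= 275),
            ("response time <= 1 ns", response_ns <= 1),
            ("max spike >= 30,000 A", spike_a >= 30_000),
        ]
        print(name)
        for label, ok in checks:
            print(f"  [{'ok' if ok else 'WEAK'}] {label}")

    check_specs("Quality unit", joules=2000, clamp_v=275, response_ns=1, spike_a=36_000)
    check_specs("Bargain bin", joules=300, clamp_v=400, response_ns=5, spike_a=6_500)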

When to use a Surge Protector?

From the Insurance Council of Australia:  “It is advisable to use surge protection units, designed to minimise the effects of power surges, on all ‘big ticket’ items in the home including the fridge, television, stereo and computers.”

In my view, surge protection helps most with sensitive electronics such as modem/routers, PCs, Hi-Fi gear and so forth.  And sensitive electronics are not limited to the obvious items nowadays…

The last washing machine I bought stopped working a couple of weeks later.  It turned out to be the control board, and while I doubt it was a surge that time, watching the tech replace the board reminded me how much electronic components are part of our general equipment nowadays.

The tech was near retirement and I had a good chat with him.  He told me that the old washing machines of the era I was replacing lasted much longer than the modern ones.  He had a bit of a conspiracy theory that the boards in them were designed to last only so long before burning out.  He knows what he has seen over the years, and that was his interpretation.  Mine was a little different: it got me wondering whether the issue was surges over time that would not have been a problem for older models and their simple control systems (and maybe manufacturers keeping costs down on components, so just perhaps they don't handle normal surges as well as they could!).  I don't have a surge protector attached to my washing machine; instead I turn it off at the wall when not in use, but it does get you thinking about equipment you do leave on.

On average, you will tend to increase the life of your electronic gear if you run it attached to a quality surge protector.

What about a UPS?

A UPS, or Uninterruptible Power Supply, is often confused with surge protectors.  Some people assume they are a better type of surge protector; not true.

A UPS is a battery that switches in if the mains power is interrupted.  We sell them with all server equipment so the servers have enough time to shut down without damage to files and databases if there is a power outage.  They are also used to keep critical services, such as phone systems and servers, up for a while during power loss.

A UPS does not necessarily protect from surges.  Most will, but it is better to think of them as a battery backup with incorporated power protection.  Power protection in entry level UPS models is poor.

A UPS does not necessarily sit above surge protectors in the power protection hierarchy.  In fact, there are cases where we have sold top quality surge protectors to clients using cheap UPSs, to prevent surges getting to, and through, the UPS.  It can also be worthwhile using a surge protector in front of even a top quality UPS to help protect the UPS.

What are the best value brands?

Surge protectors from most major manufacturers will do at least a basic job.  You can pretty much rate them by cost: a $20 surge protector is possibly better than nothing, but it is not going to last long or do a very good job.  Think of it as a power board that might have some other benefits.

We have sold the Thor brand at the high end over the years with good results.  Reports from customers indicate that they do their job well, and I know Thor looks after people with warranty and the rare insurance claim.  Most of my protectors at home are Thor.

At the entry level, pretty much any major brand around the $50 mark will get you a decent if limited surge protector, appropriate for protecting lower-priced equipment.  We sell some Belkin surge protectors and various other brands, all adequate for basic protection.

The bottom line: get something for any equipment you care about, and for expensive gear, get an appropriate high-end board.  Remember to replace your protectors once the indicator tells you their protection has worn out, and when you hear a storm coming, unplug!

Microsoft Rolls Out Office 2016

MS Office 2016

Microsoft released the 17th major version of Office today, Office 2016.  It is 25 years since Office came from nowhere to dominate against the incomparable WordPerfect 5.1 and Lotus 1-2-3.  Office adoption was rapid and near universal, exploding alongside Windows and becoming so strongly associated with Windows that many believed they were part of the same product.

Over those 25 years, each new version of Office included major improvements and new features.  Each release was a big deal, though on average less so as Office matured.  Office 2016 again offers significant improvements over 2013, though these are evolutionary rather than a radical overhaul.

How does Office stack up in the age of the app?  Is it still relevant?  Some say that "apps" can replace massive traditional applications such as Office.  So far, the depth and breadth of features needed for office tasks has kept Office at the forefront of content creation in business and home environments.  Yet the success of technically simple, shallow apps that perhaps do just one thing very well must be a lesson for the old guard.  How can an application like Office improve by taking lessons from the success of modern apps?


This time, it's different

Unlike any previous version, this time round many users will receive the upgrade to 2016 as part of their Office 365 subscription.  For the first time in the product's history, the Office 2016 release is a genuine rollout rather than a launch.

Office 365 Home

Microsoft's recent focus on subscription models seemed, at first, a financially driven change aimed at developing consistent revenue and providing a one-stop ecosystem of products for a single fee.  Subscription bundles shift the focus to revenue per customer rather than revenue per product, and make use of the competitive advantage held by a company with a vast breadth and depth of product.  Bundling products into a single fee also encourages the adoption of new products and services that might not take off if sold individually.  Those elements are without doubt an important part of this strategy, but there is more to it.

I see subscription as a model facilitating a rapid, agile way of developing software: a change designed to compete in the age of apps.  Continuous improvement is delivered through frequent updates, with features released one at a time rather than saved up for a single big change every few years.  Agile and iterative development.  Listening to what the customer wants, and making it happen.  Fast.  Without a subscription-based model, frequent releases can't work.

With subscription in place and growing, expect to see the end of major releases in the not-too-distant future.  The old model will be replaced with subscriptions, continuous development of core applications, and a growing ecosystem of supporting apps that integrate with and complement the core applications.  I see a strong future for Office as the centre of this ecosystem.


What’s Changed? – Collaboration and Intelligence are the Focus

You will notice an incremental improvement in the user interface and appearance of Office 2016; nothing major or unfamiliar, just a bit of polish.  This is no bad thing: there is no Windows 8-style disruption to be seen here.

Word 2016 interface

Collaboration tools and cloud integration have received substantive improvements and are, perhaps, a compelling reason to upgrade on their own.  Microsoft has caught up with Google Docs and its own online versions of Office, with real-time collaboration now available in the desktop version of Office.

Word 2016 improves the co-authoring feature, with ongoing improvements across the suite likely to be added via patches.  Changes made by multiple authors now propagate automatically, removing the annoyance, potential conflicts, and time wasting of manually updating and refreshing changes.

Office is more tightly integrated with Skype for Business, a product under rapid development in its own right.  So far this is a quality-of-life improvement for people wanting to chat via Skype while collaborating on a document.  Tighter integration of Unified Communications via Skype for Business will likely roll out over time.

Collaboration_Office_2016


Machine Intelligence

Smart_Lookup

Office improves its smarts in 2016, with Cortana-style help and research systems integrated into the suite.  Each application's ribbon has a text box that accepts a natural-language statement of what you are trying to do and tries to help you achieve that intent.  It is sometimes helpful, and expect the feature to improve over time.  Similarly, a new "Smart Lookup" feature runs a web search against a selected term in a document, much like using Google or Bing, but with better results because the search considers the context of the document's content.

Updates to Excel include new chart types, including a waterfall chart.  Charting is also faster and more accessible with a new feature where Excel suggests the type of chart you need based on the data you select.  I'm not sure dumbing down like this is always helpful, but you can always reject a poor suggestion.  With a similar approach, and similar dangers, a new trend forecasting feature has Excel extrapolating historical data into a forecast with a likely range and customisable confidence intervals.  This could save time for an expert who understands the nature of their data and the algorithm behind the extrapolation, but it is another tool that will be abused by people who do not understand the limitations of this type of extrapolation.  Delegating your thinking to a machine is always a danger, so use it with caution.
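As a rough illustration of what trend extrapolation with a confidence band involves, here is a minimal sketch in Python using a simple linear fit and made-up monthly sales figures; this shows the general idea only, not the algorithm Excel implements:

```python
# Hypothetical data and a simple linear-trend extrapolation with a ~95% band.
# Illustrative only; Excel's forecasting feature uses its own algorithm.
import numpy as np

months = np.arange(1, 13)  # 12 months of history
sales = np.array([10, 12, 11, 14, 15, 14, 16, 18, 17, 19, 21, 20], dtype=float)

# Fit a straight line to the history and measure the scatter around it.
slope, intercept = np.polyfit(months, sales, 1)
residuals = sales - (slope * months + intercept)
sigma = residuals.std(ddof=2)

# Extrapolate three months ahead; the band is roughly +/- 2 standard deviations.
for m in range(13, 16):
    f = slope * m + intercept
    print(f"month {m}: forecast {f:.1f} "
          f"(likely range {f - 2 * sigma:.1f} to {f + 2 * sigma:.1f})")
```

Note that the band only describes how noisy the history was around the trend; it says nothing about whether the trend itself will continue, which is precisely the limitation casual users of the feature will miss.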

Excel_2016_waterfall


Finally, a fix for attachments in Outlook 2016

Outlook has a new way to attach documents to an email.  Documents stored on OneDrive that would normally be attached and sent with an email are instead attached as a link, giving the recipient access to view, download, or edit the document.

For those of us who manage mail servers and have clients asking why they can't send some 50MB file by email to the guy at the next desk, it's a great feature, encouraging users to look at productive ways to communicate and collaborate, and reducing the mess large attachments make of our Exchange servers and backups!

Other changes to Outlook include smarts that automatically organise your email into a new folder called Clutter (I am not a fan), and a new Groups feature to organise teams for collaboration.

Outlook_2016_attachment


And a new app built around Machine Intelligence – Sway

A new app called Sway has been available for a while and is part of the Office 2016 line-up.  It is something a bit different, intended to help you build and present a story.  Unlike PowerPoint, it helps you construct the presentation by starting with a topic and adding content from various online sources.

This is the first Office component with core functions dependent on machine learning algorithms.  As such, it may be mainly a toy for now while the technology develops, but it has the potential to become something quite impressive, to spin off features and ideas to other parts of Office, or to seed ideas for new applications.

Sway


It is still Office, and that’s just fine

At its core, Office does what it has always done.  There is no single app, or even group of apps, that can touch it for general business productivity tasks.  The collaboration tools, machine smarts, and growing ecosystem of related apps maintain Office as an essential tool for creators.


Subscription now the best model

When Microsoft first released subscriptions to Office, in almost all cases we suggested people steer clear.  They offered very poor value.  Microsoft took on board that feedback (and a lack of subscription sales!) and fixed what we hated about the subscription offers with Office 365.

At this time, most people are better off with Office 365 than with other ways to buy Office.  If you own an older boxed version of Office and are considering an upgrade to 2016, I strongly suggest you review the Office 365 offerings and compare them to the boxed 2016 versions.

Home users should take a look at Office 365 Home Premium, with its licence for 5 users and multiple installs of the entire Office suite, plus extras such as unlimited OneDrive storage and free calls to phones.  Comparable boxed 2016 products limit access to some Office programs and usually allow just a single install, so you need one copy per machine.  The traditional product does not include the extras or free upgrades.

Office 365 Home Premium

For business clients, there is a range of Office 365 options, many of which you won't see advertised on retail web sites (ask to talk to our BDMs for options).  Many Office 365 business plans include major cloud service extras such as Exchange for email and SharePoint, as well as more generous licensing terms.

Guide to Personal and Small Business Backups – Conceptual Framework

Scream

Too often I see our techs consoling a despondent customer, in tears, having irretrievably lost precious files.  Family photos.  Business records.  Blog articles (!).  All gone.  Yet some of those people have been “Backing up”.

A simple definition of “Backing Up” is a process that makes a copy of data onto a second device, a copy that can be used to restore the data if your primary copy is deleted or damaged.  A broader definition is any process that reduces your risk of losing data (files) or your system state (Windows, settings).  I prefer a more global term, Backup System: a collection of backup processes or other elements working together to reduce the risk of data loss and related harm.

You might reasonably believe that backing up is a simple process: before you run it, your files are at risk of being lost; afterwards, they are safe.  Run a backup, and it’s all good.  This type of binary thinking is prevalent even among IT professionals – Black and White, True and False, Risky and Safe.  Unfortunately, applying a binary worldview to backups will only get you into trouble by giving you a false sense of security.  Backups are not Black and White, they are Grey.

This article will disabuse you of false assumptions relating to backups, and introduce a conceptual framework you can use to design a Backup System and to protect your precious data.

Developing a Backup System is easy and effective if you use the right approach.  Clicking a button that says “backup” and hoping for the best is only good for gamblers!

MrBackup

Backup Systems are about Risk Management

The key concept here is risk.  Most people have a decent, if subconscious, understanding of risk.  The subconscious mind has a habit of simplifying complex concepts and can mislead you if you don’t consciously interrogate them.  So let’s consider what we mean when we refer to risk.  Risk relates to:

  • the Harm you will take if you lose some or all files or system state, and
  • the Probability of losing some or all files or system state.

In a business context, you might add other “harm” that can relate to backups, such as downtime, or files finding their way to unauthorised people.

So Risk = Harm * Probability.  That seems simple.

But how do you quantify Harm?  Say you look at a tender you are working on: perhaps you know it would cost $500 to rewrite, so you can assign the cost of losing that file with some accuracy.  What about the family photo album?  It is hard to assign a dollar amount to that.  You can probably make some rough estimate, but it is not possible to assign an exact value.  Priceless, perhaps.

What about the second element in the equation, the Probability (chance) of loss?  Probability can be very difficult to quantify.  What is the chance of your HDD failing, of being infected by a virus that wipes your drive, of throwing the whole thing out the nearest window when it’s misbehaving, and, tougher still, what about disasters you have not even thought of?  Again, you can only apply a ballpark figure to the likelihood of data loss.

The difficulty of determining the Risk Level you are exposed to leads to another concept that is implicit in backups, but not often addressed explicitly: Uncertainty.  Uncertainty, inherent in assessing risk, means that you can’t quantify your level of risk with accuracy; it necessitates a fudge factor, a safety margin to make sure you are not taking on too much risk.
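To make the arithmetic concrete, here is a minimal sketch using the tender example above, with entirely hypothetical numbers and a crude multiplier standing in for the uncertainty fudge factor:

```python
# A minimal sketch of the risk arithmetic with hypothetical numbers.
# The uncertainty "fudge factor" is a crude safety multiplier on the estimate.

harm = 500           # estimated cost ($) to rewrite the lost tender
probability = 0.05   # rough guess: a 5% chance of losing it this year
uncertainty = 2.0    # assume our estimates could easily be off by 2x

risk = harm * probability            # expected loss: $25 per year
spend_ceiling = risk * uncertainty   # worth spending up to ~$50/year to mitigate

print(f"Expected loss ${risk:.0f}/year; with a safety margin, "
      f"mitigation is worth up to ${spend_ceiling:.0f}/year")
```

The point is not precision.  Even rough numbers, inflated by a safety margin, give you a sane ceiling on what backup effort is worth for each class of data.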

Risk Level and Uncertainty lead us to our final concept, Acceptable Risk.

No backup system can reduce your risk of losing data to zero.  No such system is possible in our world.  Beware of anyone who tells you that their system is 100%!  Instead of aiming for zero risk, you should consider what your level of Acceptable Risk is, and weigh that against the cost to reduce your actual Risk Level.

Finally to the good news.  It is usually possible, with a little thought and attention, to vastly reduce your Risk Level inexpensively.  Developing an effective Backup System for a home or SME environment is about using available tools intelligently rather than spending a fortune.

Before we go into the How, we need to cover more abstract concepts that you can use to assess the backup methods you choose.  Without applying these concepts to critique your Backup System, it’s likely you will run into trouble and find your backups are not doing their job, inevitably when it is too late.


Develop your Backup System with Desirable Attributes

Certain attributes of a backup system tend to increase the likelihood that it will perform as desired.  When developing or assessing the quality of a backup system, you may want to consider the following attributes.

Simple as Possible

To make life that little bit more difficult (this is about computers, after all), some of these characteristics contradict one another, so you must apply some common sense where a trade-off is necessary.

  1. Simple – Never add complexity for marginal benefit.

Convoluted backup systems fail more often than simple systems because, by their nature, there is more to go wrong and less visibility into how the system works.  Simplicity leads to our second attribute.

  2. Visible – Know where your stuff is and how the backup system works.

The first step is knowing where your important files are.  The second is knowing what process is used to back up those files.  The third is being able to locate your files at your backup locations and verify that they are complete and viable.

  3. Automated – Make it work without human intervention.

Most data loss I encounter where there are no backups is followed by the line “I used to do it, just have not got around to it recently”.  The best systems keep working even if you neglect them, but a word of warning: automated does not mean you can skip manually verifying that the system works.  (A minimal scripted sketch follows this list.)

  4. Independent – Multiple backup processes and data locations should be unrelated.

Processes that depend on different factors are less likely to fail at the same time.  You might use an image backup and a simple file-copy backup on the same data, since a failure of one method will not necessarily mean the other also fails.  A backup located in another room is not as good as a backup located in a different building, and implementing both is better.

  5. Timely – Capacity to recover data that avoids damaging downtime.

Stopwatch

For a business, downtime while you recover files can be costly.  Assess how long your system needs to restore files and systems, and reduce that time where it is unacceptable.

  6. Cost Effective – Seek balance between cost and benefit.

Aim to find a sweet spot where the cost and effort put into your backups effectively reduces risk, and then stop.  Don’t fight your way to reduce risk just a little further when it requires massive extra cost, but also don’t be cheap and stop reducing risk when the cost to do so is minimal.

  7. Secure – Control access to sensitive data.

Consider the harm you will take if backed-up data gets into the wrong hands.  Where the harm is significant, consider encryption and other security techniques.  But do not apply security without due consideration: added security measures can, and usually will, increase the chance of your backup system failing.
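To tie a few of these attributes together, here is a minimal sketch of an automated file-mirror backup with verification, in Python.  The folder paths are hypothetical; a real system would pair a script like this with an independent second method (such as images or cloud copies) and run it on a schedule via Task Scheduler or cron:

```python
# A minimal sketch of an automated file-mirror backup with verification.
# The paths below are hypothetical; point them at your own data and an
# external drive or NAS, and schedule the script rather than run it by hand.
import filecmp
import shutil
from pathlib import Path

SOURCE = Path(r"C:\Users\Me\Documents")   # hypothetical: where your files live
DEST = Path(r"E:\Backups\Documents")      # hypothetical: backup destination

def mirror(source: Path, dest: Path) -> int:
    """Copy new or changed files from source to dest; return the count copied."""
    copied = 0
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        target = dest / src.relative_to(source)
        # Copy only when the backup copy is missing or differs (size/mtime).
        if not target.exists() or not filecmp.cmp(src, target, shallow=True):
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, target)
            copied += 1
    return copied

def verify(source: Path, dest: Path) -> int:
    """Byte-compare every source file against its backup; return mismatch count."""
    bad = 0
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        target = dest / src.relative_to(source)
        if not target.exists() or not filecmp.cmp(src, target, shallow=False):
            bad += 1
    return bad

if __name__ == "__main__":
    copied = mirror(SOURCE, DEST)
    failures = verify(SOURCE, DEST)
    # Visible: report what happened so problems do not pass silently.
    print(f"Copied {copied} files; verification failures: {failures}")
```

Because the result is just plain file copies, it is also highly visible: you can browse the backup location and open files directly to confirm they are complete and viable.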


Understand Concepts, Techniques, and set Objectives before you begin

Once you are comfortable with risk management, and the attributes you want to incorporate into a backup system, it is time to set objectives for your Backup System and decide how to achieve them.

To develop a plan, you will need a grasp of:

  • Your data and its characteristics: size, location, live or closed files, live services, etc.
    • Include files and systems.  E.g. an accounting data file might be critical, but the installed accounting package might also be worth backing up.
  • Importance/acceptable risk level related to the identified data.
  • Related risks such as downtime and stolen data.
  • Storage devices available/desirable and their capacity: external HDDs, NAS, cloud, etc.
  • Backup tools available/desirable: image creation tools, command line tools, VSS, etc.
  • Techniques possible: file mirror, images, full/incremental/differential/continuous, scheduled tasks, verification, encryption, cleanup, etc.
  • Contingency Plan – what can go wrong with backups and how those risks can be reduced.
  • Available budget.

Finally, start designing your system.

This article has covered some of the high-level concepts relating to backups, such as risk and desirable attributes.  It has not covered the types of backups possible, storage devices, or techniques.  Follow-up articles will cover these areas and provide walk-through examples of backup systems for home and business.

When Your Digital Life is Held to Ransom

crim_with_gun

Ransomware is a type of malware that locks you out of your files, device, or other resources, and demands a payment to regain access.  Even if you choose to pay up, the payment will most often not restore access, and your files are permanently lost.

The threat posed by ransomware is serious, with recent infections more devastating than any malware or virus we have seen in years.  The infection rate increased throughout 2015, with a range of new variants attacking customers on PC and other platforms.

We are asked to help businesses recover from a major ransomware infection about once per week.  Where the infected site has implemented a viable backup system, files and services can usually be restored with some effort and relatively few losses.  Unfortunately, many businesses and individuals that come to us for help did not have appropriate backups in place and have lost files at huge cost.  We have seen Ransomware infections threaten the ongoing viability of businesses.

A Bad Day in the Life of… John, the owner of a small manufacturing business, who has come into the office at the crack of dawn to catch up.  Coffee machine working.  Check.  Time to finally get this tender sorted.

Logging into his PC, John does a double take.  There is an impressive-looking red and white boxy thing on the screen: “Your personal files are encrypted! … pay us or they are gone forever.”

Files exist but nothing will open.  John’s not too familiar with the backup system, but he knows they have one.  Rings his accounts guy, who looks after that.  There are backups.  They sit on a NAS in a different building from the server, so they are “offsite”.

Tries to restore. Files won’t open. They have been encrypted as well. Very Bad Day.

How does Ransomware infect and hurt my computer systems?

The most common way to get infected is by clicking on an attachment or link in a malicious email.  The email may appear to have been sent from a reputable company such as Australia Post or Telstra.

Other methods of infection include visiting infected web sites, malicious code embedded in downloadable applications, and attacks on security flaws in operating system software and services.  Any attack that can run executable code on your computer has the potential to infect you with malware.

The most common Ransomware will scan your computer for valuable files, such as office documents and photos, and encrypt them.  Ransomware will also encrypt files on any network accessible locations or attached disks.  Once done, the malware presents a message with a promise to decrypt the files if you pay up.

Encrypted files cannot be recovered without a key generated by the criminals, or from backups.  Backup files accessible to the ransomware, such as those located on an attached drive, may be encrypted alongside everything else.  Backup sets may also be compromised: any backup process that runs after encryption has started will capture a mix of newly encrypted and unencrypted files, potentially overwriting the good copies, depending on the backup system design.  At best this leaves you sifting through backups for the last good copy.

By the time the damage is noticed, even if you have current backups, you may find your business suffers downtime to services while you repair the damage.  Even large organisations with multiple layers of security and backups have been hurt.  “There was an IT security issue this morning which affected some of the ABC’s broadcasting systems and created technical difficulties for ABC News 24, as a result, we broadcast stand-by programming from 9:30am before resuming live news broadcasts from Melbourne at 10:00am.” – ABC News, 8 October 2014

Cryptolocker3


Are my other devices vulnerable?

Ransomware is most common and dangerous on a Windows machine, but this type of malware is also growing rapidly on other operating systems and devices.

Infections on Android devices are normally triggered by a malicious app.  When activated, the malware tries to lock the user out of their device and demands payment to restore access.  There are workarounds to regain access in most cases, but not always.  A particularly nasty variant reported in September 2015 tricks the user into clicking a button that allows the app to change the device’s PIN and disable security software, permanently locking out the user.

Ransomware can also attack Apple iOS and OS X devices, such as iPhones.  In 2014, Apple customers, primarily in Australia, woke to a message on their phone screens: “Hacked by Oleg Pliss. For unlock YOU NEED send voucher code by 100 $/eur one of this (Moneypack/Ukash/PaySafeCard) to helplock@gmx.com I sent code 2618911226.”  This attack was probably indirect, with a vulnerability in users’ iCloud accounts exploited to then attack the phones.  A handy reminder to set up two-factor authentication on important accounts, and to set, and regularly change, a strong password on your accounts.

FBI__ransom_Android


I’m not at risk, I have a virus scanner!?

It is a common and dangerously mistaken belief that an up-to-date virus scanner guarantees protection from malware.  A virus scanner will protect against many known threats, and some newly released malware, but no scanner can protect against all of them.

Failure_300

When Cryptolocker attacked in late 2013, no virus scanner I am aware of picked it up or prevented its actions.  It went right through the scanners, as well as other security systems, and infected many thousands of computers.  Days and weeks later, virus scanners caught up and could block the malware, but then a variant of Cryptolocker was released to bypass them and a second wave of infections ensued.  To rub salt into those wounds, the attackers also took the opportunity to raise the ransom amount.

Research by LastLine Labs (and others) confirms that antivirus scanners are never 100% effective, and can never be: “On Day 0, only 51% of antivirus scanners detected new malware samples … After two weeks, there was a notable bump in detection rates (up to 61%), indicating a common lag time for antivirus vendors”.

Comments like the following are common after an infection, when it is too late and people have realised antivirus alone is not enough: “…got a Crypt/Dorifel virus (an early version of cryptolocker) and all Symantec was doing is quarantining the modified documents…”

Antivirus will protect you at times, but it cannot be more than one element of a protection strategy.


Do I get my files back if I pay?

Sometimes.  These programs are written to make the criminals money.  It makes sense, then, that if you pay up, usually hundreds to thousands of dollars, they will decrypt your files or unlock your device, so that the next victim has some hope and will also pay.

Sometimes paying will work.  More often, the remote attacking servers will have been taken down, access to them blocked to prevent further infections, the software simply breaks and will not decrypt correctly, or the attackers never intended to offer decryption at all.  If you get hit, seek advice on all other possible ways to restore lost files, and only try paying up as a desperate last resort.

CTB-Locker


Is there any other way to get my files back?

Restoring from your backups is the most reliable, and often the only, method of recovering from a ransomware infection.  Where backups are accessible from the infected computer, they may have been encrypted and therefore practically destroyed.  Offline backups are not vulnerable and may be an option if your most up-to-date backups have been compromised.

The effectiveness of backups depends on their design, and many poor backup system designs are vulnerable to ransomware.  If you are reading this as a business owner and are not entirely familiar with, and confident about, your backup system, please review it!  In our experience the majority of SMEs do not have an adequate backup system, and many who believe they have a working system in fact have no useful backups at all.

“CERT Australia was contacted by a number of organisations that had suffered significant business disruption as a result of corrupted backups.” – Australian Computer Emergency Response Team, Ransomware advisory

MrBackup

For users of some versions of Windows, you may be able to restore older versions of files, the ones present before the malware attacked, by accessing the shadow copy snapshots stored on your computer (if the feature is enabled – note it is not available in the basic home versions of Windows).
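If you want to check quickly whether snapshots exist on a machine, the built-in command vssadmin list shadows will list them from an elevated (administrator) prompt.  A minimal, illustrative Python wrapper might look like this:

```python
# Minimal sketch: list Volume Shadow Copy snapshots on Windows.
# Requires an elevated prompt; "vssadmin list shadows" is the built-in command.
import subprocess

result = subprocess.run(
    ["vssadmin", "list", "shadows"],
    capture_output=True,
    text=True,
)
print(result.stdout or result.stderr)
```

Note that many ransomware variants attempt to delete these snapshots as part of their attack, so treat shadow copies as a bonus, not a substitute for backups.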

In rare cases, security organisations have discovered flaws in the encryption process or intercepted encryption keys from the attackers, allowing files to be decrypted.  It is a long shot.

There may be other copies of your information cached or stored in places you are not aware of.  Think about files that may have been copied to pen drives, other devices, uploaded into the cloud, and so on.  There may be deleted files on devices that can be retrieved.

Ask for professional advice if you appear to have lost critical files.


How can I protect my business from Ransomware?

There is no single, simple solution that will insure a business against ransomware, similar malware, or related disasters, but implementing a number of measures can greatly reduce your risk of serious data loss or downtime.

Insurance

Some of the measures you can use to reduce the risk to your business:

  • Educate staff in ways you might be attacked and what to look out for to avoid infection, such as suspicious emails.
  • Implement a planned and verified backup system including an offline copy. Ensure a senior person fully understands the design and how to verify its operation.
  • Limit account permissions – do not log into a computer with an administrator account for general use. Limit individual users’ access to the network resources they need.
  • Where backups run across your network to a shared location, such as a NAS, set a password to limit access so that only the backup process can use that share.
  • Turn on User Account Control (it’s on by default; many people turn it off. Do not.)
  • Do not plug any PC still running Windows XP into your network (XP is no longer supported by Microsoft and is highly vulnerable to attack).
  • Turn on Volume Shadow Copy where available (it keeps old versions of files automatically).
  • Ensure operating system patches are fully up to date on all computers.
  • Reduce your attack surface.
    • Use firewalls and NAT routers so internet traffic cannot reach devices that may be vulnerable. Be careful with port forwards, and do not use the DMZ feature on your router unless you know what it is!
    • Block access to suspicious web sites at network level.
    • Turn off services you don’t need.
  • Block malware delivery with services such as spam and virus filtering, to clean email before it hits your user mailboxes, and web filtering, to block malicious or infected websites.
  • Ensure up to date antivirus systems are installed on all computers.
  • Implement software restriction policies to block execution of unknown software and other appropriate organisation wide group policies (for server based networks).
  • If you suspect an infection, physically unplug any infected machine from the network, and if you are not sure what to do, shut it down immediately. If encryption is not complete this will save files.


Further Reading

How My Mom Got Hacked – New York Times

Why Ransomware Will Continue to Rise in 2015 – McAfee Labs

Cutting the Gordian Knot: A Look Under the Hood of Ransomware Attacks – LastLine Labs

Antivirus Isn’t Dead, It Just Can’t Keep Up

Ransomware @ Wikipedia

New Windows 10 scam will encrypt your files for ransom @ ZDNet

Hackers lock up thousands of Australian computers, demand ransom @ SMH

Ransomware Advisory @ CERT

Ransomware Explained @ Microsoft

Cryptolocker @ Krebs on Security

Crypto-ransomware attack targets Australians via fake Australia Post emails @ ABC