WannaCry? Nope, WannaSwitch and WannaWork. Let ransomware strike empty space.

Ransomware, viruses and Co.

Until recently, topics such as ransomware, viruses, and unintentional data leakage were primarily associated with end-user devices. Lately, however, ransomware variants such as WannaCry, Petya, and Co. have caused a stir in the corporate server environment. Evidently, even high-class data centers, with their physical and logical access controls as well as their structural and network-level security, were unable to counter this threat.

Big names from the financial, chemical, and many other sectors of the economy, along with public and semi-public institutions, appeared in the press as presumed or confirmed victims. The fact that operators of critical infrastructures are among them prompts a close look at how current requirements such as the GDPR, KRITIS, IT-SiG & Co. are answered. The question arises to what extent the security and safety measures taken are actually suitable for coping with the threat of ransomware.

Vulnerability to WannaCry, Petya, and their relatives apparently stems from unpatched operating systems. There are many use cases where, by definition, patches reach production environments only at long intervals. It does not even take the latest zero-day exploits and fast-acting malicious energy for such software to seize these systems; exploiting more or less current gaps, for which patches have been available for weeks or months, is enough. An IT department running 7×24 operations that (a) schedules regular maintenance windows perhaps twice a year and (b) does not consider the current security patches critical enough to justify an ad-hoc business interruption remains open to such attacks.

7×24 operations vs. permanent patching // warding off attacks vs. leaning back relaxed

Practice offers a simple and, above all, more or less universally applicable solution: architecture- and infrastructure-independent mirroring at the database and application level.

This technology, often dismissed as obsolete in the age of virtual machines, storage mirroring, and various cluster options, can play to one of its inherent strengths: the logical independence of the underlying system environments.

Attackers come away empty-handed, despite a successful attack

The Libelle BusinessShadow mirroring solution works completely independently of the production environment: no shared server, no shared storage, in short, shared nothing. Thanks to the mirroring, the current data is constantly physically present on the mirror side. The mirror systems, however, can be maintained independently of the productive environments and kept at the latest patch level.

If a ransomware attack on the productive environment succeeds, e.g. due to its low patch level, operations simply switch over to the fully patched mirror system and continue there within a few minutes. The result: the attack was not fended off, but struck empty space.

Prevention of classical data corruption, too

The data mirroring described above is asynchronous. This has several advantages over the synchronous mirroring commonly used in storage mirroring and clusters. For one, maintenance windows on the mirror can be run at leisure, since, in contrast to synchronous mirroring, there is no two-site commit.

For another, the IT department escapes the synchronous trap: with synchronous mirroring, a logical error that corrupts the productive database automatically corrupts the mirror database as well. Complete ransomware encryption or deletion, virus attacks, faulty application activity, incorrect data imports, malicious manual actions by internal or external users, and the like are logical errors that, in the worst case, bring the business process to a halt. And there is an even worse case than the worst case: continuing to work with corrupted data, thereby incurring additional economic cost or even damage to the public image.

Asynchronous data/application mirroring can define an arbitrary time offset ("time funnel") between the productive and the mirror system: the current productive data is already physically present on the mirror system, but is held in the temporary space of the time funnel. The "real" logical activation in database and file systems only takes place after the defined time offset has expired. From a logical point of view, the mirror system thus permanently runs behind the production system by precisely this defined offset. Physically, however, it already stores the entire delta between the physical ("now") and the logical time stamp ("x minutes/hours ago").
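The time funnel principle can be illustrated with a small, purely conceptual sketch. This is not the product's actual API; the class name, the data model, and the delay handling are invented for illustration, assuming changes arrive as timestamped key/value deltas:

```python
from collections import deque

class TimeFunnelMirror:
    """Toy model of a delayed-apply ("time funnel") mirror:
    changes arrive physically at once, but are applied
    logically only after a configurable delay."""

    def __init__(self, delay_seconds):
        self.delay = delay_seconds
        self.buffer = deque()   # physically present, not yet applied
        self.applied = {}       # logically active state on the mirror

    def receive(self, key, value, now):
        # Physical arrival: the delta is stored immediately.
        self.buffer.append((now, key, value))

    def apply_due(self, now):
        # Logical activation: apply only changes older than the delay.
        while self.buffer and now - self.buffer[0][0] >= self.delay:
            _, key, value = self.buffer.popleft()
            self.applied[key] = value

mirror = TimeFunnelMirror(delay_seconds=600)   # mirror lags 10 minutes
mirror.receive("invoice:42", "paid", now=0)
mirror.apply_due(now=300)    # only 5 min elapsed: nothing applied yet
print(mirror.applied)        # {}
mirror.apply_due(now=600)    # delay expired: change becomes visible
print(mirror.applied)        # {'invoice:42': 'paid'}
```

The point of the buffer/applied split is exactly the one made above: the delta is physically on the mirror the whole time, while the logically visible state lags behind by the configured offset.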

If a logical error now occurs in the productive environment, the organizationally responsible instance decides on the switchover. Depending on company structure and processes, this might be the application owner, the DR representative, or IT management.

Now the best possible point in time is determined and activated on the mirror system, usually the latest point in time before the data corruption. The time funnel supports this with the ability to activate any point in time within the time offset. The database or application on the mirror system is activated and goes into operation with a consistent set of data. Users and other accessing applications log in again and continue working with correct data.
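Picking the activation point can be sketched conceptually as follows. Again, the function name and the delta format are invented for illustration and are not the product's API; the assumption is that the funnel's buffered deltas carry timestamps and that the corruption time is known:

```python
def pick_activation_point(buffered_deltas, corruption_ts):
    """Replay the funnel's buffered deltas up to, but not including,
    the detected corruption time.

    buffered_deltas: list of (timestamp, key, value) tuples.
    Returns (activation timestamp, consistent state at that time)."""
    state = {}
    activation_ts = None
    for ts, key, value in sorted(buffered_deltas):
        if ts >= corruption_ts:
            break               # stop before the corrupting change
        state[key] = value
        activation_ts = ts
    return activation_ts, state

deltas = [
    (100, "order:7", "open"),
    (200, "order:7", "shipped"),
    (300, "order:7", "ENCRYPTED!"),   # ransomware hits at t=300
]
ts, state = pick_activation_point(deltas, corruption_ts=300)
print(ts, state)   # 200 {'order:7': 'shipped'}
```

The mirror is activated with the state as of t=200, the latest consistent point before the corruption; everything after it stays in the funnel and is discarded.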

One technology – further application scenarios: logical vs. physical vs. infrastructural challenges

Another advantage of asynchronous data and application mirroring: possible latencies are not significant, because the production system does not have to wait for the mirror system to commit. This makes practicable and economically attractive DR concepts possible even with long distances, small network bandwidths, and low QoS requirements for the network lines between the systems.
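Why latency does not matter can be shown with a minimal sketch, assuming the common pattern of a local commit plus a background shipping thread (the queue, names, and simulated WAN delay are invented for illustration, not the product's mechanism):

```python
import queue
import threading
import time

ship_queue = queue.Queue()   # committed changes awaiting shipment
mirror_log = []              # what has arrived on the mirror so far

def shipper():
    # Background thread: forwards changes to the (distant, slow) mirror.
    while True:
        change = ship_queue.get()
        if change is None:
            break
        time.sleep(0.05)     # simulated WAN latency per change
        mirror_log.append(change)
        ship_queue.task_done()

threading.Thread(target=shipper, daemon=True).start()

start = time.monotonic()
for i in range(10):
    ship_queue.put(f"change-{i}")   # "commit" returns without waiting
producer_time = time.monotonic() - start

ship_queue.join()                   # mirror catches up eventually
print(f"producer blocked for {producer_time:.3f}s; "
      f"mirror has {len(mirror_log)} changes")
```

The producer side finishes in microseconds regardless of the simulated WAN delay; with a synchronous two-site commit, each change would instead have blocked for the full round trip.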

The mirror systems also do not have to reside in the company's own data centers; they can, for example, be operated as a service at an arbitrarily distant "friendly company" or service provider, which is especially common in the SMB segment. Whether this complies with the new GDPR is, indeed, an open question.

The distance between the productive and the secondary location is thus no longer limited by dark fiber, campus, or metro cluster technologies, which usually span only a few kilometers. Asynchronous mirroring can extend even across different tectonic plates, depending on business needs and corporate structure. This opens up ways to rethink DR concepts for widespread disasters and to keep IT operations running nationwide, regionally, or even worldwide.

In addition, architecture-independent data/application mirroring solves the "single point of failure" dilemma: beyond the already recommended shared-nothing architecture, different infrastructure architectures are also supported across the participating environments. Besides technological interests, economic ones should be considered here. In homogeneous architectures, the maintenance effort is lower, but the risk of faulty drivers, firmware patches, or controller software radiates not just to individual environments but to all of them.

Commercial considerations also play a role with regard to the requirements of productive and emergency environments: in many cases it is sufficient if only the productive system is designed for permanent high-performance operation. The failover system can certainly be sized smaller: it just has to be "good enough" for the case that hopefully never occurs, and if it does, then only temporarily. In practice, these considerations often lead to the "old" productive system being reassigned as the new secondary system, following the usual hardware cycle.

Thus, many companies opt for a middle course between homogeneous and heterogeneous architecture, in which at least two hardware standards are defined, often with components from different manufacturers.