High availability through low latency

January 31st, 2018

Successful conclusion: in the research project fast realtime, Cloud&Heat developed an optimized system for cross-location data replication between data centers. Even in the event of a server failure, data availability from a redundant location can be guaranteed within a few milliseconds. The project, funded under the Federal Ministry of Education and Research program “Twenty20 – Partnership for Innovation”, is celebrating its successful conclusion today.

Smart homes, the Internet of Things, Industry 4.0: the growing role of technology in professional and everyday life has increased the demands on IT infrastructure. Fail-safety and backup systems are becoming ever more relevant, because an outage can cause five-figure costs within a very short time. As part of the “fast realtime” research project, which was completed on January 31st, 2018, Cloud&Heat Technologies has optimized data availability in the event of a failure. The Dresden-based company has developed a system of cross-location data replication that ensures high availability through very fast transmission.

According to a study by Veeam Software, the IT infrastructures of companies and public authorities fail on average 27 times a year. The causes: power failures, defective components, human error, hacker attacks, or disruptions of the internet connection. Usually only the failures of large companies reach the public, as happened recently with Spiegel Online, Amazon, and British Airways.

High availability of data plays a particularly important role for critical processes and networks. After all, about 96 percent of all medium-sized and large companies worldwide rely on digital data processing and seek to optimize their business processes with innovative technologies. As a result, more and more devices and sensors are connected to the internet. According to Gartner, about 4.9 billion connected devices already control production and manufacturing processes in order to plan them better and make them more cost- and time-efficient. Disruptions in the IT infrastructure delay production and supply chains, with potentially disastrous consequences, which is why high availability of data is immensely important here. Compute-intensive workloads, such as those found at large research institutions like CERN, also require stable digital infrastructures: huge amounts of experimental data are collected, analyzed, and interpreted there every day, and they have to be available around the clock.

Backup data centers as an emergency measure

To guard against such an emergency, and thus the temporary loss of important data, authorities and companies with extensive IT operations and high availability requirements in particular consider an additional backup data center. The idea: a second data center can take over the entire IT operation if the primary one fails. Besides the optimal location and the distance between the data centers, the speed with which the backup data center takes over is of particular importance. Moreover, the data must be constantly synchronized at both locations. Low latency plays a key role in keeping the data identical at both sites: it enables continuous data reconciliation, replication, and synchronized data storage. Latency refers to the delay that data packets experience as they traverse a network. Keeping this delay very low (“low latency”) is important in order to bring the IT infrastructure back into production as quickly as possible in an emergency.

Real-time synchronization of only the relevant changes

The majority of replication programs always copy files completely, even if only a small portion of a file has changed. Large files with small changes in particular increase the network load enormously, and above all unnecessarily. “If a data center fails, the data to be replicated may not yet be fully synchronized at the backup site. Solving this problem was part of our research,” explains Marius Feldmann of Cloud&Heat Technologies. To this end, the Dresden-based company has developed a mechanism that first divides files into so-called blocks. The smaller these blocks are, the less data has to be sent over the network during replication, which increases the speed of data transmission. When the software detects a change, it replicates only the changed blocks. “With short test intervals and the smallest possible amount of data, we can even reduce the times of conventional programs by an average of 30 to 70 percent,” Feldmann continues.
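The press release does not describe the implementation, but the principle can be illustrated with a minimal Python sketch. It assumes fixed-size blocks and SHA-256 fingerprints, both hypothetical choices; the project’s actual block size, hashing scheme, and transport are not published. Only blocks whose fingerprints differ from the previous synchronization would be sent to the backup site.

```python
import hashlib

BLOCK_SIZE = 4096  # hypothetical block size; the project's actual value is not published


def block_hashes(path):
    """Split a file into fixed-size blocks and fingerprint each block."""
    hashes = []
    with open(path, "rb") as f:
        while block := f.read(BLOCK_SIZE):
            hashes.append(hashlib.sha256(block).hexdigest())
    return hashes


def changed_blocks(old_hashes, new_hashes):
    """Return the indices of blocks that differ from the previous sync.

    Only these blocks need to cross the network; unchanged blocks are
    skipped entirely. (Truncated files would need extra handling, which
    this sketch omits.)
    """
    changed = [
        i for i, (old, new) in enumerate(zip(old_hashes, new_hashes))
        if old != new
    ]
    # Blocks appended since the previous sync also count as changes.
    changed.extend(range(len(old_hashes), len(new_hashes)))
    return changed
```

The smaller the block, the less data a single change forces over the wire, at the cost of more fingerprints to compute and compare, which matches the trade-off described above.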

Smart search for the best server location

In addition, Cloud&Heat has developed a geolocation mechanism that lets the user select the server location with the lowest latency. It is important not only to replicate as little data as possible across locations, but also to minimize the time delay in data transmission between the user and the cloud. Improving the latency between sites was also a research goal. The transmission delay between the individual data repositories is minimal when the infrastructure of a data center is distributed across several smaller, locally operated sites. The time needed to synchronize all data from site A at site B, and thus to secure it against a failure of site A, can then be reduced to a few milliseconds. And when data is synchronized and stored close to the customer, transfers also take less time than routing German cloud resources through server farms in the US.
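How such a selection could work is again only sketched here, since the release gives no implementation details. The hostnames below are placeholders, and a single TCP handshake serves as a rough latency probe; a production mechanism would presumably use repeated measurements and the provider’s real site catalog.

```python
import socket
import time

# Placeholder site catalog; real endpoints would come from the provider.
SITES = {
    "dresden":   ("site-dresden.example.net", 443),
    "frankfurt": ("site-frankfurt.example.net", 443),
    "berlin":    ("site-berlin.example.net", 443),
}


def measure_latency_ms(host, port, timeout=2.0):
    """Round-trip time of one TCP handshake to a site, in milliseconds."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000.0
    except OSError:
        return float("inf")  # unreachable sites are never selected


def lowest_latency_site(sites=SITES):
    """Pick the site that currently answers fastest from the user's location."""
    return min(sites, key=lambda name: measure_latency_ms(*sites[name]))


print(lowest_latency_site())
```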

On January 31st, 2018, fast realtime was successfully completed: “Within the scope of the research project, we were able to produce many relevant results that form an important basis for increasing the availability of data centers,” summarizes Feldmann. “We are delighted that Cloud&Heat was able to contribute to such a relevant topic and to enrich ongoing developments in the data center sector.”

About fast realtime

Fast realtime was a basic project within the funded consortium “fast” (fast actuators, sensors and transceivers). Fast started in 2013 and is funded within the BMBF program “Twenty20 – Partnership for Innovation”. The consortium, which involves a total of 80 partners, is dedicated to the future topic of innovative real-time systems. The fast realtime project itself started on February 1st, 2015 and was completed on January 31st, 2018.

About Cloud&Heat Technologies

Cloud&Heat is a provider of OpenStack-based public and private cloud solutions. Since 2012, the company has been operating its own cloud infrastructure distributed across several locations, offering classic cloud computing (IaaS). With the conception, commissioning, and maintenance of tailor-made cloud solutions for companies, Cloud&Heat Technologies rounds out its portfolio with the Datacenter in a Box, responding to the rapidly increasing demand for in-house cloud infrastructures. All of the company’s cloud solutions are based on Cloud&Heat’s own “Datacenter in a Box” server platform, whose innovative hot-water cooling makes it uniquely energy-efficient worldwide. The servers’ waste heat is captured directly at thermal hotspots such as the CPU and RAM, carried away, and can be used for heating buildings and supplying hot water. The energy- and cost-efficient concept has won several awards, including the German Data Center Award in 2015 and 2016.

Franziska Buettner, Marketing Manager at Cloud&Heat.