News from iEDI

iEDI's development department creates an environmentally friendly server

iEDI

Copenhagen

18/10/2020

At iEDI, we have a strong focus on energy efficiency, and this year the challenge was to build an environmentally friendly server for running business-oriented applications.

“We must create the best possible server, and the machine must use a maximum of 30 W of power.”

- This was the task the CTO, Jens Kirkeby, gave the developers at iEDI.

As the development department set out to achieve this goal, the choice quickly fell on SoC-based single-board computers.
A SoC (System on a Chip) integrates the main components of a computer, such as the CPU, GPU and memory controller, on a single chip, so the whole machine fits on one small board.

We chose the Raspberry Pi 4 Model B with 8 GB of onboard RAM, which is important for application servers.
We have experience with the Raspberry Pi from previous years, when we developed robots running on it.

Hardware

Four Raspberry Pi computers were mounted in a 1U, 19" rack unit, so the server is not only power-friendly but also space-friendly: it can be mounted in a standard rack cabinet.

In addition, we chose to install four Western Digital Red NAS SSDs as storage, supplementing the SD cards the Raspberry Pi normally boots from.

Our 16 GB Kingston SD cards were replaced with 32 GB SanDisk Extreme Pro A1 cards, which provide roughly 20% better performance.
Two 5 A power supplies were mounted on the back of the rack unit, so we did not have to use external power bricks, which would have looked a bit unprofessional.

Finally, we finished off the solution with four CPU heatsinks and two small fans to draw hot air out of the cabinet.

Software

Our choice of software fell on Kubernetes, which we have good experience with from our existing iEDI Cluster and from customers we help with cloud technology.

The big challenge was to get Kubernetes to run on the ARM-based CPUs that the Raspberry Pi uses. It took a few tries with different operating systems.

We had success with K3s, Rancher's lightweight Kubernetes distribution, which also gave us the advantage that our newly developed server could be enrolled in our existing, very large worldwide cluster.

Test

After the hardware and software were tuned, it was time to test.

To stress the system a bit, we chose to test with a Java application, because Java is often very resource-intensive.
We deployed our PEPPOL Access Point solution, which consists of a Java-based AS2/AS4 server and a Python FastAPI REST API.
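To illustrate the REST side of such a deployment, here is a minimal health-check endpoint sketched with Python's standard library alone (the actual iEDI API uses FastAPI; the `/health` route, port choice and handler names here are our own assumptions, not iEDI's code):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """Tiny REST-style handler exposing a single /health endpoint."""

    def do_GET(self):
        if self.path == "/health":
            # Report a JSON status, as a typical liveness probe would expect.
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the example quiet

def start_server(port=0):
    """Start the server on a background thread; port 0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), HealthHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

A Kubernetes liveness probe pointed at such an endpoint lets the cluster restart the pod automatically if the service stops answering.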

The deployment worked fine and initially pulled a lot of processing power, but after two minutes the solution settled down and we took some measurements.

The server ran the Java-based PEPPOL Access Point idling at 6.8 watts… We all stared at each other and said:

“WTF, is this true?”

During data input/output, power consumption increased to 8-10 W, but dropped back to 6.8 watts after processing.

After this surprise, we tried to stress the server as much as possible.
With full CPU load on all four boards, the server used a full 22 watts ← yes, you read that right, that's less than a night light.

Mission accomplished!

Result

A server with 4 CPUs of 4 cores each gave us an environment of 16 cores and 32 GB of RAM.

With the 4 x 32 GB SD boot drives and 4 x 500 GB SSD NAS drives, we ended up with roughly 1.9 TB of disk storage.
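The arithmetic behind that figure checks out: 4 x 32 GB plus 4 x 500 GB is 2,128 decimal gigabytes, which lands at roughly 1.9 TB once reported in the binary units (TiB) most operating systems use:

```python
# Total raw capacity: four SD boot cards plus four NAS SSDs.
sd_gb = 4 * 32      # 128 GB of SD boot storage
ssd_gb = 4 * 500    # 2000 GB of SSD storage
total_gb = sd_gb + ssd_gb

# Drives are sold in decimal GB (10**9 bytes); operating systems
# usually report capacity in TiB (2**40 bytes).
total_tib = total_gb * 10**9 / 2**40

print(total_gb)             # 2128
print(round(total_tib, 2))  # 1.94
```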

Kubernetes gave us the ability to run the same applications as our large cloud cluster, and at a satisfactory speed.

The server uses a maximum of 22 watts and idles at under 7 watts, 24x7.
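Those figures translate into a modest annual energy budget. A quick back-of-the-envelope calculation, assuming the measured 6.8 W idle draw holds around the clock:

```python
HOURS_PER_YEAR = 24 * 365  # 8760 h, ignoring leap years

def annual_kwh(watts, hours=HOURS_PER_YEAR):
    """Energy in kWh for a constant power draw sustained over a year."""
    return watts * hours / 1000

idle_kwh = annual_kwh(6.8)  # idle draw measured in the test
peak_kwh = annual_kwh(22)   # worst case: full CPU load 24x7

print(round(idle_kwh, 1))   # 59.6
print(round(peak_kwh, 2))   # 192.72
```

Even under permanent full load, the machine would use less than 200 kWh a year.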

The purchased parts came to about 6,400 DKK, of which approx. 2,000 DKK was due to our choice to mount the extra 4 x WD SSDs for storage.

Our next project is to scale the solution up to 64 cores and install powerful database servers and see what it can do.