We are acutely aware of the fragility of the environment and take our corporate social responsibility seriously. Much of what we do consumes electrical energy: networks, data storage, cooling for data centres, CPUs and memory banks, to name a few. All of that energy ends up as heat while producing the resources, such as computational cycles, that our work needs, and the amount required quickly adds up. In this era of consumer cloud services, where you never see the computers it all runs on, it is easy to forget this cost exists when you use your online personal storage or email. Yet it is there, using energy just the same.
Data centres consume one primary resource: power. The servers and other computing hardware draw it, and, as mentioned above, the waste product is large amounts of heat. These facilities are designed for high density: a typical rack holds around 42 servers stacked on top of each other, each many times more powerful than the device you are using to access this site, so the heat output is tremendous. To stop the temperature exceeding the working threshold inside the data centre, operators rely primarily on air conditioning. Cooling on this scale is itself a feat of engineering and also requires a large amount of electricity.
Online services may be out of sight, but their environmental impact can be huge, especially if it is not understood and managed. In the UK, electricity is still generated largely from fossil fuels: recently 4-5% of output has come from coal, and gas can account for as much as 40%. You can see live visualisations and statistics on UK electricity generation here:
While environmental protection efforts are improving, it is essential to understand where the electricity comes from. Taking all these factors into account, VideoLock worked to architect the best solution we could while remaining cost efficient, so that additional savings could be added to the company's own contribution. VideoLock has invested in carefully selected carbon offsetting and environmental projects, with the aim of being carbon neutral or better for the power we use and the travel the business requires.
Our infrastructure has complex requirements that change rapidly and can demand substantial amounts of specific resources, such as computational power or ample specialised storage for cumbersome processes, for short periods without warning. To meet them we have adopted technologies that serve our environmental goal while also meeting the requirements of the work we must do. As early adopters of the containerisation technology Docker, we were able to lay our architectural foundation. VideoLock found that with our own hardware handling continual requirements, it can be hard to get efficient use out of each machine: energy is always going into blocks of resource you essentially cannot use. We took a further step with Docker, using it to run any codebase on the same machines transparently while implementing a FaaS (Functions as a Service) approach, breaking those blocks down to a more granular size. The release of OpenFaaS, which structures a system in much the same way as we had done earlier, allowed us to make each granular module in our system genuinely serverless.
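To illustrate the FaaS approach, a function module can be as small as a single entry point the platform invokes with an event payload. The sketch below follows the general shape of an OpenFaaS-style Python handler; the function name and the task it performs are illustrative, not VideoLock's actual code.

```python
# A minimal FaaS-style function module, in the spirit of an OpenFaaS
# Python handler. It holds no long-lived state, so the platform can
# start it only when work arrives and stop it again afterwards.

def handle(req: str) -> str:
    """Entry point the platform calls with the event payload."""
    # Illustrative task: normalise an identifier passed in the request.
    return req.strip().lower()
```

Because each module is this small and stateless, many of them can share the same machines, and none needs to be running while there is no work for it.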
Often described as serverless architecture with scalability down to zero, the framework waits for a job or event and, on receiving one, starts the process and passes it the work. Together, this means we achieve very high levels of continual usage across our dedicated hardware, delivering the most output for the energy it consumes, while making a significant saving against the same resource purchased from AWS as on-demand EC2 hours. That saving is added to the money already put into offsetting.
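Scale-to-zero can be sketched as a dispatcher that creates a worker only when a job arrives and releases it once the queue drains. This toy model is an assumption about the general mechanism, not VideoLock's implementation; real platforms such as OpenFaaS do this by starting and stopping containers.

```python
from queue import Queue


class ScaleToZeroDispatcher:
    """Toy model of scale-to-zero: a worker exists only while jobs do."""

    def __init__(self, worker_factory):
        self.worker_factory = worker_factory  # creates a worker on demand
        self.queue = Queue()
        self.worker = None                    # None means scaled down to zero

    def submit(self, job):
        self.queue.put(job)
        if self.worker is None:               # cold start on the first job
            self.worker = self.worker_factory()
        result = self.worker(self.queue.get())
        if self.queue.empty():                # no pending work: release worker
            self.worker = None
        return result
```

Between jobs the worker slot is empty, which is what lets the same hardware serve other modules instead of idling.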
For burst requirements we use AWS, managed by our systems to maximise the use of our own hardware described above. Our work requires an extensive range of technologies, much of it very early in its life cycle, but the same architectural principle applies. This has allowed us to minimise overhead capacity, whether standby for bursts or the overhead a system needs simply to run, while remaining flexible enough to tackle our work and performance demands. Where possible, bursts are dampened by deferring processes that can run later without affecting services, in favour of whatever process is most urgent. This helps the hosting provider (in our case, the main on-demand supplier is AWS) see a lower peak in demand, so they face fewer unexpected events and need less hardware hot on standby for VideoLock and the other clients they serve in the same facilities.
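The burst-dampening idea above can be sketched as a scheduler that runs urgent jobs immediately and holds deferrable ones in a backlog until a quieter moment. The class and priority scheme below are hypothetical, a minimal sketch of the technique rather than our production code.

```python
import heapq


class BurstSmoother:
    """Toy scheduler: urgent jobs run now; deferrable jobs wait in a
    backlog until spare capacity appears, flattening demand peaks."""

    def __init__(self, capacity):
        self.capacity = capacity  # max jobs per scheduling tick
        self.backlog = []         # heap of deferred (priority, job) pairs

    def tick(self, new_jobs):
        """new_jobs: iterable of (priority, job); lower = more urgent."""
        for item in new_jobs:
            heapq.heappush(self.backlog, item)
        run = []
        while self.backlog and len(run) < self.capacity:
            run.append(heapq.heappop(self.backlog)[1])
        return run  # anything beyond capacity stays in the backlog
```

During a burst the scheduler never asks for more than `capacity` at once, so the provider sees a flat demand curve; deferred work is picked up on later ticks when the queue is quiet.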
Staff have flexible and home working where possible to reduce the resources required for transport, and our clients are encouraged to work with us using tools we have tried and tested, so more can be done without meeting face-to-face.