SCALARA Development Story

Cloud-Services, Lock-In and the approach we took

11.10.2022 · Reading time: 10 minutes

As a company, and especially as a startup, there is always the question “Where do I start with my infrastructure?”. Modern IT infrastructure offers a large variety of possible solutions and providers. The first thing everybody talks about and considers is “go for the cloud”. What is notable is that this seemingly simple advice is more complex than you might think. You can divide cloud migrations and installations into two separate approaches:

The Infrastructure as a Service

This is the most commonly encountered scenario for established corporations. They exchange the infrastructure that currently sits on their own premises for infrastructure from one or multiple cloud providers. This scenario is mostly driven by two principles:

  1. Reduce the TCO (Total Cost of Ownership)
  2. Optimize load scenarios and/or idling hardware

These companies have usually already started with a larger virtualization approach. The problem with this scenario is the missing link to cloud services, which keeps the company from gaining the full advantage of cloud technologies. If you step back, you see that this is really just "outsourcing" your infrastructure.

Normally you see companies follow a multi-pronged strategy that combines this approach with the following one:

The Service as a Service

This scenario relies on the fact that you can rethink old, settled infrastructure and hosting concepts in an environment that offers practically limitless scalability. The promising part is that you no longer think in hardware. The more cloud-native generation does not think about CPUs, RAM and storage at all. They use services. Services - at least the good ones - are completely hardware agnostic. The service is used, and if it needs more, it gets more. No complicated hardware migration needed. No specialist for server and Unix administration needed. No expensive and complicated infrastructure monitoring needed. None of that.

Nowadays companies with existing infrastructure are trying to migrate their tools and software so that they can use cloud-native services that replace existing solutions. One example is Amazon's S3 bucket. You can either keep operating your old sFTP/SMB/SAN solution, or you can simply upload your files into a managed storage with metadata and IAM access management. All pay per use!
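To give a concrete feel for this (a minimal sketch using the AWS SDK for JavaScript v3; the bucket name, key prefix and metadata are purely illustrative), uploading a file into such a managed storage with metadata takes only a few lines, and access is governed by IAM policies instead of server accounts:

    import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
    import { readFile } from "node:fs/promises";

    const s3 = new S3Client({ region: "eu-central-1" });

    // Upload a local file into a managed bucket and attach custom metadata.
    // Bucket name and key prefix are placeholders for illustration only.
    async function uploadDocument(path: string): Promise<void> {
      const body = await readFile(path);
      await s3.send(new PutObjectCommand({
        Bucket: "my-company-documents",
        Key: `documents/${path.split("/").pop()}`,
        Body: body,
        Metadata: { uploadedBy: "billing-service" },
      }));
    }

There is no server to patch behind this call: storage, replication and access management are the provider's problem, and you pay only for what you store and transfer.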

Cost, Scaling and Transparency

Both of the above scenarios are viable and offer a great amount of saving and scaling potential. One of the biggest downsides mentioned by a lot of “old” infrastructure managers is the lack of cost transparency that comes with extensive service and IaaS usage. The current big players like Google, Amazon and Microsoft tend to have such complex billing conditions that you need special calculators, and in most cases even their own training courses, to get an idea of the costs or the implications of extensive usage.

This is where the lock-in already starts. To really get an idea of your personal way forward, you need a deeper understanding of the cloud provider's services, costs and scaling capabilities. For these kinds of tasks you normally hire someone who is trained in the portfolio of a specific cloud provider, or consulting companies that offer some kind of guideline and checklists for them.

In several migration projects in the past I observed that the result of this was highly biased decision-making.

BIAS is bad!?

In my personal opinion this highly specialized approach is not the best. Since everything is online and openly accessible, you have the chance to use the best services, functions, infrastructure and/or locations that fulfill your needs. So why not use an AWS S3 bucket together with Google Firebase and Azure AD for your mobile application? If you get creative, you can benefit from the fierce competition between the cloud providers. This is also backed by research from Deloitte:

Another survey from the same month found that 97% of IT managers planned to distribute workloads across two or more clouds in order to maximize resilience, meet regulatory and compliance requirements, and leverage best-of-breed services from different providers.

Deloitte, 2020 Report

The truth, on the other hand, is that we - despite my opinion - did the opposite. But there are reasons for that.

Lock-in vs. slow and costly

As a tech startup your approach to this topic should be a bit different. In general there are three main questions that should be the foundation of your decision-making.

Questions that Matter
  1. What are my skills?
    Do I have the skills needed to do certain tasks either in the cloud or outside of it?
  2. Do I need speed or full control?
    Everything I do is created by me and I know how it works vs. I want to start as fast as possible.
  3. How do I want to scale?
    Scale by widening my range of services (horizontal scale) or by the sheer number of customers (vertical scale)?

Based on the answers you have certain outcomes and possibilities. The biggest mistake you can make is to base your decision on the cost side alone. In my former company I made exactly this mistake. To get an SQL-based database, web server, build server and storage on AWS or another cloud platform, you can easily start at up to 500€/month without anything really running there yet. Especially for test and development environments this seems like a lot of money. So I decided: “Hey, see this hosted root server? Only 99€ per month. Fully stocked with enough power to host all the applications we need.” And this was a pretty costly mistake. We created a foundation that has to be maintained by a professional, or at least somebody with experience in administrative work. But hey! We have developers and they want to do DevOps. So they can do it too.

Long story short: in a team of 10 developers, we had a developer spend at least 1.5 person-days per month maintaining the server to keep our environments running, troubleshoot problems, and update or implement newly needed features.

Needless to say, this was way more costly than just buying the services.
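To put rough numbers on it (a back-of-the-envelope estimate; the day rate is an assumption, not an exact figure from our books): at a fully loaded developer cost of around 600€ per day, 1.5 person-days per month is roughly 900€ of maintenance effort on top of the 99€ server - already more than the ~500€/month the managed services would have cost, before even counting the product work that did not get shipped in that time.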

Your company's value is not its infrastructure! You should not let your best assets - and for a software company these are the developers - work on raising the value of the infrastructure. They should ship your product. The infrastructure is a helper for your development. Using your developers to boost speed in product development benefits your company far more than keeping the upfront cost as low as possible.

The decisions we took for SCALARA

In our current case we went for the following setup and decisions.

Cloud Provider

  1. The first decision is explained quickly: we knew AWS from past projects, so we took it.
  2. For our graph-based software (nodes and edges) we need a graph database. Since Google has no graph DB and Amazon has its proprietary solution (Neptune), this decision was not simple. What eventually made the difference was how easily AWS CDK lets us set up and maintain the database infrastructure - so the decision was based less on DB functionality and more on ease of use (a minimal sketch of such a setup follows this list).
  3. Lambda functions → period 🙂.
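To give an idea of what point 2 looks like in practice (a minimal, hypothetical CDK v2 sketch - the construct names, instance size and VPC layout are illustrative, not our actual stack), a managed Neptune graph database is declared in a few dozen lines and deployed together with the rest of the infrastructure:

    import { Stack, StackProps } from "aws-cdk-lib";
    import * as ec2 from "aws-cdk-lib/aws-ec2";
    import * as neptune from "aws-cdk-lib/aws-neptune";
    import { Construct } from "constructs";

    // Hypothetical stack: shows how little code a managed graph database needs.
    export class GraphDatabaseStack extends Stack {
      constructor(scope: Construct, id: string, props?: StackProps) {
        super(scope, id, props);

        const vpc = new ec2.Vpc(this, "GraphVpc", { maxAzs: 2 });

        const subnets = new neptune.CfnDBSubnetGroup(this, "GraphSubnets", {
          dbSubnetGroupDescription: "Subnets for the Neptune cluster",
          subnetIds: vpc.privateSubnets.map((s) => s.subnetId),
        });

        const cluster = new neptune.CfnDBCluster(this, "GraphCluster", {
          dbSubnetGroupName: subnets.ref,
        });

        new neptune.CfnDBInstance(this, "GraphInstance", {
          dbInstanceClass: "db.t3.medium",
          dbClusterIdentifier: cluster.ref,
        });
      }
    }

No hardware sizing discussions, no operating system, no backup scripts: the cluster is scaled and patched by the provider.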

Architecture

As of now we use AWS services only. We do not have a single service that runs on “bare metal”: not one Terraform installation, not one manually created server, nothing in this regard.

SCALARA AWS Architecture Overview

The whole infrastructure is wired together while deploying the application, with Amazon's own CDK (Cloud Development Kit) as Infrastructure as Code. The one difference is that we rely on GitHub Actions to build and deploy the application, since the AWS options did not fit our needs in terms of source-code integration and outside accessibility.

This now enables us to spawn, create and redeploy complete environments automatically when pushing code or accepting pull requests - from development through staging to production. And with serverless functions we even skipped the currently most common approach of complex container environments.
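As a sketch of what this looks like in the CDK app itself (hypothetical names and a single Lambda-backed API for brevity; our real stacks contain considerably more), the stage is simply passed in from the pipeline so that each push can deploy its own isolated environment:

    import { App, Stack, StackProps } from "aws-cdk-lib";
    import * as lambda from "aws-cdk-lib/aws-lambda";
    import * as apigw from "aws-cdk-lib/aws-apigateway";
    import { Construct } from "constructs";

    interface ApiStackProps extends StackProps {
      stage: string; // e.g. "development" | "staging" | "production"
    }

    class ApiStack extends Stack {
      constructor(scope: Construct, id: string, props: ApiStackProps) {
        super(scope, id, props);

        // One serverless function per stage instead of a container environment.
        const handler = new lambda.Function(this, "ApiHandler", {
          runtime: lambda.Runtime.NODEJS_18_X,
          handler: "index.handler",
          code: lambda.Code.fromAsset("dist/api"),
          environment: { STAGE: props.stage },
        });

        new apigw.LambdaRestApi(this, "Api", { handler });
      }
    }

    // The stage is injected by the CI job (e.g. a GitHub Actions workflow),
    // so "cdk deploy -c stage=staging" spins up a complete environment.
    const app = new App();
    const stage = app.node.tryGetContext("stage") ?? "development";
    new ApiStack(app, `scalara-api-${stage}`, { stage });

The GitHub Actions workflow then only has to check out the code, run the build, and call cdk deploy with the right stage context.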

What are our downsides?

The first and biggest pain point is the testing infrastructure. In our normal development lifecycle, “local” development on the developer's machine is the first step. The problem is that some of the cloud services are not available as local packages or containers that can be used for testing. In addition, they require other cloud infrastructure to work properly. For most database systems you will find some kind of substitute. The story changes when you look at the authentication side. If your application wants to use, for example, Amazon Cognito, you face the challenge of making it publicly available and allowing “localhost” connections while multiple developers share one instance - so the possibility of destroying a colleague's local test setup exists at any time.
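For the graph database we can at least approximate the cloud service locally. A sketch of that pattern (assuming the Gremlin query interface and the JavaScript Gremlin driver, with a local Gremlin Server container as the stand-in; the endpoint and label are illustrative):

    import * as gremlin from "gremlin";

    // Tests talk to a local Gremlin Server (e.g. started via Docker),
    // while deployed code talks to the managed Neptune endpoint.
    const endpoint = process.env.GRAPH_ENDPOINT ?? "ws://localhost:8182/gremlin";

    const connection = new gremlin.driver.DriverRemoteConnection(endpoint);
    const g = gremlin.process.AnonymousTraversalSource.traversal().withRemote(connection);

    // Example traversal that runs identically against both backends.
    async function countUnits(): Promise<number> {
      const result = await g.V().hasLabel("unit").count().next();
      return Number(result.value);
    }

For authentication there is no such drop-in substitute, which is exactly where the pain described above starts.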

The second thing is the complex “spaces” and organization structure. AWS being so big and global has its perks for certain scenarios, but at a small scale it boosts the complexity a lot. For our more or less industry-standard staging concept of experimental → staging → production, the separation of different environments and spaces creates additional complexity in the automated build, setup and deployment scripts.

Wrap up

There is basically no general rule of thumb. In every case you have to look at your own situation and possibilities. Never underestimate the value of technicians and human resources for these decisions. In our case we went the full lock-in route. If we scale like we plan to, we can evaluate and change this in the future. But without a successful product, the best designed, constructed and planned cloud infrastructure is useless.
A nice benefit for startups and small companies is that most cloud providers offer free tiers for their services.

An article by

Alexander Dziendziol-Dickopf

CTO

Alex has been active in the IT industry for 20 years. In a wide variety of roles he has been supporting software development teams in corporate projects for 12 years. In 2016 he successfully founded a Cologne-based software company, whose management he is now handing over. As an enterprise architect and program manager he has experience leading multi-professional IT teams both onshore and offshore. At SCALARA he now contributes all of his knowledge and expertise to scale our vision technically. With his experience in scaling enterprise infrastructures, he designs the foundation for our vision of leading the digital real estate industry.
