Automotive

Developing software for connected cars - common challenges and how to tackle them

Automotive is transforming into a hyper-connected, software-driven industry that goes far beyond the driving experience. How do you build applications in such an innovative environment? What are the main challenges of providing software for connected cars, and how do you deal with them? Let’s dive into the process of utilizing the capabilities of the cloud to move automotive forward.

People have always aimed for the clouds. From Icarus in Greek mythology, through the first airplanes and spaceships, to dreams about flying cars – our culture and the history of technology development express a strong desire to go beyond our limits. Although the vision from Back to the Future and other sci-fi movies didn’t come true and our cars cannot be used as flying vehicles, our cars actually are in the cloud.

Meanwhile, the idea of the Internet of Things came true; our devices are connected to the Internet. We have smartphones, smartwatches, smart homes and, as it turns out, smart cars. We are able to communicate with them to gather data or even control them remotely. The possibilities are limited only by hardware, and even that is constantly improving to follow the pace of rapid changes triggered by software development.

Offerings on the automotive market are developing rapidly, with numerous features and experiences promised to the end customer. By using cutting-edge technologies, utilizing cloud platforms, and working with innovative software developers, automakers provide solutions to even the most demanding needs. And while our user experience is improving at an accelerated pace, there is still a broad list of challenges to tackle.

In this article, we dive into the technology behind the latest trends, take into account the most demanding areas of developing software in the cloud, and explain how a proper solution empowers the change that affects us all.

Challenging determinants of the cloud revolution in automotive

Connecting with your car through a smartphone, or utilizing traffic information provided to your vehicle by platforms that accumulate data registered by other drivers, is extremely useful.

Those innovative changes wouldn’t be possible without cloud infrastructure. And as there is no way back from moving to the cloud, the transition creates challenges in various areas: safety, security, responsiveness, integrity, and more.

Safety in the automotive sector

How do you create a solution that doesn’t affect the safety of the driver? When developing new services, you cannot forget about the basics. The infotainment provided in vehicles is more advanced with every new car release and can be really engaging. The amount of delivered information, combined with increasingly larger displays, may lead to distraction and create dangerous situations. It’s worth mentioning that some colors may even impair the driver’s vision!

Integration with the cloud usually enables some remote commands. When implementing them, there are a lot of restrictions that need to be kept in mind. Some of them are obvious, such as not disabling the engine while a car is being driven at 100 km/h, but others may be much more complicated and unseen at first.
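To make this concrete, here is a minimal, hypothetical sketch of such a guard in Java; the class names, the command types, and the speed threshold are illustrative assumptions, not part of any real vehicle platform:

// Hypothetical guard: reject unsafe remote commands based on the current vehicle state.
public class RemoteCommandGuard {

    // Assumed threshold: treat anything above walking pace as "in motion".
    private static final double MAX_SPEED_FOR_ENGINE_SHUTDOWN_KMH = 5.0;

    public void execute(RemoteCommand command, VehicleState state) {
        if (command.type() == CommandType.DISABLE_ENGINE
                && state.speedKmh() > MAX_SPEED_FOR_ENGINE_SHUTDOWN_KMH) {
            // The car is being driven - refuse instead of creating a dangerous situation.
            throw new IllegalStateException("Engine cannot be disabled while the vehicle is moving");
        }
        command.send();
    }
}

// Minimal supporting types for the sketch.
enum CommandType { DISABLE_ENGINE, LOCK_DOORS, UNLOCK_DOORS }

record VehicleState(double speedKmh) {}

interface RemoteCommand {
    CommandType type();
    void send();
}

Less obvious rules follow the same shape: each remote command gets checked against the state the platform knows about before anything is sent to the vehicle.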

Providing security for car owners

Enabling cloud services for your vehicle, despite being extremely helpful in improving your experience, creates another way to break into your car. Everyone would like to open a car without using keys, using a mobile phone, voice, or a fingerprint instead. And as modern and fancy as these solutions seem, there is a big responsibility on the software side to implement them securely.

Responsiveness enabling the seamless user experience

Customer-facing services need to deliver a seamless experience to the end-user. The customer doesn’t want to wait a minute, or even ten seconds, to unlock a car door. These services need to respond immediately or not at all; failing to open the doors just because the system had a ‘lag’ is not acceptable behavior.

Data integrity is a must

Another very important concept associated with providing solutions that utilize cloud technologies is data integrity. Information collected by your vehicle should be useful and up to date. You don’t want a situation where the mobile application says that the car has a range of 100 km, but in the morning it turns out that the tank is almost empty and you need to refuel before going to work.

How to integrate and utilize mobile devices to connect with your vehicle?

When discussing how to use mobile phones to control cars, a very important question arises: how do we communicate with the car? There is no simple answer, as it all depends on the model and version of the car; depending on the provider, vehicles are equipped with various technologies. Some of them are equipped with BLE, Wi-Fi hotspots, or RFID tags, while others don’t offer a direct connection to the car at all, and the only way is to go through the backend side. Most manufacturers expose some API over the Internet without providing a direct connection from the mobile device to the car. In such cases, it’s usually good practice to create your own backend which handles all of that API’s flaws. To do so, your system will need a platform that makes the solution reliable.

When hardware limitations get in the way, there is always the option of equipping the car with a custom device which exposes a proper communication channel and integrates with the vehicle. To do so, it may use the OBD protocol. This gives us full control over the communication part; however, such a solution is expensive and hard to maintain.
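To illustrate the options above, the rough Java sketch below hides both integration paths behind a single abstraction; the interface, class, and record names are hypothetical and only show the idea of wrapping a manufacturer API or a custom OBD device behind your own backend:

// Hypothetical abstraction over the different ways of reaching a vehicle.
public interface VehicleConnector {
    VehicleStatus fetchStatus(String vin);
    void sendCommand(String vin, VehicleCommand command);
}

// Variant 1: no direct connection - go through the manufacturer's backend API
// and shield the rest of the system from that API's quirks.
class ManufacturerApiConnector implements VehicleConnector {
    @Override
    public VehicleStatus fetchStatus(String vin) {
        // call the OEM API here and map its response to our own model
        throw new UnsupportedOperationException("sketch only");
    }

    @Override
    public void sendCommand(String vin, VehicleCommand command) {
        throw new UnsupportedOperationException("sketch only");
    }
}

// Variant 2: a custom device installed in the car (e.g. integrated over OBD).
class ObdDeviceConnector implements VehicleConnector {
    @Override
    public VehicleStatus fetchStatus(String vin) {
        throw new UnsupportedOperationException("sketch only");
    }

    @Override
    public void sendCommand(String vin, VehicleCommand command) {
        throw new UnsupportedOperationException("sketch only");
    }
}

// Minimal supporting types for the sketch.
record VehicleStatus(double rangeKm, boolean locked) {}
record VehicleCommand(String name) {}

The rest of the system depends only on the abstraction, so switching from one communication channel to another does not ripple through every feature.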

Building a platform to solve the challenges

There is no simple answer to how to solve the mentioned challenges and implement a resilient system that delivers all the necessary functionality with the highest quality. However, it’s very important to remember that such a solution should be scalable and utilize cloud-native patterns. When designing a system for connected cars, the natural choice is to go with a microservice architecture. The implementation of the system is one thing, and this topic was partly covered in the previous article, but another very important aspect is the runtime - the platform. Choosing the wrong setup of virtual machines, or having to deploy everything manually, can lead to downtime, and a system that is repeatedly unavailable to the customer can damage your business.

Kubernetes to the rescue! As you probably know, Kubernetes is a container orchestration platform which runs workloads in pods. The platform itself has helped us deliver many features to our clients faster and with ease. Nowadays, Kubernetes is so easily accessible that you can spin up a cluster in minutes using existing service providers like AWS or Azure. It allows you to increase the speed of delivery of new features, as they may be deployed immediately! What’s very important about Kubernetes is its abstraction from infrastructure: a development team with expertise in Kubernetes is able to work on any cloud provider. Furthermore, mission-critical systems can successfully implement Kubernetes for their use cases as well.

Automotive cloud beyond car manufacturers

Automotive cloud is not only the domain of car manufacturers. As mentioned earlier, they offer digital services to integrate with their cars, but numerous mobility service providers integrate with these APIs to implement their own use cases, such as:

  •  Live notifications
  •  Online diagnostics
  •  Fleet management
  •  Vehicle recovery
  •  Remote access
  •  Car sharing
  •  Car rental

The best practices of providing cloud-native software for the automotive industry

Working with the leading automotive brands, we have been engaged in numerous projects meant to deliver innovative applications. Our team has collected a set of helpful practices which make development easier and improve user experience. There are some must-have practices when it comes to delivering high-quality software, such as CI/CD, Agile, DevOps, etc. - they are crucial yet well-known to experienced development teams, and we don’t focus on them in this article. Here we share tips dedicated to teams working on app delivery for automotive.

Containerize your vehicle

One of the things we’ve learned collaborating with Porsche is that vehicles are equipped with ECUs, and installing software on them isn’t easy. However, Kubernetes helps to mitigate that challenge, as we can mock the target ECU with a Docker image running a specialized operating system and install the software directly in it. That’s a good approach to creating an integration environment that shortens the feedback loop and helps deliver software faster and better.

Asynchronous API

In the IoT ecosystem, you can’t rely too much on your connection with edge devices. There are a lot of connectivity challenges, for example, weak cellular coverage. You can’t guarantee when your command to the car will be delivered, or whether the car will respond in milliseconds or at all. One of the best patterns here is to provide an asynchronous API. It doesn’t matter on which layer you’re building your software, whether it’s a connector between the vehicle and the cloud or a system communicating with the vehicle’s API provider. An asynchronous API allows you to limit your resource consumption and avoid timeouts that leave systems in an unknown state.

Let’s take a very simple example of a mobile application for locking the car remotely.

Synchronous API scenario

  1.  A customer presses a button in the application to lock the car.
  2.  The request is sent and waits for a response.
  3.  The request needs to be delegated to the car, which may take some time.
  4.  The backend component crashes and restarts without any knowledge of the previous request.
  5.  The application gets a timeout.
  6.  What now? Is the car locked? What should be displayed to the end-user?

Asynchronous API scenario

  1.  The customer presses a button in the application to lock the car.
  2.  The request is sent and acknowledged immediately.
  3.  The request needs to be delegated to the car, which may take some time.
  4.  The backend component crashes and restarts without any knowledge of the previous request.
  5.  The car sends a request with the command result through the backend to the application.
  6.  The application displays: “Car is locked.”

With an asynchronous API, there’s always a way to resend the response. With a synchronous API, after you lose the connection, the system doesn’t know where to resend the response out of the box. As you can see, the asynchronous pattern handles this case perfectly.
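Below is a minimal Java sketch of the asynchronous flow from the scenario above; the in-memory command store, the class names, and the callback shape are assumptions made for illustration, not a description of any particular vehicle API:

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical asynchronous lock-command service: the caller gets a command id
// immediately, and the final result arrives later as a separate update.
public class LockCommandService {

    enum Status { PENDING, LOCKED, FAILED }

    private final Map<String, Status> commands = new ConcurrentHashMap<>();

    // Steps 1-2: accept the request and return right away instead of blocking.
    public String requestLock(String vin) {
        String commandId = UUID.randomUUID().toString();
        commands.put(commandId, Status.PENDING);
        forwardToVehicle(vin, commandId); // delegation to the car may take a while
        return commandId;
    }

    // Step 5: the vehicle (or its connector) reports the result when it arrives.
    public void onVehicleResult(String commandId, boolean locked) {
        commands.put(commandId, locked ? Status.LOCKED : Status.FAILED);
    }

    // Step 6: the mobile app polls or is notified, and can always re-read the state.
    public Status status(String commandId) {
        return commands.getOrDefault(commandId, Status.FAILED);
    }

    private void forwardToVehicle(String vin, String commandId) {
        // In a real system this would publish a message to the vehicle connector.
    }
}

In a real deployment, the in-memory map would be replaced with durable storage so that a restarted backend can still correlate the vehicle’s result with the original command.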

Digital Twin

A Digital Twin is a virtual model of a process, a product, or a service - in the case of automotive, a virtual representation of the car. This pattern helps ensure the integrity of data and simplifies the development of new systems through its abstraction over the vehicle. The concept is based on storing the actual state of the vehicle in the cloud and constantly updating it with data sent from the car. Every feature requiring some property of the vehicle should be integrated with the Digital Twin, to limit direct integrations with the car and improve the execution time of operations.

Implementing a Digital Twin may be tricky though, as it all depends on the vehicle manufacturer and the API it provides. Sometimes it doesn’t expose enough properties or doesn’t provide real-time updates. In such cases, it may even be impossible to implement this pattern.
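For illustration, here is a minimal in-memory sketch of the pattern in Java; the property names and the update path are assumptions, and a production twin would live in durable, shared storage rather than a map:

import java.time.Instant;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical Digital Twin store: features read the cloud-side copy of the
// vehicle state instead of talking to the car directly.
public class DigitalTwinStore {

    // Assumed, simplified view of the vehicle state.
    public record VehicleTwin(String vin, double rangeKm, boolean locked, Instant updatedAt) {}

    private final Map<String, VehicleTwin> twins = new ConcurrentHashMap<>();

    // Called whenever the car reports fresh telemetry.
    public void applyUpdate(String vin, double rangeKm, boolean locked) {
        twins.put(vin, new VehicleTwin(vin, rangeKm, locked, Instant.now()));
    }

    // Called by any feature that needs a vehicle property - no direct call to the car.
    public Optional<VehicleTwin> read(String vin) {
        return Optional.ofNullable(twins.get(vin));
    }
}

Keeping a timestamp on each update also makes it easy to detect stale data, which ties back to the data integrity concern discussed earlier.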

Software for Connected Cars - Summary

We believe that the future will look more futuristic than we could have ever imagined. Autonomous cars, smart cars, smart homes - every device tries to make our lives easier. It’s not yet known when and how these solutions will fully utilize Artificial Intelligence to make this experience even better. Everything is connected, and the sheer number of connected IoT devices provides us with almost unlimited possibilities.

The automotive industry is currently transforming, and it isn’t only focused on the driving experience anymore. There is a serious focus on connected mobility and other customer-oriented services that enhance our daily routines and habits. However, as software providers, we should keep in mind that automotive is a mature industry. The first connected car solutions were built years ago, and it’s challenging to integrate with them. These best practices should help keep the focus on customer experience. Unreliable systems won’t encourage anyone to use them, and bad reviews can easily destroy a brilliant idea.

The automotive industry is experiencing a challenging transformation. We can notice these changes with every new model of a car and with every new service released. However, to keep up with the pace of the changing world, the industry needs modern technologies and reliable solutions, such as Kubernetes. And on top of that, cloud-native applications - software created with the best practices by experienced engineers who use a customer-first approach.

written by
Adam Kozłowski
written by
Marcin Wiśniewski
Software development

ASP.NET core CI/CD on Azure Pipelines with Kubernetes and Helm

Due to the high entry threshold, it is not that easy to start a journey with Cloud Native. Developing apps focused on reliability and performance, and meeting high SLAs, can be challenging. Fortunately, there are tools like Istio which simplify our lives. In this article, we guide you through the steps needed to create CI/CD with Azure Pipelines for deploying microservices to Kubernetes using Helm charts. This example is a good starting point for preparing your development process. After this tutorial, you should have some basic ideas about how Cloud Native apps should be developed and deployed.

Technology stack

  • .NET Core 3.0 (preview)
  • Kubernetes
  • Helm
  • Istio
  • Docker
  • Azure DevOps

Prerequisites

You need a Kubernetes cluster, a free Azure DevOps account, and a Docker registry. It would also be useful to have kubectl and the gcloud CLI installed on your machine. Regarding the Kubernetes cluster, we will be using Google Kubernetes Engine from Google Cloud Platform, but you can use a different cloud provider based on your preferences. On GCP you can create a free account and create a Kubernetes cluster with Istio enabled (the Enable Istio checkbox). We suggest using a machine with 3 standard nodes.

Connecting the cluster with Azure Pipelines

Once we have the cluster ready, we have to use kubectl to prepare a service account which is needed for Azure Pipelines to authenticate. First, authenticate yourself by including the necessary settings in kubeconfig. All cloud providers will guide you through this step. Then the following commands should be run:

kubectl create serviceaccount azure-pipelines-deploy
kubectl create clusterrolebinding azure-pipelines-deploy --clusterrole=cluster-admin --serviceaccount=default:azure-pipelines-deploy
kubectl get secret $(kubectl get secrets -o custom-columns=":metadata.name" | grep azure-pipelines-deploy-token) -o yaml

We are creating a service account to which a cluster role is assigned. The cluster-admin role will allow us to use Helm without restrictions. If you are interested, you can read more about RBAC on the Kubernetes website. The last command retrieves the secret yaml, which is needed to define the connection - save that output yaml somewhere.

Now, in Azure DevOps, go to Project Settings -> Service Connections and add a new Kubernetes service connection. Choose service account for authentication and paste the yaml copied from the command executed in the previous step.

One more thing we need here is the cluster IP. It should be available on the cluster settings page, or it can be retrieved via the command line. For GCP, the command should be similar to this (with your cluster name and zone filled in):

gcloud container clusters describe <cluster-name> --zone <zone> --format="value(endpoint)"

Another service connection we have to define is for the Docker registry. For the sake of simplicity, we will use Docker Hub, where all you need is to create an account (if you don’t have one). Then just supply whatever is needed in the form, and we can carry on with the application part.

Preparing an application

One of the things we should take into account while implementing apps in the cloud is the Twelve-Factor methodology. We are not going to describe the factors one by one since they are explained well enough here, but a few of them will be mentioned throughout the article.

For tutorial purposes, we’ve prepared a sample ASP.NET Core Web Application containing a single controller and a database context. It also contains a simple Dockerfile and Helm charts. You can clone/fork the sample project from here. Firstly, push it to a git repository (we will use Azure DevOps), because we will need it for CI. You can now add a new pipeline, choosing any of the available YAML definitions. Here we will define our build pipeline (CI), which looks like this:

trigger:
- master
pool:
  vmImage: 'ubuntu-latest'
variables:
  buildConfiguration: 'Release'
steps:
- task: Docker@2
  inputs:
    containerRegistry: 'dockerRegistry'
    repository: '$(dockerRegistry)/$(name)'
    command: 'buildAndPush'
    Dockerfile: '**/Dockerfile'
- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.SourcesDirectory)/charts'
    ArtifactName: 'charts'
    publishLocation: 'Container'

Such a definition builds a docker image and publishes it to the predefined docker registry. There are two custom variables used: dockerRegistry (for Docker Hub, replace it with your username) and name, which is just the image name (exampleApp in our case). The second task is used for publishing the artifact with the helm chart. These two (docker image & helm chart) will be used by the deployment pipeline.

Helm charts

Firstly, take a look at the file structure of our chart. In the main folder, we have Chart.yaml, which keeps chart metadata, requirements.yaml, with which we can specify dependencies, and values.yaml, which serves default configuration values. In the templates folder, we can find all Kubernetes objects that will be created along with the chart deployment. Then we have a nested charts folder, which is a collection of charts added as dependencies in requirements.yaml. All of them have the same file structure.

Let’s start with a focus on deployment.yaml - the definition of a Deployment controller, which provides declarative updates for Pods and ReplicaSets. It is parameterized with helm templates, so you will see a lot of {{ template [...] }} in there. The definition of this Deployment itself is quite standard, but we are adding a reference to the secret holding the SQL Server database password. We are hardcoding the ‘-mssql-linux-secret’ part because, at the time of writing this article, helm doesn’t provide a straightforward way to access sub-chart properties.

env:
- name: sa_password
  valueFrom:
    secretKeyRef:
      name: {{ template "exampleapp.name" $root }}-mssql-linux-secret
      key: sapassword

As we mentioned previously, we have the SQL Server chart added as a dependency. Its definition is pretty simple. We have to define the name of the dependency, which must match the folder name in the charts subfolder, and the version we want to use.

dependencies:
- name: mssql-linux
  repository: https://kubernetes-charts.storage.googleapis.com
  version: 0.8.0
[...]

For the mssql chart, there is one change that has to be applied in secret.yaml. Normally, this secret would be created on each deployment (helm upgrade) and would generate a new sapassword - which is not what we want. The simplest way to adjust that is by modifying the metadata and adding a pre-install hook. This guarantees that the secret is created just once, when installing the release.

metadata:
  annotations:
    "helm.sh/hook": "pre-install"

A deployment pipeline

Let’s focus on deployment now. We will be using Helm to install and upgrade everything that is needed in Kubernetes. Go to the Releases pipelines in Azure DevOps, where we will configure continuous delivery. You have to add two artifacts, one for the docker image and a second for the charts artifact. It should look like the image below.

In the stages part, we could add a few more environments, which would get deployed in a similar manner, but to a different cluster. The Deploy DEV stage is simply responsible for running a helm upgrade command. Before that, we need to install helm and kubectl and run the helm init command.

For the helm upgrade task, we need to adjust a few things.

  • set the Chart Path, where you can browse into the Helm charts artifact (it should look like: “$(System.DefaultWorkingDirectory)/Helm charts/charts”)
  • paste “image.tag=$(Build.BuildNumber)” into Set Values
  • check Install if release not present, or add --install as an argument. This behaves as helm install if the release doesn’t exist yet (i.e. on a clean cluster)

At this point, we should be able to deploy the application - create a release and run the deployment. You should see a green output :).

You can verify if the deployment went fine by running a kubectl get all command.

Making use of basic Istio components

Istio is a great tool which simplifies service management. It is responsible for handling things like load balancing, traffic behavior, metrics & logs, and security. Istio leverages Kubernetes sidecar containers, which are added to the pods of our applications. You have to enable this feature by applying an appropriate label to the namespace.

kubectl label namespace default istio-injection=enabled

All pods created from now on will have an additional container, called a sidecar container in Kubernetes terms. That’s a useful feature, because we don’t have to modify our application.

Two objects that we are using from Istio, and which are part of the helm chart, are Gateway and VirtualService. For the first one, we will quote the Istio definition, because it’s simple and accurate: “Gateway describes a load balancer operating at the edge of the mesh receiving incoming or outgoing HTTP/TCP connections”. That object is attached to the LoadBalancer object - we will use the one created by Istio by default. After the application is deployed, you will be able to access it using the LoadBalancer external IP, which you can retrieve with this command:

kubectl get service/istio-ingressgateway -n istio-system

You can retrieve the external IP from the output and verify that the /api/examples URL responds fine at that address.

Summary

In this article, we have created a basic CI/CD pipeline which deploys a single service into a Kubernetes cluster with the help of Helm. Further adjustments can include different types of deployment, publishing test coverage from CI, or adding more services to the mesh and leveraging additional Istio features. We hope you were able to complete the tutorial without any issues. Follow our blog for more in-depth articles around these topics in the future.

written by
Grape up Expert
Our experts

The state of Kubernetes - what upcoming months will bring for the container orchestration

Kubernetes has become a must-have container orchestration platform for every company that aims to gain a competitive advantage by delivering high-quality software at a rapid pace. What’s the state of Kubernetes at the beginning of 2020? Is there room for improvement? Here is a list of trends that should shape the first months of the upcoming year.

As a team that provides its own Multicloud Enterprise Kubernetes platform and empowers numerous companies in adopting K8s, we follow all the news that helps us prepare for the upcoming trends in using this cloud-native platform. And conferences like KubeCon + CloudNativeCon are among the best places to learn what’s new and what’s coming.

A few weeks ago, San Diego hosted KubeCon + CloudNativeCon North America, gathering 12 thousand cloud-native enthusiasts - a 50% increase in the number of attendees compared to the previous edition, which shows the scale of Kubernetes’ growth in popularity. During the event, we had a chance to hear about new trends and discuss further opportunities with industry experts. Most of the news announced in San Diego will influence the upcoming months in the cloud-native world. Below, we focus on the most important items.

Kubernetes is a must for the most competitive brands

What makes KubeCon so likable? Access to Kubernetes experts, networking with an amazing community of people gathered around CNCF, a chance to learn the trends before they become mainstream? For sure, but what else makes it so special? The answer lies in the hottest brands joining the cloud-native nation these days - Pinterest, Home Depot, Walmart, Tinder, and many more.

It’s obvious when tech companies present how they build their advantage using the latest technologies, but it becomes more intriguing when you have an opportunity to learn how companies like Adidas, Nike, or Tinder (yes, indeed) are using Kubernetes to provide their customers and users with extraordinary value.

As these examples show, we live in a software-driven world where the quality of delivered apps is crucial to staying relevant, regardless of the industry.

Enterprises need container orchestration to sustain their market share

The conference confirmed that Kubernetes is a standard in container orchestration and one of the key elements contributing to the successful implementation of a cloud-first strategy for enterprises.

But why should the largest companies be interested in adopting the newest technologies? Because their industries are being constantly disrupted by fresh startups utilizing agility and cutting-edge tech solutions. The only way to sustain their position is to keep evolving, and the way to achieve that comes down to adopting a cloud-native strategy and implementing Kubernetes. As Jonathan Smart once said - “You’re never done with improving and learning.”

Automate what can be automated

As more and more teams move Kubernetes to production, a large number of companies are working on solutions that help streamline and automate certain processes. That drives the growing market of tools associated with Kubernetes that enrich its usage.

For example, Helm, one of the key deployment tools in the cloud-native toolbox used by administrators, in its latest version simplifies and improves operations by getting rid of some dependencies, such as Tiller, the server-side component that used to run in the Kubernetes cluster.

Kubernetes-as-a-service and demand for Kubernetes experts

During this year’s KubeCon, many vendors presented complete solutions for Kubernetes across a range of domains, accelerating container orchestration. At previous events, we met vendors who had been providing storage, networking, and security components for Kubernetes. This evolution shows how far the environment built around the platform has developed. Such an extensive offer of solutions makes it easier for teams and organizations migrating to the cloud-native world to find a compromise between building and buying components and solutions.

Rancher announced a solution that may serve as an example of an interesting Kubernetes-as-a-service option. The company collaborated with ARM to design a highly optimized version of Kubernetes for the edge - packaged as a single binary with a small footprint to reduce the dependencies and steps needed to install and run K8s in resource-constrained environments (e.g. IoT or edge devices for ITOps and DevOps teams). By making K3s (a lightweight distribution built for small-footprint workloads) available and providing the beta release of Rio, their new application deployment engine for Kubernetes, Rancher delivers an integrated deployment experience from operations to the pipeline.

Kubernetes-as-a-service offerings on the market are gaining strength. The huge number of Kubernetes use cases entails another very important trend: companies are looking for talent in this field more than ever. Many of them used the conference to meet with experts, and the number of Kubernetes jobs has also increased. The demand for experts on the subject is huge.

Multicloud is here to stay

Are hybrid solutions becoming a standard? Many cloud providers have claimed to be the best providers for multicloud - and we observe that the approach is becoming more popular. Despite some doubts (regarding its complexity, security, regulations, or performance), enterprises are dealing well with implementing a multicloud strategy.

The world’s top companies are moving to multicloud, as this approach empowers them to gain exceptional agility and huge cost savings thanks to the possibility of separating their workloads into different environments and making decisions based on individual goals and specific requirements.

It is also a good strategy for companies working with a private cloud only, usually because they store sensitive data. As numerous case studies show, these businesses can be architected into multicloud solutions, where sensitive data is still stored securely on-premise, while other workloads are moved into the public cloud, which makes them easily scalable and easier to maintain.

Kubernetes is everywhere, even in the car….

During KubeCon, Rafał Kowalski, our colleague from Grape Up, shared his presentation about running Kubernetes clusters in the car - "Kubernetes in Your 4x4 - Continuous Deployment Direct to the Car". Rafał showed how to use Kubernetes, KubeEdge, k3s, Jenkins, and RSocket for building continuous deployment pipelines which ship software directly to the car and deal with rollbacks and connectivity issues. You can watch the entire video here:

https://www.youtube.com/watch?v=zmuOxFp3CAk&feature=youtu.be

…. and can be used in various devices

But these are not all of the possibilities; other devices such as drones or any IoT devices can also utilize containers. The need for increased automation of cluster management and the ability to quickly rebuild clusters from scratch were the conclusions emerging from the presentations mentioned above.

The remarkable growth in the number of companies using Kubernetes and in the development of utilities shows that there are still open needs in terms of simplicity and scalability of operational tooling - security, data management, and programming tools - and continued work in this area should be expected.

“Kubernetes has established itself as the de facto standard for container orchestration” - these were the most frequently repeated words. It’s good to observe the development of the ecosystem around Kubernetes that strives to provide more reliable and cheaper experiences for enterprises that want to push their strategic initiatives to the limit.

written by
Grape up Expert
Software development

Reactive service to service communication with RSocket – load balancing & resumability

This article is the second one in a mini-series which will help you get familiar with RSocket – a new binary protocol which may revolutionize machine-to-machine communication in distributed systems. In the following paragraphs, we discuss the load balancing problem in the cloud and present the resumability feature, which helps to deal with network issues, especially in IoT systems.

  • If you are not familiar with RSocket basics, please see the previous article available here
  • Please note that the code examples presented in the article are available on GitHub

High availability & load balancing as a crucial part of enterprise-grade systems

Application availability and reliability are crucial in many business areas like banking and insurance. In these demanding industries, services have to be operational 24/7, even during high traffic, periods of increased network latency, or natural disasters. To ensure that the software is always available to the end-users, it is usually deployed redundantly, across multiple availability zones.

In such a scenario, at least two instances of each microservice are deployed in at least two availability zones. This technique helps our system become resilient and increases its capacity - multiple instances of a microservice are able to handle a significantly higher load. So where is the trick? The redundancy introduces extra complexity. As engineers, we have to ensure that the incoming traffic is spread across all available instances. There are two major techniques which address this problem: server load balancing and client load balancing.

The first approach is based on the assumption that the requester does not know the IP addresses of the responders. Instead, the requester communicates with the load balancer, which is responsible for spreading the requests across the microservices connected to it. This design is fairly easy to adopt in the cloud era. IaaS providers usually have built-in, reliable solutions, like Elastic Load Balancer in Amazon Web Services. Moreover, such a design makes it possible to use routing strategies more sophisticated than plain round robin (e.g. adaptive load balancing or chained failover). The major drawback of this technique is the fact that we have to configure and deploy extra resources, which may be painful if our system consists of hundreds of microservices. Furthermore, it may affect latency – each request takes an extra “network hop” through the load balancer.

The second technique inverts the relation. Instead of a central point used to connect to responders, the requester knows the IP addresses of each and every instance of the given microservice. With that knowledge, the client can choose the responder instance to which it sends the request or with which it opens the connection. This strategy does not require any extra resources, but we have to ensure that the requester has the IP addresses of all instances of the responder (see how to deal with it using the service discovery pattern). The main benefit of the client load balancing pattern is its performance – by removing one extra “network hop”, we may significantly decrease latency. This is one of the key reasons why RSocket implements the client load balancing pattern.

Client load balancing in RSocket

At the code level, the implementation of client load balancing in RSocket is pretty straightforward. The mechanism relies on the LoadBalancedRSocketMono object, which works as a bag of available RSocket instances provided by RSocket suppliers. To access RSockets, we subscribe to the LoadBalancedRSocketMono, whose onNext signal emits a fully-fledged RSocket instance. Moreover, it calculates statistics for each RSocket, so that it is able to estimate the load of each instance and, based on that, choose the one with the best performance at the given point in time.

The algorithm takes into account multiple parameters like latency, the number of maintained connections, and the number of pending requests. The health of each RSocket is reflected by the availability parameter, which takes values from 0 to 1, where 0 indicates that the given instance cannot handle any requests and 1 is assigned to a fully operational socket. The code snippet below shows a very basic example of the load-balanced RSocket, which connects to three different instances of the responder and executes 100 requests. Each time, it picks up an RSocket from the LoadBalancedRSocketMono object.

@Slf4j
public class LoadBalancedClient {

    static final String HOST = "localhost";
    static final int[] PORTS = new int[]{7000, 7001, 7002};

    public static void main(String[] args) {

        List<RSocketSupplier> rsocketSuppliers = Arrays.stream(PORTS)
                .mapToObj(port -> new RSocketSupplier(() -> RSocketFactory.connect()
                        .transport(TcpClientTransport.create(HOST, port))
                        .start()))
                .collect(Collectors.toList());

        LoadBalancedRSocketMono balancer = LoadBalancedRSocketMono.create((Publisher<List<RSocketSupplier>>) s -> {
            s.onNext(rsocketSuppliers);
            s.onComplete();
        });

        Flux.range(0, 100)
                .flatMap(i -> balancer)
                .doOnNext(rSocket -> rSocket.requestResponse(DefaultPayload.create("test-request")).block())
                .blockLast();
    }
}

It is worth noting that the client load balancer in RSocket deals with dead connections as well. If any of the RSocket instances registered in the LoadBalancedRSocketMono stops responding, the mechanism will automatically try to reconnect. By default, it will execute 5 attempts within 25 seconds. If it does not succeed, the given RSocket will be removed from the pool of available connections. Such a design combines the advantages of server-side load balancing with the low latency and reduced number of “network hops” of client-side load balancing.

Dead connections & the resumability mechanism

The question which may arise in the context of dead connections is: what will happen if I have only a single instance of the responder and the connection drops due to network issues? Is there anything we can do about it? Fortunately, RSocket has a built-in resumability mechanism.

To clarify the concept, let’s consider the following example. We are building an IoT platform which connects to multiple temperature sensors located in different places, most of them far away from the nearest buildings and internet connection sources. Therefore, the devices connect to cloud services using GPRS. The business requirement for our system is that we need to collect temperature readings every second, in real time, and we cannot lose any data.

In the case of machine-to-machine communication within the cloud, streaming data in real time is not a big deal, but if we consider IoT devices located in areas without access to a stable, reliable internet connection, the problem becomes more complex. We can easily identify two major issues we may face in such a system: network latency and connection stability. From a software perspective, there is not much we can do about the first one, but we can try to deal with the latter. Let’s tackle the problem with RSocket, starting with picking the proper interaction model. The most suitable in this case is the request stream method, where the microservice deployed in the cloud is the requester and the temperature sensor is the responder. After choosing the interaction model, we apply the resumability mechanism. In RSocket, we do it with the resume() method invoked on the RSocketFactory, as shown in the examples below:

@Slf4j
public class ResumableRequester {

    private static final int CLIENT_PORT = 7001;
    // shared with the responder example below
    static final String HOST = "localhost";
    static final Duration RESUME_SESSION_DURATION = Duration.ofSeconds(60);

    public static void main(String[] args) {
        RSocket socket = RSocketFactory.connect()
                .resume()
                .resumeSessionDuration(RESUME_SESSION_DURATION)
                .transport(TcpClientTransport.create(HOST, CLIENT_PORT))
                .start()
                .block();

        socket.requestStream(DefaultPayload.create("dummy"))
                .map(payload -> {
                    log.info("Received data: [{}]", payload.getDataUtf8());
                    return payload;
                })
                .blockLast();
    }
}

@Slf4j
public class ResumableResponder {

    private static final int SERVER_PORT = 7000;
    static final String HOST = "localhost";
    static final Duration RESUME_SESSION_DURATION = Duration.ofSeconds(60);

    public static void main(String[] args) throws InterruptedException {

        RSocketFactory.receive()
                .resume()
                .resumeSessionDuration(RESUME_SESSION_DURATION)
                .acceptor((setup, sendingSocket) -> Mono.just(new AbstractRSocket() {
                    @Override
                    public Flux<Payload> requestStream(Payload payload) {
                        log.info("Received 'requestStream' request with payload: [{}]", payload.getDataUtf8());
                        return Flux.interval(Duration.ofMillis(1000))
                                .map(t -> DefaultPayload.create(t.toString()));
                    }
                }))
                .transport(TcpServerTransport.create(HOST, SERVER_PORT))
                .start()
                .subscribe();

        log.info("Server running");

        Thread.currentThread().join();
    }
}

  •  Please note that to run the provided examples you need ‘socat’ installed on your machine; please see the README file for more details

The mechanism works similarly on the requester and responder side; it is based on a few components. First of all, there is a ResumableFramesStore which works as a buffer for the frames. By default, it stores them in memory, but we can easily adjust it to our needs by implementing the ResumableFramesStore interface (e.g. to store the frames in a distributed cache, like Redis). The store saves the data emitted between keep-alive frames, which are sent back and forth periodically and indicate whether the connection between the peers is stable. Moreover, the keep-alive frame contains a token which determines the last received position for the requester and the responder. When a peer wants to resume the connection, it sends the resume frame with an implied position. The implied position is calculated from the last received position (the same value we have seen in the keep-alive frame) plus the length of the frames received since then - for example, if the last received position was 100 and 40 bytes of frames arrived afterwards, the implied position is 140. This algorithm applies to both parties of the communication; in the resume frame, it is reflected by the last received server position and first client available position tokens. The whole flow of the resume operation is shown in the diagram below:

By adopting the resumability mechanism built into the RSocket protocol, we can reduce the impact of network issues with relatively low effort. As shown in the example above, resumability might be extremely useful in data streaming applications, especially in the case of device-to-cloud communication.

Summary

In this article, we discussed the more advanced features of the RSocket protocol, which are helpful in reducing the impact of the network on system operation. We covered the implementation of the client load balancing pattern and the resumability mechanism. These features, combined with the robust interaction model, constitute the core of the protocol.

In the last article of this mini-series, we will cover the available abstraction layers on top of RSocket.

  • Please notice that fully working examples are provided here →
written by
Grape up Expert
Software development

Painless view controller configuration in Swift

Back in the pre-storyboard era of iOS development, developers had to write hundreds of lines of boilerplate code to manage UIViewController object hierarchies. Back then, some were inventing generic approaches to configuring controllers and transitions between them. Others were satisfied with ad-hoc view controller creation and presented controllers directly from other controllers’ code. But things changed when Apple introduced storyboards in iOS 5. It was a huge step forward in UI design for iOS. Storyboards introduced the ability to visually define app screens and - most importantly - the transitions between them (called segues) in a single file. Storyboard segues allow us to discard all the boilerplate code related to transitions between view controllers.

Of course, every solution has its advantages and disadvantages. When it comes to storyboards, some may note issues such as hard-to-resolve merge conflicts, coupling of view controllers, poor reusability, etc. Some developers don’t even use storyboards because of these disadvantages. For others, the advantages play a more important role. However, the real bottleneck of storyboards is the initialization of view controllers. In fact, there is no true initialization for view controllers presented by storyboard segues.

Problems with the view controller configuration

Let’s start with some basics. In Objective-C/Swift, in order to give an object an initial state, the initializer (init()) is called. This call assigns values to the properties of the class. It always happens at the point where the object is created. When subclassing any class, we may provide an initializer, and this is the only proper way. We may also provide such an initializer for a UIViewController subclass. However, when such a controller is created/presented using a storyboard, its creation goes through a particular initializer – init(coder:). Overriding it in a subclass may give us the ability to initialize properties added by the subclass. However, we don’t have the ability to pass additional arguments to the overridden method. Moreover, even if we had such an ability, it would make no sense, because for storyboard-driven view controllers there is no particular point in code which allows us to pass data to the initializer. That is, we cannot catch the moment of creation of such a controller. The creation of view controllers managed by storyboard segues is hidden from the programmer. It happens when a segue to the controller is triggered – either entirely handled by the system (when the triggering action is set up in the storyboard file) or using the performSegue() method.

Apple, however, provides a place where we can pass some data to an already created view controller after the segue is triggered. It’s the prepare(for:sender:) method. From its first parameter (of UIStoryboardSegue type), we can get the segue’s destination view controller. Because the controller has already been created (initialization is performed when the segue is triggered), the only option for passing the required data is to configure it. This means that after the initialization, but before prepare(for:sender:) is called, the properties of the controller that hold such data either have no initial values or have fake ones. While the second option is meaningless in most cases, the first one is widely used. Such absence of data means that the corresponding controller’s properties should be of optional types. Let’s take a look at the following sample:

override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
    if segue.identifier == "ToSomeViewControllerSegueID",
       let controller = segue.destination as? SomeViewController {
        controller.someValue = 12345
    }
}

This is how view controller configuration is implemented in most cases when dealing with segues:

  •  check segue id;
  •  get the destination view controller from the segue object;
  •  try to cast it to the type of the expected view controller subclass.

In case all conditions are satisfied, we can set values on the properties of the controller that need to be configured. The problem with this approach is that it carries too much service code related to verification and data extraction. It may not be visible in simple cases like the one shown above. However, taking into account the fact that each view controller in an application often has transitions to several other view controllers, such service code becomes real boilerplate we’d like to avoid. Take a look at the following example that generalizes the problem with the prepare(for:sender:) implementation.

override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
    if segue.identifier == "ToViewControllerA",
       let controller = segue.destination as? ViewControllerA {
        // configure View Controller A
    } else if segue.identifier == "ToViewControllerB",
              let controller = segue.destination as? ViewControllerB {
        // configure View Controller B
    } else if segue.identifier == "ToViewControllerC",
              let controller = segue.destination as? ViewControllerC {
        // configure View Controller C
    } else ...
        ...
    } else if segue.identifier == "ToViewControllerZ",
              let controller = segue.destination as? ViewControllerZ {
        // configure View Controller Z
    }
}

All those if… else if… blocks make the code hard to read. Moreover, each block is for a different view controller that has to be configured. That is, the more view controllers are going to be presented by this one, the more if… else if… blocks will be added. This, in turn, reveals another problem with such configuration: there is a single method in a particular controller that does the configuration for every controller we’re going to present.

Solution

Let’s try to find an approach to view controller configuration that eliminates the outlined problems. We’re limited to the usage of prepare(for:sender:), since it’s the only point where the configuration can be done. So we cannot do anything about the type of the destination view controller or about the check of the segue identifier. Instead, we’d like to generalize the process of configuration in a way that allows us to have a single type check and a single identifier check. That is, a check against some generalized type of destination view controller and a variable segue identifier, rather than enumerating all the possible concrete types and identifiers. For this, we need to somehow pass the information about the type and the segue identifier to the prepare(for:sender:) method. We would like to have something like the following:

override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
    if segue.identifier == <segue identifier>,
       let viewerController = segue.destination as? <destination view controller type> {
        // configure viewerController
    }
}

In order to have a single piece of configuration code for all controllers, we need two things: a unified interface to configure a controller, and a way to get the configuration data for the particular destination controller and segue identifier. Let’s define each part of the solution.

1. Unified interface for view configuration

As defined previously, configuration means setting values on one or more properties of the destination view controller. So it’s natural to associate the configuration interface with the destination controller rather than with the one that triggers the segue. Obviously, each destination view controller has a different number of properties of different types to configure.

In order to provide a unified configuration interface, we may implement a method for configuring each controller. We pass to it the values that will be assigned to the corresponding controller properties. To unify such a method, it should have the same signature for every configured controller. To achieve this, we wrap the set of passed configuration values into a single object. Then the method always has one argument - no matter how many properties should be set. The type of the argument is the type of the wrapping object, and it is different for each view controller. This means that the view controller should implement a method for configuring and somehow define the type of the method’s argument. This is a perfect task for protocols with associated types. Let’s define the following protocol:

protocol Configurable {
    associatedtype ConfigurationType
    func configure(with configuration: ConfigurationType)
}

Each view controller that is going to be configured (is configurable) should conform to this protocol by implementing the configure(with:) method and defining a concrete type for ConfigurationType. In the easiest case, where we only have one property that needs to be configured, the ConfigurationType is the type of that property. Otherwise, the ConfigurationType may be defined as a structure or tuple to represent several values. Consider the following examples:

class SomeViewController: UIViewController, Configurable {
    var someValue: Int?
    var someObject: MyModelType?
    ...

    func configure(with configuration: (value: Int, object: MyModelType)) {
        someValue = configuration.value
        someObject = configuration.object
    }
}

class OtherViewController: UIViewController, Configurable {
    var underlyingObject: MyObjectType?
    ...

    func configure(with object: MyObjectType) {
        underlyingObject = object
    }
}

2. Defining the configuration data for view controller

Now, let’s go back to the controller that triggers a segue. We’re going to use the configuration protocol we’ve defined. For this, we need the data to pass to the configure(with:) method. It should look something like the following:

override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
    if let segueIdentifier = segue.identifier {
        // 1. Get the configuration object
        //    for segue.destination and segueIdentifier
        // 2. Pass the configuration object to the destination controller
    }
}

Let’s focus on how we should obtain the configuration object. Each segue is unique within the controller that triggers it. For each segue, we have a single destination controller that has its own type of configuration. This means that the segue id unambiguously defines the configuration type that should be used for configuring the destination view controller.

On the other hand, just returning a configuration of a concrete type per segue id is not enough. If we did so, we would need to somehow pass it to a destination controller whose type is UIViewController, which has nothing to do with the configuration. At the same time, we cannot use the Configurable protocol as the type of an object directly, because it has an associated type constraint. That is, we cannot cast the destination view controller to the Configurable type as follows:
(segue.destination as? Configurable)?.configure(with: data). Instead, we need to use some proxy generic type that is constrained to be Configurable.

Also, creating all the configuration objects for the controllers in a single method makes no sense, since it brings back the same issue as the one with prepare(for:sender:) - a concentration of code intended to configure different objects in a single method. Instead, the better solution is to group the code creating a particular configuration and the type of the controller being configured into a separate object. Consider the following example:

class Configurator<ConfigurableType: Configurable> {
    let configurationProvider: () -> ConfigurableType.ConfigurationType

    init(configuringCode: @escaping () -> ConfigurableType.ConfigurationType) {
        self.configurationProvider = configuringCode
    }

    func performConfiguration(of object: ConfigurableType) {
        let configuration = configurationProvider()
        object.configure(with: configuration)
    }
}

In the code above, a single Configurator<T> instance is responsible for configuring a controller of a particular type. The code that creates the configuration is injected into the configurator in the init() method during creation.

According to the reasoning given above, we should associate a segue ID with a particular configuration and type. With the Configurator<T> approach, the easiest way to do it is to create a mapping object where the key is a segue ID and the value is the corresponding Configurator<T> instance. We may also create those Configurator<T> objects in place, within the map definition. This makes the code clearer and more readable. The following example demonstrates such a map:

var segueIDToConfigurator: [String : Any] {
    return [
        "ToSomeViewControllerSegueID": Configurator<SomeViewController> {
            return (value: 123, object: MyModelType())
        },
        "ToOtherViewControllerSegueID": Configurator<OtherViewController> {
            return MyObjectType()
        }
    ]
}

Let’s now try to use the configuration from the dictionary above in the prepare(for:sender:) method. Take a look at the following example:

override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
    if let segueIdentifier = segue.identifier,
       let configuring = segueIDToConfigurator[segueIdentifier] as? Configurator {
        configuring.performConfiguration(of: segue.destination)
    }
}

The problem is that the value type of the segueIDToConfigurator dictionary is Any. We cannot call any method on it directly. Instead, we need to cast it to a type that contains the performConfiguration(of:) method. However, the only type in our implementation that contains the performConfiguration(of:) method is the generic type Configurator<T>, and to use it we would have to supply the concrete type of the destination view controller in place of the generic type placeholder. At this point, the problem is in the prepare(for:sender:) method: there we don’t have the information about that view controller type. Let’s try to resolve the problem. We need Configurator<T> only to call the performConfiguration(of:) method. Instead of having the whole interface of the Configurator<T> type inside the prepare(for:sender:) method, we may use some intermediate interface that does not depend on a generic type and still allows us to call performConfiguration(of:).

var segueIDToConfigurator: [String : Configuring] {
  return [
    "ToSomeViewControllerSegueID": Configurator<SomeViewController> {
      return (value: 123, object: MyModelType())
    },
    "ToOtherViewControllerSegueID": Configurator<OtherViewController> {
      return MyObjectType()
    }
  ]
}

For this, let’s create a protocol  Configuring and modify the  Configurator<T> type to make it conform to it. The example below demonstrates the refined approach.

protocol Configuring {
  func performConfiguration<SomeType>(of object: SomeType) throws
}

// A simple error type thrown when the passed object cannot be configured
struct ConfigurationError: Error { }

class Configurator<ConfigurableType: Configurable>: Configuring {

  let configurationProvider: () -> ConfigurableType.ConfigurationType

  init(configuringCode: @escaping () -> ConfigurableType.ConfigurationType) {
    self.configurationProvider = configuringCode
  }

  func performConfiguration<SomeType>(of object: SomeType) throws {
    if let configurableObject = object as? ConfigurableType {
      let configuration = configurationProvider()
      configurableObject.configure(with: configuration)
    } else {
      throw ConfigurationError()
    }
  }
}

Now performConfiguration(of:) is a generic method, which allows us to call it without knowing the exact type of the object being configured. The method, however, has become throwing. This is because its argument type is widened so that an arbitrary type can be passed, while the method can still handle only objects that conform to the Configurable protocol. If the passed object is not Configurable, there is nothing we can do with it, so we throw an error.

With the Configuring protocol in place, the segue-to-configurator dictionary can be typed as [String : Configuring], exactly as declared above. This allows us to use the Configuring objects inside the prepare(for:) method as shown below:

override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
  if let segueIdentifier = segue.identifier,
     let configuring = segueIDToConfigurator[segueIdentifier] {
    do {
      try configuring.performConfiguration(of: segue.destination)
    } catch let configurationError {
      fatalError("Cannot configure \(segue.destination). " +
                 "Error: \(configurationError)")
    }
  }
}

Refining the solution

The prepare(for:) implementation above is the same for any controller that adopts the described approach. There are several ways to avoid such code duplication, but keep in mind that each has its downsides.
The first and most obvious way is to use a base view controller across the project that implements the prepare(for:) method and the segueIDToConfigurator property holding the configurations:

class BaseViewController: UIViewController {

  var segueIDToConfigurator: [String: Configuring] {
    return [String: Configuring]()
  }

  override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
    if let segueIdentifier = segue.identifier,
       let configuring = segueIDToConfigurator[segueIdentifier] {
      do {
        try configuring.performConfiguration(of: segue.destination)
      } catch let configurationError {
        // Throw an error or just write to a log
        // if you want to ignore it silently
        fatalError("Cannot configure \(segue.destination). " +
                   "Error: \(configurationError)")
      }
    }
  }
}



class MyViewController: BaseViewController {

  // Define the needed configurators
  override var segueIDToConfigurator: [String: Configuring] {
    ...
  }
}

The advantage of the first way is that any controller subclassing BaseViewController only needs to define the data required for the configuration, that is, override the segueIDToConfigurator property. However, it forces all view controllers to subclass BaseViewController, which makes it impossible to use the system UIViewController subclasses such as UITableViewController.

The second way is to use a dedicated protocol that defines the interface of an object capable of configuring other controllers. Consider the following example:

protocol ViewControllerConfiguring {
  var segueIDToConfigurator: [String: Configuring] { get }
}

extension ViewControllerConfiguring {

  func configure(segue: UIStoryboardSegue) {
    if let segueIdentifier = segue.identifier,
       let configuring = segueIDToConfigurator[segueIdentifier] {
      do {
        try configuring.performConfiguration(of: segue.destination)
      } catch let configurationError {
        fatalError("Cannot configure \(segue.destination). " +
                   "Error: \(configurationError)")
      }
    }
  }
}



class MyViewController: UIViewController, ViewControllerConfiguring {

  // Define the needed configurators
  var segueIDToConfigurator: [String: Configuring] = ...

  // Each view controller still has to implement this method
  override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
    configure(segue: segue)
  }
}

This way is more flexible than the first one. The protocol can be adopted by any object that is going to configure a segue's destination controller, so it is not limited to UIViewController subclasses, nor does it force BaseViewController as the only superclass. On the other hand, each view controller still needs to override prepare(for:) and call configure(segue:) in its implementation.

Summary

In this article, I described an approach to configuring destination view controllers with clean and straightforward code when using storyboard segues. The approach is possible thanks to useful Swift concepts such as generics and protocols with associated types. The code is also safe, as it uses static typing wherever possible and handles errors; the dynamic casts are concentrated in a single place and the possible errors are handled only there. This approach lets us avoid unnecessary boilerplate in prepare(for:) methods and, at the same time, makes configuring particular view controllers clearer and more robust thanks to the dedicated Configurable protocol.

written by
Andrii Biehunov
Our experts

Main challenges while working in multicultural half-remote teams

We know that adjusting to a new working environment may be tough. It's even more challenging when you have to collaborate with people located in different offices around the world. We have both experienced such a demanding situation and want to describe a few problems and suggest some ways to tackle them. We hope to help fellow professionals who are at the beginning of this ambitious career path. In this article, we want to elaborate on working in multicultural, half-remote teams and the main challenges related to it. To dispel doubts, by "half-remote team" we mean a situation in which part of the group works together on-site while the other part (or parts) of the crew works elsewhere, individually or in larger groups. We've gathered these experiences while working in such teams in Europe and the USA.

It's nothing new that some seemingly harmless things can nearly destroy relationships in a team and spark internal tension. One of us worked in a team where six people worked together in one office and the rest of the team (three people) in a second office. We didn't know why, but something started to go wrong: these two groups of people started calling themselves "we" and "them". One team, divided into two mutually opposing groups of people.

Those groups started to defy each other, gossip about each other, and disturb each other at work. What's more, nobody tried to fix it, the conflicts kept growing, and teamwork became impossible. The project was closed. One year later, we started to observe the same thing in another of our projects. There the situation was different: there were three groups of people, a larger group of four and two smaller ones of two people each. We delved into the state of things and discovered the reasons for this situation.

Information exchange

One of the reasons was the information exchange. The biggest group was located together, and they often discussed things related to work. The discussion frequently turned into planning and decision-making and, as you can guess, the rest of the team did not have the opportunity to take part. The large group made decisions without consulting the others, and it really annoyed the rest…

What was done wrong, and how can you avoid it in your project? The team should act as one team, even if its members don't work in the same location. Everyone should take an active part in making decisions. Although this is difficult to achieve, all team members must be aware of it; they must treat the rest of the team as if they were sitting next to them.

Firstly, if an idea is developed during a discussion, the group must capture it and present it to the rest of the team for further analysis. You should avoid situations where an idea is known only to a local group of people. It reduces knowledge about the project and increases the frustration of the other developers. They do not feel part of the team, and they do not feel that they have an impact on its development. What's more, if the idea does not please the rest, they begin to treat its authors with hostility, which creates conflicts and leads to a situation where people start to say "we" and "them". Important decisions should not be made by part of the team; they should be taken by the whole team. If a smaller group has something to talk about, everyone should know about it and have a chance to join them (even remotely!).

Secondly, if a group notices they are discussing things other people may be interested in, they should postpone the local discussion and create a remote room where it can be continued. Anyone can then join as if they were sitting next to them.

Thirdly, if it was not possible to include others in the conversation, a summary of it should be saved and made available to all team members.

Team integration

The second reason we found was the integration of the parts of the team. Naturally, people sitting together knew each other better, and thus a natural division into groups within the team was formed. Sadly, this cannot be avoided… but we can reduce the impact of this factor.

Firstly, if possible, we should ensure the integration of all parts of the team. They have to meet at least once, and preferably at regular meetings not related to work, so-called integration trips.

Secondly, mutual trust should be built within the team. The team should talk about difficult situations in its full composition, not in local groups over coffee. And if a local conversation did take place, the problem should be presented to the whole team. Everyone should be able to speak honestly and feel comfortable in the team; it is very important!

Language and insufficient communication

Another obstacle is a different culture or language. If there are people who speak different languages in the team, they will usually use English, which will not be a native language for part of the team… Different team members may have different levels of English, and less skilled members may not understand intricate phrases.

It is very important to make sure everyone understands a given statement. If you know that some people in your team are not so fluent in English, you can ask and make sure they understood everything. Confidence should be built inside the team; everyone should feel that they can ask for an explanation of a statement in the simplest words without being taunted or patronized. We have seen such a problem many times, especially in multicultural teams. A lack of understanding leads to misunderstandings and can bring down a project. Each team member should keep learning and improving their skills, and the team should support colleagues with lower language skills, politely correcting them when they use some language form incorrectly. We recommend doing it in private, unless the confidence in the team is so high that it can be done in a group.

Communication itself can also lead to misunderstandings. At the beginning of our careers our language skills were not the best; our statements were very simple and crude. As a result, sometimes our messages were perceived as aggressive… We did not realize it until we started to notice the tension between us and some of the team members. It is very difficult to remedy this, because after all, we do not know what others think. Therefore, a small piece of advice from us: talk to each other, seriously, and try to build a culture of open feedback in the team, addressing even uncomfortable topics. Even if you have a language problem, it is sometimes better to describe something in a hundred simple sentences than not to speak at all...

Time difference

Let's focus on one more challenging difficulty that may cause a lot of trouble while working in half-remote teams. In teams distributed over a larger area of the world, the time difference between team members' locations can be very hard to overcome. We have been working in a team whose members were located in the USA (on both the east and west coasts), Australia and Poland. In our experience, it is nearly impossible to gather all team members together because of the working hours in those locations. We have observed some common issues that such a situation may cause.

Team members working in different time zones have limited capabilities for teamwork. There is often not enough time for team activities, like technical or non-work-related discussions over a cup of coffee, that build team spirit and good relations between members. It is impossible to integrate distributed teams without regular meetings in one place. We have seen how such odds and ends lead to the "we" and "they" divisions mentioned before. It is also a blocker when it comes to applying good programming practices in the project, like pair programming and knowledge sharing.

Distributed teams are more difficult to manage, and some Agile practices are not applicable at all, as they often require the participation of all team members. In the case of our team, Scrum did not work at all, because we could not organize successful planning sessions, sprint reviews, retrospectives and demos in which everyone's input matters. It was a common situation that after planning team members did not know what they were supposed to do next and first needed to discuss something with absent teammates.

If we look at distributed team performance, it will usually seem lower than in the case of local teams. That is mainly because of the inevitable delays when one team member needs assistance from another. Imagine that you start working and after an hour you encounter a problem that requires your teammate's help, but they will wake up no sooner than in 7 hours. You have to postpone the task you were working on and focus on some other one, which usually slows your work down. And that is the sunny-day scenario, because there might be more serious issues where you cannot do anything else in the meantime (e.g. you have broken all environments including production, a backup was a "nice to have" during planning, and your mate from Australia is the only one who can restore it). It also takes more time to exchange information, process code reviews and share knowledge about a project if we cannot reach other team members immediately when they are needed.

On the other hand, distributed teams have some advantages. Many projects or professions require 24/7 client support, and in this case it is much easier to achieve such time coverage. It can save a team from on-call duty and other inconveniences.

We have learned that there is no cure for all the problems that distributed teams struggle with, but the impact of some of them can be reduced. Companies that are aware of how the time difference impacts team performance often offer the possibility to work remotely from home with fully flexible hours. In some cases it works and it is faster to get things done, but it does not solve all problems on a daily basis, because everyone wants to live their private life as well, meet friends in the evening or grab a beer and watch a TV series rather than work late at night. Moreover, team integration and cooperation issues could be addressed by frequent travel, but it is expensive and the majority of people cannot leave home for a longer period of time.

Summary

To sum it up, multicultural half-remote teams are really challenging to manage. Distributed teams struggle with a lot of troubles, such as information exchange, teamwork, communication, and integration, which may be caused by cultural differences, remote communication and the time difference between team members. Without these, there is just a bunch of individuals who cannot act as a team. Despite the above tips for solving some of the problems, it is hard to avoid a lack of partnership among team members, which may lead to divisions, misunderstandings and team collapse.

And while the struggles described above are real, we can't forget why we do it. Building a distributed team allows a company to acquire talent often not available on the local market. By creating an international environment, the same company can gain a wider perspective and better understand different markets. Diversification of the workforce can also be a lifesaver in an emergency that would endanger a company whose entire team works in one location. We at Grape Up share different experiences, and thanks to knowledge exchange, our team members are prepared to work in such a demanding environment.

written by
Grape up Expert
Software development

Server Side Swift with Vapor - end to end application with Heroku cloud

In this tutorial, I want to show you the whole process of building back-end and front-end Swift applications and pushing the back-end side to the Heroku cloud.

First, please make sure that you have at least Xcode 10 installed. Let's start with the back-end side.

Open a terminal, check your Swift version and make sure that it is 4.2:

swift --version

Time to install the Vapor CLI; for this we will use Homebrew.

brew install vapor/tap/vapor

The Vapor CLI is now installed.

Let's create a new project. The Vapor CLI has simple commands for lots of things; you will learn most of them later in this tutorial.

vapor new {your-project-name}

Go into the project directory and build it using the "vapor build" command:

vapor build

Let's also create an Xcode project using "vapor xcode"; it will be needed later.

Then verify that your back end works fine locally:

vapor run

As we can see, the server starts on our machine. To make sure it works, go to a web browser and check localhost:8080.

Yay! Here is your first application running with Vapor. Let's try to deploy it to the cloud, but first we need to install the Heroku CLI.

brew install heroku/brew/heroku

Now let's deploy the application to the cloud. First you need a free Heroku account; I will skip the signup process in this tutorial. Once the account is created, go back to the terminal and initialize Heroku in the project:

vapor heroku init

Before we push to Heroku, we have to change the Heroku stack from heroku-18 to heroku-16, as heroku-18 is still in the beta stage and doesn't support Vapor yet.
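This can be done with the Heroku CLI, for example (the app name below is a placeholder for your own):

heroku stack:set heroku-16 -a {your-heroku-app-name}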

Let's deploy: git push heroku master.

The app is now deployed. You can log in to your Heroku account via a web browser and see that it is running and that it works!
Now we need to configure the project. Go to Xcode or another source editor and add the PostgreSQL framework: in Package.swift we need to add the FluentPostgreSQL dependency.
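As a rough sketch, the resulting Package.swift could look similar to the following (the project name and the version numbers are examples and may differ in your setup):

// swift-tools-version:4.0
import PackageDescription

let package = Package(
    name: "{your-project-name}",
    dependencies: [
        // Vapor web framework
        .package(url: "https://github.com/vapor/vapor.git", from: "3.0.0"),
        // Fluent ORM driver for PostgreSQL
        .package(url: "https://github.com/vapor/fluent-postgresql.git", from: "1.0.0")
    ],
    targets: [
        .target(name: "App", dependencies: ["FluentPostgreSQL", "Vapor"]),
        .target(name: "Run", dependencies: ["App"]),
        .testTarget(name: "AppTests", dependencies: ["App"])
    ]
)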

Run vapor clean, vapor build and then vapor xcode.

It's time to configure the database and clean up the project. First of all, go to the configure.swift file and remove FluentSQLite and the code related to it. Then we need to configure the PostgreSQL database in our project. A few things need to be done: at the beginning of the file import FluentPostgreSQL and register the provider in services.

try services.register(FluentPostgreSQLProvider())

Then we need to create a database config and register the database.

// Configure a database
var databases = DatabasesConfig()
let databaseConfig: PostgreSQLDatabaseConfig

if let url = Environment.get("DATABASE_URL") {
    guard let urlConfig = PostgreSQLDatabaseConfig(url: url) else {
        fatalError("Failed to create PostgresConfig")
    }
    print(urlConfig)
    databaseConfig = urlConfig
} else {
    let databaseName: String
    let databasePort: Int
    if (env == .testing) {
        databaseName = "vapor-test"
        if let testPort = Environment.get("DATABASE_PORT") {
            databasePort = Int(testPort) ?? 5433
        } else {
            databasePort = 5433
        }
    } else {
        databaseName = Environment.get("DATABASE_DB") ?? "vapor"
        databasePort = 5432
    }
    let hostname = Environment.get("DATABASE_HOSTNAME") ?? "localhost"
    let username = Environment.get("DATABASE_USER") ?? "robertbaran"
    databaseConfig = PostgreSQLDatabaseConfig(hostname: hostname, port: databasePort, username: username, database: databaseName, password: nil)
}

let database = PostgreSQLDatabase(config: databaseConfig)
databases.add(database: database, as: .psql)
services.register(databases)

Once the database is registered, we need to create our model and controller. The example project contains a Todo model and controller; you can remove them, as we will create new ones. The idea for the app is a shopping list, so we need to think about what is required for that. Since the database is already registered, we create a model which will contain our productName and id. Create a ShoppingList.swift file, which will be our PostgreSQL model.

import FluentPostgreSQL
import Vapor

final class ShoppingList: PostgreSQLModel {
    var id: Int?
    var productName: String

    init(id: Int? = nil, productName: String) {
        print("Shopping List model was successfully created")
        self.id = id
        self.productName = productName
    }
}

extension ShoppingList: Migration { }
extension ShoppingList: Content { }
extension ShoppingList: Parameter { }

Go back to the configuration file and register a database migration for that model.

var migrations = MigrationConfig()
migrations.add(model: ShoppingList.self, database: .psql)
services.register(migrations)

The model is done and the database is registered, so it's time to create a controller which we are going to use for getting, creating and deleting data from our database. Here are the functions for that:

import Vapor

final class ShoppingListController {
    func index(_ req: Request) throws -> Future<[ShoppingList]> {
        return ShoppingList.query(on: req).all()
    }

    func create(_ req: Request) throws -> Future<ShoppingList> {
        return try req.content.decode(ShoppingList.self).flatMap { item in
            return item.create(on: req)
        }
    }

    func delete(_ req: Request) throws -> Future<HTTPStatus> {
        return try req.parameters.next(ShoppingList.self).flatMap { item in
            return item.delete(on: req)
        }.transform(to: .ok)
    }
}

We also need to configure the routes. Go to routes.swift, initialize a ShoppingListController and create routes for the functions we created in the controller:

// Inside the routes(_ router:) function:
let shoppingListController = ShoppingListController()
router.get("shoppingList", use: shoppingListController.index)
router.post("shoppingList", use: shoppingListController.create)
router.delete("shoppingList", ShoppingList.parameter, use: shoppingListController.delete)

Try to build the project. Once it builds successfully, you can test it locally, but first you would need to set up PostgreSQL locally. I recommend configuring Postgres on Heroku and pushing directly. First, add the Heroku add-on; I'm using the free hobby-dev database for this project:

heroku addons:create heroku-postgresql:hobby-dev

Then:

heroku config

Edit the Procfile so the web process serves in the production environment: web: Run serve --env production --port $PORT --hostname 0.0.0.0

Commit the changes and push to Heroku.

gu98:VaporSwiftTutorial robertbaran$ cd HerokuVaporSwiftTutorial/

gu98:HerokuVaporSwiftTutorial robertbaran$ clear

gu98:HerokuVaporSwiftTutorial robertbaran$ heroku addons:create heroku-postgresql:shoppingList-dev

Creating heroku-postgresql:shoppingList-dev on ⬢ heroku-vapor-swift-tutorial... !

▸ Couldn't find either the add-on service or the add-on plan of "heroku-postgresql:shoppingList-dev".

gu98:HerokuVaporSwiftTutorial robertbaran$ clear

gu98:HerokuVaporSwiftTutorial robertbaran$ heroku addons:create heroku-postgresql:hobby-dev

Creating heroku-postgresql:hobby-dev on ⬢ heroku-vapor-swift-tutorial... free

Database has been created and is available

! This database is empty. If upgrading, you can transfer

! data from another database with pg:copy

Created postgresql-tetrahedral-24833 as DATABASE_URL

Use heroku addons:docs heroku-postgresql to view documentation

gu98:HerokuVaporSwiftTutorial robertbaran$ heroku config

=== heroku-vapor-swift-tutorial Config Vars

DATABASE_URL: postgres://hpeyjgreawzvrs:f42deaf2a16932b18b3a693fff7a577711e2c10c19efe731a0d0165e12d5a6c4@ec2-23-21-188-236.compute-1.amazonaws.com:5432/dab0tadl6uco6a

gu98:HerokuVaporSwiftTutorial robertbaran$

Verify the Heroku build; you can use Postman or any other API development tool for that. Make POST/GET/DELETE requests and verify that they work. If you followed this tutorial, everything should work fine.
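If you prefer the command line, requests along these lines should work as well (the app URL is a placeholder for your own Heroku app, and the id in the DELETE call must be an existing item's id):

curl -X POST https://{your-heroku-app-name}.herokuapp.com/shoppingList -H "Content-Type: application/json" -d '{"productName": "Milk"}'

curl https://{your-heroku-app-name}.herokuapp.com/shoppingList

curl -X DELETE https://{your-heroku-app-name}.herokuapp.com/shoppingList/1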

Now let's begin work on the iOS app for creating a shopping list. Usually, I create a workspace and a new project for iOS, but you can do it however you like. There are a few options: you can have two separate projects for front end and back end, you can group everything in one project and use different targets, or you can have a workspace with two projects. In this tutorial, I will be working with a workspace and two separate projects.

Save your back-end project as a workspace from the Xcode File menu, then close the current project and open the workspace. Create a Single View Application for the front-end application, select a shared scheme for it and drag and drop your project file into the workspace. Now close Xcode and open your workspace again; all projects should appear properly.

Make sure that you have all the schemes there and that your projects appear properly.

Let's start working on the iOS app. This part will be quick, as we just need to add a button, a text field and a table view with the list. I'm not going to use MVVM (although it is recommended) and will use plain URLSession, as this tutorial is not about the iOS side. I just want to show you how quickly and easily the whole application can be created.

Go to Main.storyboard and create the UI for this app first. Add a table view, a button, and a text field to your view controller and set up constraints.

Create outlets for the button and text field, and connect the delegate and data source for the table view. Create an action for the button and an extension for the table view delegate and data source. Create a cell view; in your storyboard you can create a prototype cell and use it in the data source. You can also add some labels and a loading indicator to give the application a nicer look. This tutorial is not about iOS Swift, so I just give you an example of API usage here. You can find the whole project here, for both front end and back end, with the workspace created. To be able to run it, you first need to follow all the back-end and Heroku related changes, as the URL needs to point to your deployed Heroku app; alternatively, if you want to run it locally, you need to configure a PostgreSQL database on your machine and change the database configuration in the configure.swift file.
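As a taste of the API usage on the iOS side, here is a minimal sketch of fetching the shopping list with URLSession. The base URL, the ShoppingListItem struct and the function name are assumptions made for this example; replace the URL with your Heroku app or localhost:

import Foundation

// Client-side model mirroring the back end's ShoppingList (name assumed for this sketch)
struct ShoppingListItem: Codable {
    let id: Int?
    let productName: String
}

// Placeholder base URL; use your Heroku app URL or http://localhost:8080 when running locally
let baseURL = URL(string: "https://your-heroku-app-name.herokuapp.com")!

func fetchShoppingList(completion: @escaping ([ShoppingListItem]) -> Void) {
    let url = baseURL.appendingPathComponent("shoppingList")
    URLSession.shared.dataTask(with: url) { data, _, error in
        guard let data = data, error == nil else {
            completion([])
            return
        }
        // Decode the JSON array returned by the GET /shoppingList endpoint
        let items = (try? JSONDecoder().decode([ShoppingListItem].self, from: data)) ?? []
        completion(items)
    }.resume()
}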

Conclusion

Vapor is a great choice as long as you don't need a NoSQL database like Cassandra, as there is no easy way to configure that and make it work; for me, this is not an issue at all. This tutorial is simple, but it shows how easy writing a back end in Swift is. That matters for iOS developers, and also for every other developer, as in my opinion this language may become one of the most popular languages within a few years. I'm not a backend developer, just an enthusiast of server-side Swift, but I'm thinking about moving from my front-end role to become a full-stack developer.

written by
Grape up Expert
Software development
Data platforms

Be lazy – do not reinvent the cloud wheel from scratch

Keeping up with the newest technologies is demanding. A lot of companies tend to do that once in a while, and it is totally understandable. It is hard to follow and discover the perfect momentum to choose the cloud technology that will be valuable and cost-effective for years to come.

Cloud is not only a technology; it also determines how to design, build and maintain applications and systems. For someone who is currently engaged in the process of digital transformation, that is a huge deal – it's like upgrading your rickshaw to the newest Mustang. All of the development tasks seem so much easier after adopting cloud and DevOps practices.

A common mistake, when your processes and software tools are outdated, is to reinvent the new environment by yourself, especially when sophisticated platforms or infrastructures come into the picture. Becoming an expert in cloud technologies is a perfect example. It requires a lot of time and resources to master the best tools and to make them work together the way we want them to. That time should be spent instead on what drives the company's business and what the customer cares about the most: developing the product that will be running in the cloud.

At Grape Up, we follow a technology-agnostic approach. We choose tools and technologies tailored to every specific customer and project. While working with various companies on digital transformation, we helped several teams leverage cloud-native technologies and adopt a DevOps approach to deliver software faster, better, and safer. And despite the fact that every case was different, we have identified a visible pattern. Our team discovered a strong market demand for a cloud-native platform based on open source solutions. We also noticed that in most cases platform maintenance operations were outsourced. Again, building know-how is a hard process, so why not let someone with the right experience lead the way?

This is how Cloudboostr was born: a complete solution to run any workload in any cloud, based on available open source tools that are well known by the community and, most importantly, widely used and maintained. In the beginning, we created a kind of reference architecture to accelerate the digital transformation process. It was used as our internal boilerplate, so as not to reinvent the whole thing with every customer, but rather build on the experience from previous projects. Along the way, what used to be our template matured, and the time came to make it an external platform, available to other companies.

How does the Cloudboostr platform work?

Cloudboostr's technology stack incorporates a number of leading open source technologies proven to work well together. Cloudboostr's Cloud Engine is built with Kubernetes and Cloud Foundry, making application deployment easy; together the two solutions cover a wide spectrum of application use cases. Terraform, BOSH and Concourse are used to automate the platform's management and operations, and also to track who applied a particular change, when, and why. A built-in telemetry toolkit consisting of Prometheus, Grafana and the ELK stack allows you to monitor the whole platform, investigate any issues and recognize areas which should be improved.

The main abstraction layer of the platform is called Ops Control. It is a control panel for the whole platform. Here, all of the environments, as well as shared resources like management, monitoring or alerting services, are created and configured. Every environment can be created with Cloud Foundry, Kubernetes, or both. Shared resources help to monitor and maintain all of the environments and ensure proper workflows. From the operator's perspective, the main point of interaction is Concourse with all of the pipelines, and a CLI for more advanced commands, plus the telemetry dashboards with all sorts of charts and metrics, where all concerns are clearly visible.

Cloudboostr as a platform takes all of the third-party components under its wing. This means that any new platform release incorporates not only internal features or improvements but also upgrades of its elements, if available. From the platform user's perspective, this is convenient: there is only one upgrade to worry about, the platform itself.

From a production perspective, it is crucial to have your platform always up and running. Cloudboostr uses HA-mode by default and leverages the cloud’s multi-AZ capabilities. It also automatically backs up and restores individual components of the platform and their state and data, allowing you to easily recreate and restore a corrupted or failed environment.

It is also important that Cloudboostr comes pre-configured, so if you're starting your journey with the cloud, you can install the platform fast and be ready to deploy your applications quickly. The initial cost of getting the platform working, in terms of required know-how, is fairly low, and that's a huge benefit.

How can Cloudboostr be tested?

Obviously, a platform is a big deal and no one would like to buy a pig in a poke, especially when long-term services are in the picture. Grape Up enables customers to try out the platform for a while before making a final decision. It is also understandable that most companies need more time to get to know the technology behind the platform. That's why the trial period is run in a dojo style. For a few weeks, the Grape Up crew settles in at a customer location, installs the out-of-the-box product, teaches how to use it and also guides the migration of the first application to the platform. Support provided by experienced platform engineers helps to understand the full capabilities of the product and how to use it in particular scenarios.

After the first period, there is another time frame for unassisted tests, where the client's operations team can play with the platform, run various tests or migrate other applications to see how they will run in the new environment. During that time, Grape Up platform engineers are still available to help and guide.

What about costs?

All the mentioned technologies are open source, and anyone who would like to build this platform on their own can do that. So why should anyone pay for it? The answer is, no one needs to pay for a Cloudboostr license – it's totally free.

In order to use the Cloudboostr platform, though, you need a support service subscription plan. It's obvious that complex solutions, such as cloud platforms, need to be maintained by someone. The majority of the market outsources these services, which is especially convenient if you use software organized as a product. Again, it's not worth spending weeks or months gaining the experience yourself while your competitors are moving forward, knowing that there is already a fair share of the market that knows the technical details you're looking for. However, if a team feels that they already have all the knowledge needed to maintain the platform themselves, that's fine too - they can simply choose a minimal support plan.

Conclusion

The takeaway from this article is: if you want to go cloud, do not reinvent the cloud wheel from scratch. Do the things that you were meant to do – focus on building your applications and delivering true value to your customers, and let the experts help you run your applications in the cloud and maintain the cloud environment. Boost your cloud journey with Cloudboostr!

written by
Grape up Expert
Software development

5 Concourse CI tips: How to speed up your builds and pipeline development

With ever-growing IT projects, automation is nowadays a must-have. From building source code and testing to versioning and deploying, CI/CD tools were always the anonymous team member, who did the job no developer was eager to do. Today, we will take a look at some tips regarding one of the newest tools - Concourse CI. First, we will speed up our Concourse jobs, then we’ll ease the development of the new pipelines for our projects.

Aggregate your steps

By default, Concourse tasks in a job are executed one after another. This is perfectly fine for small Concourse jobs that last a minute or two. It also works well at the beginning of a project, as we just want to get the process running. But at some point, it would be nice to optimize our builds.

The simplest way to save time is to start using the aggregate keyword. It runs all the steps declared inside of it in parallel. This leads to time savings both in script execution and in the overhead that occurs when starting the next task.

Neat, so where can we use it? There are two main parts of a job where aggregation is useful:

1. Resource download and upload.
2. Test execution.

Get and put statements are ideal targets because download and upload of resources are usually completely independent. Integration tests, contract tests, dependency vulnerabilities tests, and alike are also likely candidates if they don’t interfere with one another. Project build tasks? Probably not, because those are usually sequential and we require their output to proceed.

How much time can aggregating save? Of course, it depends. Assuming we can't aggregate the steps that build and test our code, we still get the advantage of simultaneous upload and download of our resources, as well as less visible step-to-step overhead. We usually save up to two, maybe even three minutes. The largest saving we got was from over half an hour to below ten minutes. Most of the saved time came from running test-related tasks in parallel.

Use docker images with built-in tools

This improvement is trickier to implement but yields noticeable build time gains. Each task runs in a container, and the image for that container has a certain set of tools available. At some point in the project, no available image has the tool required. The first thing developers do is download that tool manually or install it using a package manager as part of the task execution. This means that the tool is fetched every time the task runs. On top of that, the console output is flooded with tool installation logs.

The solution is to prepare a custom container image that already has everything a task needs to complete. This requires some knowledge not directly related to Concourse but to Docker. With a short Dockerfile and a couple of terminal commands, we get an image with the tools we need:

1. Create a Dockerfile.
2. Inside the file, install or copy your tools using RUN or COPY commands.
3. Build the image using docker build.
4. Tag and push the image to the registry (example commands below).
5. Change the image_resource part in your Concourse task to use the new image.
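For steps 3 and 4, the commands could look roughly like this (the registry address, image name and tag are placeholders):

docker build -t my-registry.example.com/ci/tools-image:1.0 .

docker push my-registry.example.com/ci/tools-image:1.0

Note that the -t flag already tags the image during the build, so a separate docker tag step is only needed if you built the image under a different name.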

That’s it, no more waiting for tools to install each time! We could even create a pipeline to build and push the image for us.

Create pipelines from a template

Moving from time-saving measures to developer convenience tips, here's one for bigger projects. Those usually have a set of similar build pipelines whose only differences are credentials, service names, etc. - parameters that are not hardcoded in the pipeline script and are injected at execution time from a source like CredHub. This is typical for Cloud Foundry and Kubernetes web projects with microservices. With a little bit of creativity, we can get a Bash or Python script to generate those pipelines from a single template file.

First, we need a template file. Take one of your existing pipeline specifications and substitute parameter names with their pipeline-agnostic versions. Our script then needs to loop over a list of pipeline names, substitute the generic parameter names with the proper pipeline-related ones available in CredHub, and set the pipeline in Concourse with the fly CLI.
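A minimal Bash sketch of such a generation script could look like this (the Concourse target, pipeline names, SERVICE_NAME token and file paths are all assumptions for illustration):

# Generate and set one pipeline per service from a shared template (names are examples)
for pipeline in users orders payments; do
  # Replace the generic SERVICE_NAME token so e.g. ((SERVICE_NAME_db_password))
  # becomes ((users_db_password)), which CredHub resolves at execution time
  sed "s/SERVICE_NAME/${pipeline}/g" template.yml > "generated/${pipeline}.yml"
  fly -t my-concourse set-pipeline -p "${pipeline}" -c "generated/${pipeline}.yml" -n
done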

The second part of the equation is a Concourse job that watches for changes to the template file in a Git repository and starts the pipeline generation script. With this solution, we have to change only one file to get all pipelines updated, and on top of that, a commit to the pipeline repository is enough to trigger the update.

Log into a task container to debug issues

When debugging Concourse task failures, the main source of information is the console. A quick glance at the output is enough to solve most problems. Other issues may require a quick peek into the environment of the unsuccessful task. We can do that with the fly intercept command.

Fly intercept allows us to log into the container that executed a specific task in a specific job run. Inside, we can see the state of the container when the task finished and try to find the root of the failure. There may be an empty environment variable - we forgot to set the proper param in a yml file. The resource may have a different structure inside - we need to change the task script or the resource structure. When the work is done, don't forget to log out of the container. Oh, and don't wait too long! Those containers can be disposed of by Concourse at any time.
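For example, a call along these lines drops you into the container of a given step (the target, pipeline, job and step names below are placeholders):

fly -t my-concourse intercept -j my-pipeline/build-and-test -s run-integration-tests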

Use Visual Studio Code Concourse add-on

The last thing I want to talk about is the Concourse CI Pipeline Editor for Visual Studio Code. It's a plugin that offers suggestions, documentation popups, and error checking for Concourse yml files. If you use the pipeline template and generation task from the previous tip, any syntax error in your template will be discovered only when the update task regenerates the pipelines, because you won't run fly set-pipeline yourself. Fixing such an issue requires a new commit to the pipeline repository.

With the plugin, any unused resource or a typo in a property name will be detected immediately. The add-on will also help you write new pieces of automation code by suggesting keywords and showing available values for Concourse configuration entries. The only action required is to update the files.associations section in the settings; since we use separate directories for pipelines and tasks, we simply mapped each of those directories to the corresponding Concourse file type there.

Conclusion

And that’s it! We hope you have found at least one tip useful and will use it in your project. Aggregate is an easy one to implement, and it’s good to have a habit of aggregating steps from the start. Custom images and pipeline templates are beneficial in bigger projects where they help keep CI less clunky. Finally, fly intercept and the VSC add-on are just extra tools to save time during the pipeline development.

written by
Michał Bławat
Legacy modernization
Software development

5 Tips on how to deal with common problems when running large production environments

Working as a platform operator with cloud-native technologies, providing L2 technical support, and participating in CF installations gives a unique opportunity to observe how different companies implement new technologies in their work and how they deal with running large production environments. Among the various bad experiences, imperfect ideas, and reprehensible habits related to running and maintaining cloud infrastructures, those listed below can generate the most complicated problems.

Bad practices often occur when it comes to production CF infrastructures. However, these guidelines should help everyone who runs or uses any production-ready workloads.

Neglected capacity planning

Let’s start with this: you have to be aware that you will run out of resources eventually. Then you should plan how to scale up. If you run on-premises software, you should consider hardware and virtualization layer’s requirements. Proper sizing of the availability zones will always save you many problems.

On top of IaaS there is always a PaaS or a container orchestrator. The key to success here is to tune all the limits, quotas and other configurations (like application scaling rules) so that the microservices never exhaust the available resources, even under high load.

It's obvious that both hardware and virtualized capacity planning require a buffer. You need to be prepared for issues, maintenance and infrastructure changes. There is no single best configuration; it always depends on many factors, but it is always worth taking into consideration.

Capacity and resources have to be monitored. A good monitoring tool with decent alerting rules will help you predict possible problems and react quickly if anything bad happens to your infrastructure.

Poor or no CI/CD

If you want to maintain any piece of software, don't forget how valuable automation is. Many times people give up on CI/CD implementation because of a deadline or tasks deemed formally more important. In most cases, it doesn't end well.

It's hard to build, test and deploy software without automation. A manual process is highly exposed to the risk of human error. Apart from that, it is almost impossible to keep track of deployed software (versions, updates, hotfixes, security patches, etc.) in large production environments. Sometimes you have to maintain CF platforms hosting 1K+ applications. Consider how problematic the migration process would be if there were a business decision to switch to a different solution.

For operators maintaining the infrastructures, platforms, and services used by developers it’s critical to keep everything up to date, take care of security patches and configuration changes. It is impossible to handle this manually with minimal or zero downtime of the services. That is why automated pipelines are so important, and you should never give up on implementing them in the first place.

Poor or no backup/restore procedures

Backup/restore is another important process that people often push into the background. You may think that your applications are safe if your IaaS offers a highly available environment or the containers you run have an auto-healing function. This is not true. Any disaster can happen, and in order to recover quickly, you have to create well-defined backup and restore procedures that work. That's not all, as the procedures have to be tested periodically. You need to be sure that backup/restore works fine, since the process may depend on external services that might have changed or simply broken.

No periodic updates

All software has to be updated regularly in order to keep it secure. It is also much safer to perform minor updates, with little chance of failure or downtime, than to do "big jumps". Major updates introduce higher risk, and it is hard to catch up with versions, especially if there is no automation implemented.

You may see cloud infrastructures that were installed once and never upgraded, and that generates a lot of issues for platform operators (users can't see any difference). It is not a problem as long as everything works correctly. But after some time people may start escalating issues related to the versioning of the services, and by then it is too late to upgrade smoothly. It becomes a big spider's web of dependencies. It may take weeks to plan the upgrade process and months to execute it.

Flawed architecture

Defective architecture generates serious problems. Many times developers are not aware of the issue until it shows up in production. After that, it’s really hard to admit the architecture needs to be changed and people often try to get rid of the effect instead of fixing the cause of the problem.

Let's take a real-life example we often face. You may be receiving Prometheus alerts saying that the ELK stack is overloaded. After investigating the issue, it may turn out that the microservices are so verbose that they generate thousands of log messages per second. What if you raise the possible architecture problem, but nobody cares? As a result, you'll have to scale ELK. In such cases, it may waste hundreds of CPUs and terabytes of memory and storage. That makes somebody spend money just to store 90% useless data and maybe 10% valuable information. This is a simple way to put yourself in a situation without a way out.

Conclusion

Following these guidelines will definitely not be easy. Sometimes the people responsible for making decisions are just not aware of the consequences of their actions. The role of every technically skilled person in a project is to spread the knowledge and make people aware of what may happen if they ignore those basic rules. Don't step back if you encounter such practices in the future. Be an example for others and drive change - it's always worth trying.

written by
Tomasz Ćwięk
Data platforms
Manufacturing

What is an exit strategy and why do you need one?

Cloud is everywhere these days. There is no doubt about that. Everyone already did or is about to make an important decision - which cloud provider to choose?

For enterprise solutions, the first thought goes to the big players: Amazon, Microsoft, Google. They provide both private and public cloud solutions using their own set of APIs, command-line tools and of course pricing.

It is very easy to get in, but is it the same with getting out? All your services may already benefit from great tools and services shared by your provider. Awesome tools, but specific to the company. Does Amazon CLI work with Azure? Not really.

Another important factor is the scale of infrastructure you have to manage. The software can be optimized to some point, and that may remain a viable choice for a while. Resources can be hosted using a CDN to better manage bandwidth and storage costs. But what can be done if the provider increases its prices? How can this be mitigated?

The exit strategy

This is why you need an exit strategy: a plan to move all your resources safely, and possibly without any interruptions, from one warehouse to another, or even to migrate from one provider to another. The reasons for the change may vary: a pricing change, network problems, latency issues or even a change in the law. Any of these may push your company against the wall. It is fine when you are prepared, but how do you prepare?

It may seem like paranoia or catastrophic thinking, but in reality it is not so uncommon. An even worse case, a provider shutting down, has happened a lot lately. It is especially visible in the example of cloud storage startups. CloudMine, a startup that raised $15 million, filed for bankruptcy last year. A few years earlier the same thing happened to Nirvanix and Megacloud. Nobody expected that, and a lot of companies have been facing the same problem: how can we safely move all the data if everything can disappear in 2 weeks?

Does it mean AWS will go down tomorrow? Probably not, but would you like to bet on whether it will still be there in 10 years? A few years ago nobody had even heard about Alibaba Cloud, and now they have 19 data centers worldwide. The world is moving so fast that nobody can say what tomorrow brings.

How to mitigate the problem?

So we have established what the problem is. Now let's move on to the solutions. In the following paragraphs, I will briefly paint a picture of what an exit strategy consists of and what may help you move towards it.

One of the solutions is to use a platform like Cloud Foundry or Kubernetes which can enable you to run your services on any cloud. All the big names offer some kind of hosted Kubernetes solution: Amazon EKS, Microsoft AKS and Google GKE. Moving workloads from one Kubernetes deployment to another, even a privately hosted one, is easy.

This may not be enough, though. Sometimes you have more than containers deployed. The crucial part then will be to have your infrastructure as code. Consider whether the way you deploy your platform to IaaS is really transferable. Do you use provider-specific databases, storage or services? Maybe you should swap some of them for more universal solutions?

The next part is to make sure the services you write, deploy and configure are truly cloud-native. Are they really portable, in the sense that you can take the same source code or Docker image and run it on different infrastructure? Are all external services loosely coupled and bound to your application in a way that lets you easily exchange them? Are your microservices independent of the platform they run on?

Last, but not least: back up everything. Really, EVERYTHING. Not just all your data, but also the configuration of services, infrastructure, and platforms. If you can restore everything from the ground up to a working platform in 24 hours, you are better prepared than most of your competitors.

So why do I need one?

Avoiding provider lock-in may not be easy, especially when your company has just started to use AWS or Azure. You may not feel very comfortable creating an exit strategy for different providers or just don't know where to start. There are solutions, like Grape Up's Cloudboostr, that manage backups and multi-cloud interoperability out of the box. Using this kind of platform may save you a lot of pain.

An exit strategy gives you a lot of freedom. A new, small cloud provider comes to the market and offers a very competitive price? You can move all your services to their infrastructure, especially considering that small cloud providers can be more flexible and more keen to adapt to their clients' needs.

An exit plan gives safety, freedom, and security. Do not think of it as optional. The whole world is moving toward the multi-cloud and the hybrid cloud. According to Forbes, this is what will matter most in the cloud ecosystem in 2019. Do you want to stay behind? Or are you already prepared for the worst?

written by
Adam Kozłowski
Our experts

Grape Up consultants’ 101 – what does on-site work look like?

When I first came to Grape Up, almost two years ago, I could hardly say I did not know what to expect. The kind of work I was to be doing was known to me, as was the location. Still, I felt anxious about going over the pond and spending almost three months in a peaceful Chicago suburb. Another challenge – working with people with whom I seemingly had little in common, apart from the knowledge of the tech and language. I was kind of obsessed with those tiny details that set the American and European cultures apart – so similar, yet so different. But let me get back to where the story started.

In the first half of 2017, I happened to be looking for a different job, just when Grape Up tried to reach me. An inconspicuous e-mail, detailing a work opportunity in the United States, for a period of time, possibly in an Illinois or Texas location, appeared somehow in my inbox. I decided to bite the bullet, knowing myself well as a not-so-adventurous person – apparently, the more I thought about the implications of going abroad for a fairly long time, the more I would hesitate. So, the decision was a quick one – still, I wish I had known back then the things I know now.

Because of that, to clarify matters for anybody pondering working with Grape Up as a contract programmer, here is a little guide, which I modeled after a typical FAQ. This should be a good start – if any questions arise, do not hesitate to contact us!

Can you tell me more about the job you got?

I have been working for a big insurance company, perhaps the second largest of its kind in the US by revenue. By all means, this could be likened to working in a Polish branch of a corporation, but the work culture is quite different from what you could get in the country of my residence.

Was your job interview anyhow different from the usual?

I would say it was as good as any interview I have been to, save for one thing. Since I would be working in pairs (a very common practice in teams adhering to extreme programming), I had a short call with a technical person from the US company. I was presented with a problem, and we then tried to alternately write tests and implementations together – the purpose was to get a quick outlook on how I would find myself in the new environment, and also for me to get the gist of what pair coding looks like. This might seem unusual at the beginning – try not to let it intimidate you!

Was getting to the USA hard?

I found the process of obtaining the necessary visa daunting. First, the ancient form on the US Embassy’s website (watch out for mistakes), then waiting for the appointment and finally getting into the queue at the Consulate. I cannot say that I did not enjoy getting that sticker in my passport – it made me proud! Apart from that, the flight itself and resulting jet lag can be physically challenging for some (myself included). Still, once that initial fatigue subsided, America was all mine to enjoy.

What did your usual day look like?

For the morning routine, we would usually drive or walk to work (if we were fortunate enough to live downtown), attend a group stand-up at 8 AM, have a team stand-up shortly afterward, and then have a complimentary breakfast provided by our company. After working for a bit, we would head for lunch at 12 (lots of options, regardless of the location). Informal breaks also happen, more or less frequently – ping-pong and Mario Kart are among the popular leisure activities in the US workplace. The workday would end at 5 PM, with some exceptions, such as “summer hours” on Fridays, in which case we could go home a little earlier. Still, all of this varies from company to company, and even from location to location.

What about the projects?

I happened to work on and off with various product teams. The projects spanned internal systems, such as data buses, as well as customer-facing applications, like video chatrooms and remote consulting services. Worth noting is the fact that I always felt like an important part of the team – little distinction was made between contractors and full-time employees.

How did you find the people there?

Wherever you are headed, you may experience some culture shock. In America, people may seem unusually nice; in other places, on the contrary, somewhat uptight. Either way, unhelpful and rude people are a surprisingly rare occurrence. Some of my colleagues were externals like me, and we had little problem finding common ground – regardless, cultural discrepancies are a fun topic to chit-chat about.

What’s there to do in the time after?

Speaking of Chicago, depending on how close you are to the city center, there's a lot to do – or even more. Museums abound, as do clubs. Try eating something different every day! In the suburbs, some local Asian restaurants are well worth a visit. Apart from that, you may want to save some of your planned leave for visiting other cities or sights after your assignment is completed.

Is food in the US really as unhealthy as it gets?

It depends on what you are having, of course. There are plenty of decently healthy options to choose from, but finding them requires a little effort. It's easy to end up eating junk food every day because of the convenience, so watch your carbs while you are there.

Was there anything you felt uncomfortable with?

The 9-hour working day with an hour-long lunch break takes some time to get used to. American cities are vast, and half-hour drives to the nearest Wal-Mart are not uncommon when living in the suburbs. Downtown, on the other hand, while offering a lot of services within walking distance, is very loud and lively – I certainly did not expect to be woken up by car sirens at 2:00 in the morning on weekends!

To sum it up, how was your experience?

Overall, quite good – I really appreciate the opportunities that on-site work provided me with, like working with people from different backgrounds and getting a taste of what it's like to live in a different country.

Would you go back?

Absolutely – as a matter of fact, I already did and came back. It has only been getting better with every new place I traveled to. I sure hope that impression lasts long enough.

written by
Grape up Expert
Software development
Our experts

What to know before introducing pair programming at your company?

Pair programming has become a popular way of developing new products and encouraging team members to share knowledge and grow together. Many of the world's top enterprises have adopted this technique – some with success and some with less satisfying results.

Before a company embraces pair programming, its leaders and developers should acquire some basic knowledge about the approach. As a company that not only uses this practice daily but also introduces pair programming in our customers' engineering teams, we've answered the most commonly asked questions about the technique: its pros and cons, how to prepare a workstation for developers working in pairs, and whether pair programming is good for everyone.

What is pair programming and what should it look like?

You can find various definitions, but most of them agree that pair programming is an agile software development technique in which two developers work together on the same block of code, using one workstation. In each pair, one programmer becomes the “Driver” and the other acts as the “Navigator”. While they work together, the Driver operates the computer and writes the code, whereas the Navigator reviews the written code, follows the colleague's work, and thinks through the problems faced during implementation. When using the pair programming technique, programmers should discuss ongoing work, search for effective solutions, and support each other. Importantly, the roles in a pair are not assigned permanently and should be swapped during the workday.

How to prepare a workstation?

The most important thing when designing a workstation is making it comfortable for both programmers. A desk for a pair should be noticeably larger than a desk for an employee working alone – when two people work together for eight hours a day, both need their own dedicated space to feel comfortable. A workstation for a coding duo requires at least two monitors displaying the same content, and a keyboard and mouse for each developer.

Teams that work remotely have proved that pair programming can be done effectively even when the collaborating developers are at a distance. It's important to equip these developers with gear that enables productive communication – high-quality headphones are a must-have.

Here is a list of useful tools and resources for pair programming:

  • Cloud IDEs: AWS Cloud9, Codenvy,
  • Communication and screen sharing tools: Skype, Slack, Zoom, TeamViewer, Stride,
  • Live Code Share Extensions: Atom Teletype/VSCode Live Share,
  • GIT Scripts: pivotal-legacy/git_scripts.

Advantages and disadvantages of pair programming

While pair programming delivers real value to companies that have implemented it, the leaders responsible for adopting the practice in their teams have to be aware of its downsides. Here are some of the most valuable pros of pair programming:

  • Higher quality of the designed solutions - more thorough and more meaningful development of given tasks.
  • Different views and perspectives on the same problem.
  • Reduced risk of failure or making a mistake.
  • Time and cost savings - software is delivered faster, which decreases the cost of development.
  • Agile and productive knowledge transfer within projects.
  • Breaking down silos in projects - avoiding situations where only one person owns key knowledge.
  • Building stronger relationships among team members.

And here are some cons every company has to take into account:

  • Disruption of individual comfort zones and independent working styles.
  • Loss of a personalized working environment - getting used to a non-standard way of doing the job.
  • Frequent discomfort with this unconventional way of programming among developers used to working on their own.
  • A more demanding and exhausting way of working.

All of these disadvantages may result in growing discomfort and drops in productivity, which sabotage the whole idea of pair programming and the potential wins it could provide.

Is every developer born for pair programming?

Pair programming may not be as easy as it is often presented. The technique requires developers to be communicative, conscientious, and focused on the subject matter. For many coding professionals, the hardest part of programming in pairs is the need to adjust to the other person - their way of working, their pace, and their individual habits.

People who value working in their own way and at their own comfort, who need focus and a quiet environment, or who simply prefer to avoid intensive communication, may find pair programming too demanding.

My experience with pair programming

My experience with pair programming started two years ago. The theory seemed reasonable, but I was a little skeptical and, to be honest, I didn't believe in the promised advantages. To me, having two developers work on the same problem at the same time seemed like a waste of time and resources.

At first, it was hard. I had to change my entire working routine and adjust to the colleague I was paired with. Actually, we both had to modify our working styles, and we all know how difficult it is to change habits.

Adjusting to the new technique affected my time after work as well. I was more tired, but at the same time I felt that we had done a lot of good-quality work. Step by step, it became easier as we got familiar with the practice. What was really encouraging was the result of our work – and the amount of new knowledge was insane.

Working at that level of focus is demanding and costs a lot of energy. Taking a break for 2 or 3 minutes to have some small talk about topics unrelated to the problems you're currently trying to solve may result in a more productive attitude during the day. It's also worth noting that direct collaboration with one of my colleagues gave me the opportunity to get to know that person better – and having a good friend at work also improved my comfort.

After a few weeks, I noticed all those advantages of pair programming that previously weren't so obvious to me. We improved the quality of our work, and our solutions were well thought out. As a result, we minimized the number of mistakes, which significantly improved our outcomes. It became clear to me that pair programming is not just a cool theory but an effective way of developing new solutions and a useful tool for learning new things. While it's a challenging technique, and maybe not for everyone, it's certainly worth a try. Anyone can try it on their own and, in case of any trouble, go back to the previous way of doing the job.

To sum up all my experiences with pair programming, I have to admit that this technique definitely works and, except for the simplest tasks, it's worth using every day.

written by
Łukasz Fiedoruk
Software development

Outsourcing of authorization and authentication

Nowadays, we can observe many repetitive patterns in existing implementations – from microservices to MVC layers and REST APIs. Authorization and authentication are fundamental parts of application security. The most popular pattern is the login and password, frequently with the addition of two-factor authentication via SMS code, e-mail, or a software token.

But the problem is more complex than it seems at first. There is a chance that every year the current “best practices” become obsolete as they fail to provide a sufficient level of security. Sometimes a critical bug, like Heartbleed, creates an earthquake in the security world and forces everyone to immediately review and update their services.

To top it all off, security is unfortunately not something that is obvious to non-technical clients – from the business perspective, it does not directly “make money”. As a result, some companies try to reduce the resources spent on it. Management is more likely to spend a month on improving the appearance of a landing page than a week on improving password encryption.

The complexity and the lack of a direct business incentive to invest in the problem make it a great candidate to outsource. Those are the reasons that led our customers to do so.

When to delegate?

In this section, I would like to list the cases in which delegation can be extremely profitable.

Providers have everything we need. Almost.

Major identity providers have addressed a wide range of their clients' problems. Configuring their service will most likely take less time than writing everything from scratch on your own. This makes even more sense for startup-like projects, where the “sell fast or die” pressure makes having a working prototype as soon as possible crucial.

We don’t know the requirements yet.

Another problem that frequently appears in the initial phase is postponing the creation of the security model to the distant future. Writing your own temporary solution from scratch would take days, and if key decisions have not been made yet, it would end up being rewritten again and again. Integration with an external provider in its default configuration takes hours, and there is usually a free trial period (and it doesn't always have to be expensive – Auth0 starts at $13/month for databases with fewer than 1k users). In an optimistic case, migration from the external provider to your own solution may be as simple as changing a couple of entries in the configuration file.
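
To make that last point more concrete, here is a minimal sketch, in TypeScript, of what keeping the provider-specific details in configuration can look like. The file name, environment variables, and URLs are illustrative assumptions, not part of any particular product; the only real dependency is the standard OIDC discovery endpoint, which both Auth0 and self-hosted providers expose, so swapping providers means editing the config values rather than the code.

```typescript
// auth-config.ts – illustrative sketch; names, env vars, and URLs are assumptions
interface AuthConfig {
  issuer: string;   // e.g. an Auth0 tenant URL or a self-hosted identity provider
  clientId: string;
  audience: string;
}

// Provider-specific values live only in configuration, so an "exit" from the
// provider is (optimistically) a matter of changing these entries.
export const authConfig: AuthConfig = {
  issuer: process.env.AUTH_ISSUER ?? "https://example-tenant.eu.auth0.com",
  clientId: process.env.AUTH_CLIENT_ID ?? "my-client-id",
  audience: process.env.AUTH_AUDIENCE ?? "https://api.example.com",
};

// Standard OIDC discovery: every compliant provider serves this document,
// which is what keeps the application code provider-agnostic.
export async function discoverEndpoints(config: AuthConfig) {
  const response = await fetch(`${config.issuer}/.well-known/openid-configuration`);
  if (!response.ok) {
    throw new Error(`OIDC discovery failed with status ${response.status}`);
  }
  // Contains authorization_endpoint, token_endpoint, jwks_uri, and so on.
  return response.json();
}
```

In practice, a real migration also has to deal with user data and token validation details, but the less provider-specific code you write, the closer you get to that optimistic case.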

Relatively high costs of supporting authorization.

Perfect software should not need any maintenance. Unfortunately, reality isn't that perfect, nor is any software we write. If our services contain their own authorization mechanism, it may require changes even if it was coded properly a long time ago. Improving security (for instance, changing password hashing from MD5 to a Blowfish-based scheme such as bcrypt) requires a couple of days of a developer's work. It also engages other departments: the UX team, because users have to be notified in a non-invasive way, and the test team, because someone has to check that the new path works and that no existing features have been affected.
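
To give a sense of what such a change involves on the code side, here is a minimal sketch of the usual “rehash on login” migration: a legacy MD5 hash is verified one last time and transparently upgraded to bcrypt the next time the user signs in. The in-memory user store and the bcrypt npm dependency are assumptions made for the example, not a description of any specific system.

```typescript
// rehash-on-login.ts – illustrative sketch of a password-hash migration, not production code
import { createHash } from "node:crypto";
import bcrypt from "bcrypt"; // assumed dependency: npm install bcrypt

interface UserRecord {
  username: string;
  passwordHash: string;              // legacy MD5 hex digest or a bcrypt hash
  hashAlgorithm: "md5" | "bcrypt";
}

// A tiny in-memory "user store" standing in for the real persistence layer.
const users = new Map<string, UserRecord>();

const md5 = (value: string) => createHash("md5").update(value).digest("hex");

export async function login(username: string, password: string): Promise<boolean> {
  const user = users.get(username);
  if (!user) return false;

  if (user.hashAlgorithm === "bcrypt") {
    return bcrypt.compare(password, user.passwordHash);
  }

  // Legacy path: verify the old MD5 hash one last time...
  if (md5(password) !== user.passwordHash) return false;

  // ...then transparently upgrade it, so weak hashes disappear over time.
  user.passwordHash = await bcrypt.hash(password, 12);
  user.hashAlgorithm = "bcrypt";
  users.set(username, user);
  return true;
}
```

Even this simple flow shows why the change is never just a developer's afternoon: the login path, the user schema, and the test suite all have to move together.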

If your software is rarely changed and identity management is the most common reason to change it, then rewriting the authorization mechanism to use an external provider makes a lot of sense.

What should you not be afraid of?

There are cases when such delegation provides significant value, but it also comes with an additional cost.

Social networks integration

Most identity providers offer Facebook integration. It is slightly different from other social networks because Facebook's bureaucracy makes it mandatory to fill in an additional form – but the same applies when you integrate Facebook SSO yourself. Integration with other social networks is easier in both approaches.

Data import

It is possible to import existing user data into an external provider. Unfortunately, in most cases it won't be transparent from the users' perspective. For example, Auth0 doesn't import passwords, even if for some reason we have them in plain text (of course, nobody does). It forces users to set their password at the next login via the password reset mechanism.

Vendor lock-in

I have mentioned before that, in an optimistic case, the migration from an external provider to your own solution may be as simple as changing configuration files. The problem is that we don't live in a perfect world. A lot of providers offer libraries that speed up integration but make migration to a different vendor much harder. If your software runs on a local machine or a mobile device, chances are that a certain group of users will be reluctant to download the latest update from the store. You have to decide whether you prefer to change the authorization and lose them, or to provide both solutions simultaneously for some period of time.

There is a story of a team that built their own authorization service once their provider wasn't able to handle as many requests as they needed. It shows that despite a long and successful relationship with the provider, they were able to break it off and move to a completely new solution.

When not to delegate?

There are also cases when the integration is not possible or creates more problems than it actually solves.

Law and other regulations

There are governmental and organizational regulations that control how data should flow and where it can be stored. Auth0 allows you to store data somewhere in the EU, but you cannot control whether it is stored in Spain, Germany, or Poland. With GDPR in mind, storing data in the EU may have consequences when you create a project for the United States. Also, government projects generally require all (or at least confidential) data to stay inside the country.

Costs

Most providers work in the SaaS model. This means that the monthly fee depends on the number of users and the features you use. In some rare cases, this may not scale very effectively, and the invoice from the provider may get really high. In such cases, you should consider preparing your own local solution to the problem – storing user data in your own data centers may be cheaper.

Communication with other servers

Delegation requires communication with the provider. By default, providers require you to redirect users to an external page or to use an iframe element on your site. That is impossible in an application which should, even temporarily, work without an internet connection. This sometimes forces teams to create mock services for end-to-end automated tests, as sketched below.
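
For illustration, a mock of the provider's token endpoint for such end-to-end tests can be as small as the sketch below. The port, path, and response fields mirror a typical OAuth 2.0 token response but are assumptions for the example – they do not describe any particular provider's actual API.

```typescript
// mock-token-endpoint.ts – a throwaway stub for e2e tests, not a real identity provider
import { createServer, type Server } from "node:http";

// A canned OAuth 2.0-style token response; tests only need the shape, not real security.
const fakeTokenResponse = {
  access_token: "test-access-token",
  token_type: "Bearer",
  expires_in: 3600,
};

export function startMockIdp(port = 4444): Server {
  const server = createServer((req, res) => {
    if (req.method === "POST" && req.url === "/oauth/token") {
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify(fakeTokenResponse));
      return;
    }
    res.writeHead(404);
    res.end();
  });
  server.listen(port);
  return server; // the test suite calls server.close() in its teardown
}
```

In the test configuration, the application's issuer or token URL would then simply point at http://localhost:4444 instead of the real provider.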

Integration with custom solutions

  • Integration with software that works in an internal network (for instance, LDAP or Active Directory) is harder – ask your operations team whether it is possible.
  • Authorization with client-side software – the most popular solution for digital signatures in Poland requires the installation of a Java-applet-based plugin. Integrating with it, even without an identity provider in the picture, poses many obstacles.

To delegate or not to delegate?

If you are aiming for a PoC or MVP type of project, just integrate a trial version of an existing authorization and authentication service provider – as long as it's not the core feature of your product. Start fast, implement the most important features, and then, when you feel that the solution is too limited, switch to your own implementation or maybe to a different provider.

written by
Grape up Expert
Our experts

How to overcome main challenges when collaborating with teams from different countries?

In one of his latest articles, Filip Ryba describes the specifics of working on on-site projects in the USA. Let's compare his experiences with similar projects run in other parts of the world.

Let's take the US first.

What can we learn from U.S. work culture?

Teams in the USA are very customer-oriented, and everyone knows their users. Ask a teammate about some functionality in the product: in Poland you will usually get a technical answer, something along the lines of “Oh, it's because we need to get additional data from...”, and you have to ask follow-up questions to get to the actual functionality needed by customers – in many cases, you won't get that answer at all. In the USA you will most likely hear “Oh, it's because our customers need to know if...”, which puts you in the right business context from the start. I find this really positive, since instead of focusing only on the technical parts you focus more on the actual usability of the feature for customers.

The teams that I worked with always had a flat structure. Sure, there was a PM and a PO, but they were sitting with their team at the same desks. It's another factor that makes communication super easy and effective.

We are actually culturally closer to the United States than you might think. We have Hollywood movies in our cinemas and on TV, we follow news involving the US, and Netflix is popular worldwide. It gives us something to talk about to break the ice.

Obviously, everyone there speaks English, although the level differs from person to person. It is pretty common, especially in IT, for teams to be multicultural, and you will have teammates from India, China, or Mexico, to name a few. This might be challenging at first, but you will catch up pretty quickly. Being constantly exposed to a foreign language is the best way to learn.

Another aspect of American culture that contrasts strongly with ours is that when you share something, you will always get positive feedback. You will hear “Oh, that's great!” or “Oh, that's so cool!”, and it does not matter whether they really think that. What matters is that it encourages you to share more – and it works.

Differences between American and German working culture

How does this compare to our neighbors in Germany? While teams there also have a flat structure, there is a more rigorous division of responsibilities between teams: there are parts that belong to the architecture team, the testing team, the CI/CD teams, and so on. This inevitably leads to more internal dependencies and generates more meetings. The Germans are very polite and open to discussing everything. Even if they do not agree with you, they will at least hear you out. And even when a decision comes from the top, they will still seek consensus within the team.

When collaborating with German companies, your PO/PM will be more technical than their US counterpart. Firstly, this helps to translate business goals into engineering tasks. Secondly, goals that do not bring immediate business value, like code refactoring or improvements to the development environment, are better understood and more easily accepted. All of that makes projects more technical, with teams having less influence on the business side of the product.

Collaborating with Italian teams

The US might be 7,000 km from Poland, but we are culturally closer to Americans than to, let's say, Italians. Add to that the fact that not many Italians speak English, and communication can be a bit challenging. The project I worked on was managed in a more freestyle manner: when seeking information about the project, all you have to do is walk around the office and ask until you find someone who has that knowledge. Of course, sometimes you also have to find someone who can help you as a translator.

For the same reasons, communication with an end user is next to impossible. Trying to change the way things are done is even more challenging. Let's say you finally convinced your PM to agree to change things a bit. You go home happy that things are finally moving in the right direction, only to discover the next day that there was another meeting with some other PMs and everything is going back to how it was. Sometimes it takes time to work out a way of running projects that suits both sides – once you get through the early struggles, the work becomes more effective.

Collaborating with people from other countries may sometimes be demanding, but when you invest some time in better understanding their work culture, way of communicating, and daily routines, it becomes easier. Despite all the differences and challenges at the beginning, on-site projects can be a valuable lesson and a helpful experience. There is one more aspect of working abroad that I really like: you can, of course, visit the same countries by yourself on vacation, but as a tourist you never have to figure out the best way to commute, where to go shopping, where to eat, or what to do on a Friday evening. Working abroad makes you do all of that, which lets you experience what it's like to live there without actually moving there.

written by
Grape up Expert