Thinking out loud

Where we share the insights, questions, and observations that shape our approach.

Our experts

Key takeaways from XaaS Evolution Europe: IaaS, PaaS, SaaS

At Grape Up, we attend multiple events all over the globe, particularly those that focus on cloud-native technologies, gather thousands of attendees, and feature hundreds of stunning keynote presentations.
But keeping in mind that beauty comes in all sizes, we also exhibit at smaller yet just as important events, such as XaaS Evolution Europe, which took place on November 26-27 at the nhow Hotel Berlin.
With only 100 exclusive attendees and a few exhibiting vendors, XaaS stresses the importance of delivering a great and professional networking experience through its unique “Challenge your Peers”, “Icebreaker” and “Round Table” sessions.

Summary

It was great to see that we are in the right place at the right time with our business model: digital transformation, moving workloads to the cloud, and market disruption are still hot topics among decision makers across Europe. We had great conversations which we hope will convert to business opportunities and long-lasting partnerships!

About the event

XaaS Evolution Europe 2018 is the only all-European event devoted to XaaS use cases and business cases. IT decision-makers from large and medium-sized companies across all major industries meet to present their case studies and discuss major challenges, strategies, integration approaches, and technological solutions related to the practical implementation of cloud services. Among those great presentations, our very own Roman Swoszowski presented Cloudboostr as a go-to platform for companies that want to avoid vendor lock-in and use open source while at the same time having enterprise support and a professional enablement partner.

written by
Marcin Wiśniewski
Software development

Secure Docker images

Containers are great. They allow you to move faster with your development team, deploy in a reliable and consistent way, and scale up to the sky. With an orchestrator, it gets even better: it helps you grow faster, use more containers, accelerate growth even more, and use even more containers. Then, at some point, you may wonder: how can I be sure that the container I have just started or pulled is the correct one? What if someone injected some malicious code into it and I did not notice? How can I be sure this image is secure? There has to be some tool that gives us such confidence… and there is one!

The update framework

The key element that helps and in fact solves many of those concerns is The Update Framework (TUF) that describes the update system as “secure” if:

  •  “it knows about the latest available updates in a timely manner
  •  any files it downloads are the correct files, and,
  •  no harm results from checking or downloading files.”

(source: https://theupdateframework.github.io/security.html)

There are four principles defined by the framework that make a successful attack on such an update system almost impossible.

The first principle is responsibility separation. In other words, there are a few different roles defined (used by, e.g., the user or the server) that are able to perform different actions and use different keys for that purpose.

The next one is multi-signature trust, which simply says that a fixed number of signatures has to come together to perform certain actions, e.g. two developers using their keys to agree that a specific package is valid.
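To make the principle concrete, here is a minimal Python sketch of threshold signing. The key names, the data structures, and the threshold are purely illustrative; real TUF uses cryptographic signatures, not set membership:

```python
# Illustrative sketch of TUF-style multi-signature trust (threshold signing).
# Not the real TUF implementation: key names and data shapes are assumptions.

def is_action_trusted(signatures, authorized_keys, threshold):
    """Accept an action only if at least `threshold` distinct
    authorized keys have signed it."""
    valid = {sig for sig in signatures if sig in authorized_keys}
    return len(valid) >= threshold

authorized = {"dev-alice-key", "dev-bob-key", "dev-carol-key"}

# Two developers agree that a specific package is valid:
print(is_action_trusted({"dev-alice-key", "dev-bob-key"}, authorized, 2))  # True
# One signature alone does not reach the threshold:
print(is_action_trusted({"dev-alice-key"}, authorized, 2))                 # False
```

The point of the sketch is that no single key compromise is enough: an attacker would have to steal several keys held by different parties before a malicious package could be accepted.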

The third principle is explicit and implicit revocation. Explicit means that some parties come together and revoke another key, whereas implicit revocation happens when, e.g., after some time, the repository automatically revokes signing keys.

The last principle is minimizing individual key and role risk. As the name says, the goal is to minimize the expected damage, which can be defined by the probability of an event happening and its impact. So if there is a root role with a high impact on the system, the key it uses is kept offline. The idea of TUF is to create and manage a set of metadata, signed by the corresponding roles, which provides general information about the valid state of the repository at a specified time.

Notary

The next question is: how does Docker use this update framework, and what does it mean to you and me? First of all, Docker already uses it in Content Trust, whose definition seems to answer our first question about image correctness. As per the documentation:
“Content trust provides the ability to use digital signatures for data sent to and received from remote Docker registries. These signatures allow client-side verification of the integrity and publisher of specific image tags.”

(source: https://docs.docker.com/engine/security/trust/content_trust)

To be more precise, Content Trust does not use TUF directly. Instead, it uses Notary, a tool created by Docker which is an opinionated implementation of TUF. It keeps the TUF principles, so there are five roles with corresponding keys, the same as in TUF:

  •  root role – it uses the most important key, which signs the root metadata specifying all other roles, so it is strongly advised to keep it securely offline,
  •  snapshot role – this role signs the snapshot metadata, which contains the filenames, sizes and hashes of the other (root, targets and delegation) metadata files and thus assures the user of their integrity. It can be held by the owner, an admin, or the Notary service itself,
  •  timestamp role – using the timestamp key, Notary signs metadata that guarantees the freshness of the trusted collection thanks to a short expiration time. For that reason it is kept by the Notary service, so the metadata can be regenerated automatically when it becomes outdated,
  •  targets role – it uses the targets key to sign the targets metadata file, which lists the files in the collection (filenames, sizes and corresponding hashes) and should be used to verify the integrity of the files inside the collection. The targets key is also used to delegate trust to other peers via delegation roles,
  •  delegation roles – these are pretty similar to the targets role, but instead of covering the whole content of the repository, their keys ensure the integrity of some (or sometimes all) of the actual content. They can also be used to delegate trust to other collaborators via lower-level delegation roles.

All this metadata can be pulled from or pushed to the Notary service. The Notary service has two components – the server and the signer. The server is responsible for storing the metadata (the files generated by the TUF framework underneath) for trusted collections in an associated database, generating the timestamp metadata and, most importantly, validating any uploaded metadata. The Notary signer stores private keys so that they are not kept on the Notary server, and signs metadata at the server's request. In addition, there is the Notary CLI, which helps you manage trusted collections and supports Content Trust with additional functionality. The basic interaction between the client, server and signer can be seen when the client wants to upload new metadata. After authentication, the metadata is validated by the server; if required, the server generates the timestamp metadata (and sometimes the snapshot metadata, depending on what has changed) and sends it to the Notary signer for signing. After that, the server stores the client metadata together with the timestamp and snapshot metadata, which ensures that the client's files are valid and in their most recent versions.
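The integrity check that the targets metadata enables can be sketched in a few lines of Python. The metadata layout below is deliberately simplified for illustration; it is not Notary's exact on-the-wire format:

```python
# Sketch of the check the targets metadata enables: compare a file's digest
# and size against the values recorded in signed metadata. A simplified
# metadata shape, not Notary's actual format.
import hashlib

def verify_target(content: bytes, target_meta: dict) -> bool:
    """Return True only if the content matches the recorded size and hash."""
    digest = hashlib.sha256(content).hexdigest()
    return (len(content) == target_meta["length"]
            and digest == target_meta["hashes"]["sha256"])

image_layer = b"example layer bytes"
meta = {
    "length": len(image_layer),
    "hashes": {"sha256": hashlib.sha256(image_layer).hexdigest()},
}

print(verify_target(image_layer, meta))        # True
print(verify_target(b"tampered bytes", meta))  # False
```

Because the metadata itself is signed by the targets key (and its freshness vouched for by the timestamp role), a client that runs a check like this can detect both tampered content and stale metadata.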

What's next?

It sounds like a complete solution, and it surely is one, but only for one particular area of threats. There are a lot of other threats that just sit there and wait to impact your container images. Along with those problems come solutions:

  •  Static vulnerability analyzers, which scan container images for known vulnerabilities (mostly based on CVE databases),
  •  Container firewalls and activity monitors, designed to detect inappropriate behavior while your application is running,
  •  Plenty of “best practices for secure Docker images” – rules that you should follow when creating a Docker image,
  •  and many more…
written by
Michał Różycki
Automotive
Software development

How to avoid cloud-washed software?

The term "cloud-washed" was coined in the exact same way as the phrase "green-washed", which was used to describe products that don’t have anything to do with being "eco" but were sold in "green" packaging. The same goes for the cloud. There is a trend in the software development community where many companies take full advantage of the cloud computing hype and rebrand their products as "cloud", while in reality their solutions don’t have much in common with the cloud. But first, let’s define what exactly "cloud washing" is:

 Companies stick a "cloud" label on their products in order to stay competitive in the market. The only thing they don’t do is explain whether the product is cloud-native, cloud-based or something else.

Don’t get lost in the clouds

To help you easily expose a fake cloud solution, I have prepared a list of questions to ask cloud solution vendors to make sure you are buying a product with all the benefits of cloud-native software.

1. Is the platform hosted in the cloud or does it rely on an on-site server?

If the platform is hosted purely in the cloud with a proper backup strategy, availability zones and replication in place, you will not be affected if a server fails and should not worry much about data loss. But if it’s an on-site server, expect an outage when the server fails, and make sure there are backups – preferably in a different data center.

2. Is the service multi-tenant or single-tenant?

In the case of single-tenant services, support may not be as dedicated as you would expect it to be. Some issues might take weeks to resolve. Multi-tenant and cloud-based services are a much better option, but only when you go cloud-native can you expect the best support, scalability and quick fixes. Also, cloud-native platforms provide a better level of system-level security for all deployed services.

3. How often are updates and new features released? Do they require downtime?

The answer is simple: if updates require downtime, it’s not cloud-native at all. If new features are available immediately, the software is cloud-native. Also, critical updates are deployed to all customers, and there is no disruption in service once they are deployed, because rolling update strategies are already in place. In addition, when a failure happens, a rollback to the previous version can be done immediately, or even automatically.

4. Who is responsible for security? Do you include security certifications or do I need to pay extra?

Remember one rule: cloud-native software is secure by definition. Cloud platforms are more secure "by design" because the whole system layer is abstracted away from the developers and is therefore easier to update with the latest security patches.

written by
Grape Up Expert
Software development

Continuous deployment of iOS apps with Fastlane

Automation helps with all the mundane and error-prone tasks that developers need to do on a daily basis. It also saves a lot of time. A very popular tool for automating iOS beta deployment is Fastlane. The tool automates entire iOS CI/CD pipelines and allows us to keep iOS infrastructure as code. Today I will share my observations and tips regarding setting up fastlane for an iOS project.

Setting up Jenkins

Jenkins is an open-source automation server, but you can use fastlane on your local machine too. However, a separate build server can certainly help you create a more streamlined process.

The Jenkins installation process is pretty straightforward, so I will not be going into details here. But there are some quirks that we should take into consideration.

To alleviate the problem with a detached HEAD, we should prefix the Branch Specifier with "origin", e.g. "origin/master", and choose the option "Check out to specific local branch" with the branch name set to **. For more details, see Jenkins' help.

All the tool commands should be put in the Execute shell phase.

Setting up Fastlane

Fastlane is used to simplify the deployment of iOS and Android apps. It is well documented, and its community always tries to help in case of any bugs. We can install it using Homebrew or RubyGems; I personally recommend the latter, as I stumbled upon some problems installing fastlane plugins after using `brew` to install fastlane.

To integrate the tool with our project, we should open its directory in terminal and type:

 fastlane init

It will start a very intuitive wizard that guides us through the process of setting up fastlane for our project. After this process is finished, fastlane is configured, and we can customize it and create new lanes.
Fastlane lanes are written in Ruby. Even if you are unfamiliar with this programming language, learning the basics needed to configure it shouldn't take too long. There is also a version of fastlane that supports Swift; however, it is still in beta.

Fastlane tips

The tool provides a few ways of handling code signing, e.g. "cert" and "sigh", or "match". However, if you plan to use fastlane mainly on a CI/CD server, you can rely on automatic code signing in Xcode and install the certificates manually on that machine. In that case, you don't need to use any of those actions in your lanes.

If you prefer to move your Fastfile (the fastlane configuration file) to a directory other than the project's directory, you will have to provide the paths to the actions manually. In that case, it is better to use absolute paths, since when you just use the ../ prefix you may end up in a different directory than you would expect.

The error block `error do |lane, exception|` is very useful for error notifications. For example, you can post a message to Slack when a lane executed by the CI server fails.

written by
Grape Up Expert
Our experts

DevOps Enterprise Summit 2018: Driving the future of DevOps

What happens when you gather 1500 attendees, more than 50 speakers and a few hundred media representatives under one roof? The answer is the DevOps Enterprise Summit – one of the largest global events devoted to DevOps, enabling top technology and business leaders from around the world to learn and network for 3 days in a row.

DevOps Enterprise Summit describes itself as an "unfolding documentary of ongoing transformations which leaders are helping drive in their large, complex organizations". At the Cosmopolitan Hotel in Las Vegas, the event venue, we could literally feel the excitement of all the attendees throughout the whole time – all thanks to their openness to learning, connecting and sharing new experiences with industry peers. Throughout the discussions, we also got inspired by key thought leaders and industry experts and found answers to constantly debated questions.

As Grape Up, we had over 180 visitors in just 3 days at our booth. All of them were genuinely interested in our services, both those who knew our company from previous events and those who hadn’t heard of us yet (well, now they have). On top of that, we met one of the biggest domestic airlines in the US, which had delegated over 70 of its employees to DOES to learn. The DevOps Enterprise Summit was definitely the event of the year for this company and served the purpose of helping further accelerate their digital transformation – from our perspective, this is a company that is going all-in on DevOps. There were also several other companies that had come to Las Vegas all the way from New Zealand – another great example of the global nature of digital transformation.

Our new product, Cloudboostr, has really caught the eye of many visitors. The ability to run any workload anywhere, using enterprise-grade open source, supported by an experienced company like Grape Up who has been around for over a decade, has proved to be a winning combination. Apart from booth visitors, we have also collected many inquiries from large companies who are already interested in the product. The future for Cloudboostr as Grape Up’s first product looks bright.

Overall, the DevOps Enterprise Summit has been a successful event for Grape Up. Besides having met many DevOps enthusiasts, we had the opportunity to get to know plenty of sponsors who are also delivering amazing value through their products and services. The event definitely feels like a one-stop shop for all DevOps needs.

written by
Grape Up Expert
Software development

Yet another look at cloud-native apps architecture

Cloud is getting extremely popular and ubiquitous. It’s everywhere. Imagine, the famous “everywhere” meme with Buzz and Woody from Toy Story in which Buzz shows Woody an abundance of...whatever the caption says. It’s the same with the cloud.

The original concepts of what exactly the cloud is and how it should or could be used and adopted have changed over time. Monolithic applications were found to be difficult to maintain and scale in modern environments. Figuring out the right solutions turned into an urgent problem and blocked the development of the new approach.

Once a cloud platform like Pivotal Cloud Foundry or Kubernetes is deployed and properly maintained, developers, managers and leaders step in. They start looking for best practices and guidelines for how cloud applications should actually be developed and how these apps should operate. Is it too little too late? No, not really.

If cloud platforms were living creatures, we could compare them to physical infrastructure. Why? Because neither really cares whether developers deploy a huge monolith or well-balanced microservices. You can do anything you want, but before you know it, it will become clearly visible that the original design does not scale well, fails often, and is virtually impossible to configure and deploy on different environments without making platform-specific changes or writing extremely complex and volatile configuration scripts.

Does a universal architecture exist?

The problem of not-really-defined cloud architecture led to the idea of an "application architecture" that performs well in the cloud. It should not be limited to a specific platform or infrastructure, but be a set of universal rules that help developers bootstrap their applications. The answer to that problem was formulated at Heroku, one of the first commonly used PaaS platforms.

It’s the concept of twelve-factor applications, which is still applicable nowadays but also endlessly extended as the landscape of cloud platforms and architectures mutates. That concept was extended to 15 factors in a book by Kevin Hoffman titled "Beyond the Twelve-Factor App". Although the list is not the one and only solution to the problem, it has been successfully applied by a great many companies. The order of the factors is not important, but in this article I have tried to preserve the order from the original webpage to make navigation easier. Shall we?

I. Codebase

There is one codebase per application, tracked in revision control (Git, SVN – it doesn’t matter which one). However, sharing code between applications does not mean that the code should be duplicated – it can be yet another codebase which is provided to the application as a component or, even better, as a versioned dependency. The same codebase may be deployed to multiple different environments and has to produce the same release. The idea of one application is tightly coupled with the single responsibility pattern. A single repository should not contain multiple applications or multiple entry points – it should be a single-responsibility, single-entry-point codebase that makes up a single microservice.

II. Dependencies

Dependency management and isolation are two problems that should be solved. Firstly, it is very important to explicitly declare dependencies, to avoid a situation in which an API change in a dependency renders the library useless from the point of view of the existing code and fails the build or release process.
The next thing is repeatable deployments. In an ideal world, all dependencies should be isolated and bundled with the release artifact of the application. This is not always entirely achievable, but it should be attempted wherever possible. On some platforms, such as Pivotal Cloud Foundry, this is managed by a buildpack, which clearly isolates the application from, for example, the server that runs it.

III. Config

This guideline is about storing configuration in the environment. To be explicit, it also applies to credentials, which should be securely stored in the environment using, for example, solutions like CredHub or Vault. Please mind that at no time can the credentials or configuration be part of the source code! The configuration can be passed to the container via environment variables or via files mounted as a volume. It is recommended to think about your application as if it were open source. If you would feel confident pushing all the code to a publicly accessible repository, you have probably already separated the configuration and credentials from the code. An even better way to provide the configuration is to use a configuration server such as Consul or Spring Cloud Config.
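As a minimal Python sketch of this factor (the variable names like DATABASE_URL are illustrative conventions, and in a real deployment the platform, not the code, would set them):

```python
# Factor III sketch: configuration and credentials come from the environment,
# never from the codebase. Variable names here are illustrative.
import os

# In practice the platform injects this; we set it here only for the demo.
os.environ["DATABASE_URL"] = "postgres://db.internal:5432/app"

def load_config():
    return {
        "database_url": os.environ["DATABASE_URL"],        # required setting
        "log_level": os.environ.get("LOG_LEVEL", "INFO"),  # optional, with default
    }

config = load_config()
```

Because the code reads everything from the environment, the same artifact can run against a dev database or a production one without a single code change.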

IV. Backing services

Treat backing services as attached resources. Bound services are, for example, databases and storage, but also configuration, credentials, caches or queues. When a specific service or resource is bound, it can be easily attached, detached or replaced if required. This adds flexibility to using external services and, as a result, may allow you to easily switch to a different service provider.

V. Build, release, run

All parts of the deployment process should be strictly separated. First, the artifact is created in the build process. The build artifact should be immutable. This means the artifact can be deployed to any environment, as the configuration is separated and applied in the release process. The release artifact is unique per environment, as opposed to the build artifact, which is the same for all environments. This means that there is one build artifact for the dev, test and prod environments, but three release artifacts (one for each environment) with the specific configuration included. Then the release artifact is run in the cloud. Kevin adds yet another discrete part of the process to this list – design, which happens before the build process and includes (but is not limited to) selecting dependencies for the component or the user story.
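The build/release split above can be sketched as one immutable build artifact combined with per-environment configuration (all names and values here are illustrative):

```python
# Factor V sketch: one immutable build artifact, many releases.
# A "release" is the build artifact merged with environment-specific config.
build_artifact = {"app": "shop", "version": "1.4.2"}  # identical everywhere

env_configs = {
    "dev":  {"db": "dev-db",  "debug": True},
    "test": {"db": "test-db", "debug": True},
    "prod": {"db": "prod-db", "debug": False},
}

# One build, three releases - only the configuration differs:
releases = {env: {**build_artifact, **cfg} for env, cfg in env_configs.items()}

assert all(r["version"] == "1.4.2" for r in releases.values())
```

The dictionary merge makes the invariant visible: if two releases differ in anything other than their configuration keys, the build artifact was not truly immutable.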

VI. Stateless processes

Execute the application as one or more stateless processes. This is one of the self-explanatory factors, but somehow also the one that creates a lot of confusion among developers. How can my service be stateless if I need to preserve user data, identities or sessions? In fact, all of this stateful data should be saved to backing services like databases or file systems (for example Amazon S3, Azure Blob Storage, or storage managed by services like Ceph). The filesystem the container provides to the service is ephemeral and should be treated as volatile. One of the easy ways to keep microservices honest is to always deploy two load-balanced copies. This way you can easily spot an inconsistency in responses if a response depends on locally cached or stateful data.
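A toy sketch of the stateless-process idea: two load-balanced replicas stay consistent only because state lives in a shared backing service. Here a plain dict stands in for a real database or cache, and all names are illustrative:

```python
# Factor VI sketch: replicas hold no local state; everything stateful
# goes to a backing service (modeled here as a shared dict).
backing_store = {}

class StatelessReplica:
    def __init__(self, store):
        self.store = store  # the shared backing service

    def save_session(self, user, data):
        self.store[user] = data

    def get_session(self, user):
        return self.store.get(user)

replica_a = StatelessReplica(backing_store)
replica_b = StatelessReplica(backing_store)

replica_a.save_session("alice", {"cart": ["book"]})
# Any replica can serve the next request, because no state is local:
print(replica_b.get_session("alice"))  # {'cart': ['book']}
```

Had the session been kept in a local attribute of `replica_a`, the load balancer's next request hitting `replica_b` would have exposed the inconsistency described above.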

VII. Port binding

Expose your services via port binding and avoid hard-coding ports in the container. Port selection should be left to the container runtime and assigned at runtime. This is not strictly required on platforms such as Kubernetes. Nevertheless, ports should not be micromanaged by developers but automatically assigned by the underlying platform, which largely reduces the risk of port conflicts. Ports should not be managed by other services, but automatically bound by the platform for all services that communicate with each other.
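A minimal sketch, assuming the common convention of a PORT environment variable set by the platform (the variable name and default are illustrative):

```python
# Factor VII sketch: the service reads its port from the environment
# instead of hard-coding it; the platform decides the actual value.
import os

def resolve_port(default=8080):
    """Use the platform-assigned PORT if present, else a local default."""
    return int(os.environ.get("PORT", default))

os.environ["PORT"] = "51234"  # in practice, set by the platform at runtime
port = resolve_port()
```

With this pattern the same image runs locally on the default port and on any platform-assigned port in the cloud, with no code change and no port conflicts.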

VIII. Concurrency

Services should scale out via the process model, as opposed to vertical scaling. When the application load reaches its limits, the application should scale horizontally, manually or automatically. This means creating more replicas of the same stateless service.

IX. Disposability

Applications should start and stop rapidly. A slow-starting application may fail to respond to health checks while it starts (which may even result in an infinite loop of restarts), and a slow-stopping one may cancel in-flight requests when the deployment is scaled down.

X. Development, test and production environments parity

Keeping all environments the same, or at least very similar, may be a complex task. The difficulties vary from VM and license costs to the complexity of deployment. The latter problem may be avoided with a properly configured and managed underlying platform. The advantage of this approach is avoiding the "works for me" problem, which gets really serious when it happens in production – for example, when deployment is automated and the tests pass in every environment except production because a totally different database engine is used there.

XI. Logs

Logs should be treated as event streams and be entirely independent of the application. The only responsibility of the application is to write logs to the stdout and stderr streams. Everything else should be handled by the platform, which means passing the logs to a centralized or decentralized store like ELK, Kafka or Splunk.
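A minimal Python sketch of this factor: the application configures its logger to write to stdout and leaves the shipping to ELK, Kafka or Splunk to the platform (logger name and format are illustrative):

```python
# Factor XI sketch: the app only emits log events to stdout;
# routing and storage are the platform's responsibility.
import logging
import sys

handler = logging.StreamHandler(sys.stdout)  # stdout, never a local log file
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("order accepted")  # the platform captures this stream
```

Note that the application neither rotates files nor knows where the events end up; swapping Splunk for ELK requires zero code changes.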

XII. Admin processes

Administrative and management processes should be run as one-off processes. This is actually self-explanatory and can be achieved by creating Concourse pipelines for those processes or by writing Azure Functions/AWS Lambdas for that purpose. This concludes the factors provided by the Heroku team. The additional factors added to the list in "Beyond the Twelve-Factor App" are:

XIII. Telemetry

Applications can be deployed as multiple instances, which means it is no longer viable to attach a debugger to the application to find out whether it works or what is wrong. Application performance should be monitored automatically, and it has to be possible to check application health using automatic health checks. Also, for specific business domains, telemetry is useful and should be included to monitor the current and past state of the application and all its resources.

XIV. Authentication and authorization

Authentication and authorization – the general aspect of security – should be an important part of application design and development, but also of the configuration and management of the platform. RBAC or ABAC should be used on each endpoint of the application to make sure the user is authorized to make that specific request to that specific endpoint.
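As a hedged sketch of a per-endpoint RBAC check (the decorator, role names and endpoint are illustrative, not taken from any specific framework):

```python
# Factor XIV sketch: every endpoint verifies the caller's role before
# doing any work. Roles and endpoint names are illustrative.
from functools import wraps

def require_role(role):
    """Decorator that rejects callers lacking the given role."""
    def decorator(endpoint):
        @wraps(endpoint)
        def guarded(user, *args, **kwargs):
            if role not in user.get("roles", ()):
                raise PermissionError(f"{role} role required")
            return endpoint(user, *args, **kwargs)
        return guarded
    return decorator

@require_role("admin")
def delete_account(user, account_id):
    return f"deleted {account_id}"

admin = {"name": "alice", "roles": ["admin"]}
print(delete_account(admin, "acc-1"))  # deleted acc-1
```

In a real service the check would validate a token issued by an identity provider rather than trust a dict, but the shape is the same: authorization happens at the endpoint, on every request.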

XV. API First

The API should be designed and discussed before the implementation. This enables rapid prototyping, allows the use of mock servers, and moves the team's focus to the way services integrate. Just as a product is consumed by clients, services are consumed by other services through public APIs – so collaboration between the provider and the consumer is necessary to create a great and useful product. Even excellent code can be useless when hidden behind a poorly written and badly documented interface. For more details about the tools and the concept, visit the API Blueprint website.

Is that all?

This is a very extensive list, and in my opinion, from a technical perspective, it is just enough to navigate the cloud-native world. On the other hand, there is one more point I would like to add, which is not exactly technical but still extremely important when thinking about creating a successful product.

XVI. Agile company and project processes

The way to succeed in the rapidly changing and evolving cloud world is not only to create great code and beautiful, stateless services. The key is to adapt to changes and market needs, to create a product that is appreciated by the business. The best fit for that is to adopt an agile or lean process, extreme programming, and pair programming. This allows rapid growth in short development cycles, which also means a quick market response. When team members treat each commit as a candidate for a production release and work in pairs, the quality of the product improves. The trick is to apply these processes as widely as possible because very often, as Conway's law states, the organization of your system is only as good as the organizational structure of your company.

Summary of cloud-native apps

That’s the end of our journey through the perfect cloud-native apps. Of course, this is not the best solution to every problem. In the real world, we don’t really have handy silver bullets. We must work with what we’ve got, but there is always room for improvement. Next time you design your, hopefully, cloud-native apps, bear these guidelines in mind.

written by
Adam Kozłowski
Our experts

Key takeaways from Cloud Foundry Summit Europe 2018 in Basel

This year's Cloud Foundry Summit Europe was held in Basel, Switzerland. As a company that strongly embraces the cloud, we simply could not miss it! Throughout the event, all visitors to our Silver Sponsor booth no. 15 could not only discuss all things cloud-native but also drink a cup of our delicious, freshly brewed vanilla coffee. First, let's take a look at some numbers from the event. There were:

  •  144 keynotes, break notes and lightning talks
  •  65 member companies
  •  8 certified platforms
  •  961 registered attendees

Why Cloud Foundry Summit?

However, coffee wasn’t the only reason we traveled over 1000 kilometers. What mattered to us was the impressive conference line-up. We were hungry for the upcoming tech news, to see what’s trending and, of course, to learn new things. By the very looks of the schedule we could tell that the European edition in Basel was slightly different from the Boston edition earlier this year.

Everyone’s talking about Kubernetes with Cloud Foundry

Let’s focus first on serverless. This topic has either been exhausted or was only a temporary fling in international circles. Why? Because this year’s edition devoted only two speeches in the Containers and Serverless track to FaaS, whereas 6 months ago in Boston the number of speeches about it was twice as big. One more reason for that might be the growing popularity of Kubernetes. There were 6 tracks and, apart from Containers and Serverless, each had at least one speech about K8s/containers.

Currently, the interest in Kubernetes is so palpable and obvious that even Cloud Foundry themselves confirmed it by announcing a CF Containerization project:

  •  https://www.cloudfoundry.org/blog/cloud-foundry-focus-on-interoperability-continues-with-two-new-projects-integrating-kubernetes/

But there’s more…

Finally, it's worth mentioning the second project approved by Cloud Foundry – Eirini – which brings CF closer to K8s and enables using Kubernetes as a container scheduler in the Cloud Foundry Application Runtime. It means nothing less than the fact that we are getting closer to KubeFoundry or CloudFubernetes – platforms built from platforms.

Our announcement

During the Summit, one of the most important moments for us was the announcement of our new product, Cloudboostr, an Open Cloud Platform: a complete cloud-native software stack that integrates open-source technologies like Cloud Foundry and Kubernetes. Thanks to Cloudboostr, companies can build, test, deploy and operate any workload on public, private or hybrid cloud.


Co-author: Julia Haleniuk

written by
Michał Różycki
Our experts

A day in the life of a Business Development Manager

Have you ever wondered what it's like to be a Business Development Manager? Marcin Wiśniewski of Grape Up shares his experience and tips for successful business development management.

1. Hi Marcin, what is your background?

It’s absolutely non-technical. I have a degree in Management and Marketing, which is quite unusual for someone who works in IT. But I just can’t help the fact that I like all these outlandish tech gimmicks and news. And I like learning new things, which is the crucial part of my job.

2. Why did you decide on a career in business development?

I don’t recall making any big decision of a lifetime to work in sales. What I do know is that I have always wanted to have a career which would allow me to meet new people, discover new views of the world, new beliefs and opinions. I like to talk, travel, I like to be around. But what gives me a rush of adrenaline is the intensity which comes with the sales process. So maybe not intentionally, but I have found myself in the right place.

3. What does your typical day at the office look like? Is there a typical day at all?

I don’t really have a typical day at work. Of course, there are tasks or processes which are repetitive and we do them according to a pattern, but it’s always about a different company, different people, new conferences, new challenges. Therefore, each and every case requires an individual approach.

4. What do you love most about the job and what is its most challenging aspect?

I like that we work in a niche part of the IT market, mainly abroad. We cooperate with Fortune 500 companies, which is the most exciting and the most challenging part of the job at the same time. Selling to Fortune 500 companies, which are basically dream clients, isn’t easy-peasy, so once you do it, that’s something to be proud of!

5. What competencies do you think make a good Business Development Manager?

I’m not sure how my colleagues from the industry would answer this question, but I like people, I like talking to them and, of course, learning new things. And my job incorporates all of it, which is exactly why I love what I do.

written by
Marcin Wiśniewski
Our experts

Key takeaways from SpringOne Platform by Pivotal 2018

Business is happy with delivery acceleration and developers are happy with focusing only on writing code — key takeaways from SpringOne Platform 2018 in Washington, D.C. SpringOne Platform was an amazing, large-scale event. As an IT consulting company, we were more than excited to be a Silver Sponsor of a conference that gathers hundreds, if not thousands, of cloud experts in one place. Now that we’re back and the whole event has wound down, we thought it would be a great idea to take a minute, look back and share what we learned at this year’s edition of SpringOne Platform. The best way to sum up the whirl of presentations, keynotes and tech talks is to break everything into the following:

Operating at scale

There is a lot of fuss going on around Kubernetes nowadays. The excitement is, of course, completely reasonable, and we could witness it in its pure form during workshops and at all the booths in the community hub. Several presentations also revolved around Kubernetes. For example, there was an interesting talk about the K8s basics, “All You Need to Know to Deploy Applications on Kubernetes”, but the aspect of operating at scale, the key feature of the platform, was also in the center of attention. On top of that, Pivotal announced in “Buildpacks Anywhere” that they are working on open buildpacks to run on both CFCR and PCF, which might be yet another game-changer on the market.

Apache Geode community meeting

However, if there was one thing that attracted the most attention at the conference, it would be Apache Geode, a data management platform that originated as the commercial GemFire. There were over 10 sessions on this topic, from basics such as “Geode: Much More Than a Cache”, through an open panel, “Questions, Answers, and Oddly Specific Opinions with Apache Geode Committers”, to quite complex topics like “Implementing PII Encryption with PDX Serialization”. Not to mention all the success stories of large companies and banks, which only proves the scale of the solution and how it’s getting more and more mature.

Kessel run

Part of SpringOne is always about business success stories from companies that use Cloud Foundry or Kubernetes in cooperation with Pivotal. One of the most amazing ones, which definitely deserves a little more "airtime" than the rest, is the "Kessel Run" project of the U.S. Air Force (no, it’s not a typo, the name does indeed come directly from Star Wars!), a cloud-native startup inside the DoD.
During his presentation, Captain Bryon Kroger (@bjkroger) explained in depth how they dealt not only with technical difficulties but also with all the paperwork that was slowing them down at the time. And he proudly sports our Oh My BOSH t-shirt, which gives him additional points!

Reactive and event-driven methodologies

A great focus was also put on modern solutions and technologies, especially event-driven architecture in the cloud. Apache Kafka can definitely help with that, and it makes architecture simpler and cleaner; all this was explained in a presentation from Neha Narkhede, the co-creator of Kafka. On the main stage, Oliver Gierke and Rossen Stoyanchev announced R2DBC (Reactive Relational Database Connectivity), a client which is still in an early stage of development but has already gained a great deal of momentum and may revolutionize how developers connect to databases from Spring. The topic was further explored in “Reactive Relational Database Connectivity” and several other sessions.

written by
Adam Kozłowski
Software development

7 tips on how to manage backlog in XP

Proper backlog maintenance is extremely important for a product to be successful. Usually, it's the Product Owner’s responsibility to manage the backlog, and the right approach and tools can be very helpful.

Imagine a situation in which the product has already been released to customers. You had a very specific plan, with every step written out, of what should be done next. Then, every week brings in a few new ideas on how to improve the product. At one point, you realize that you have several hundred new feature ideas which haven’t even been analyzed yet. Been there, done that, right? Right!

There are many factors and models out there describing how a backlog should be managed. As a matter of fact, there is no single right answer. Your approach must be unique and aligned to your team, company, strategy and resources. Depending on the project size and the ability to work with business or support departments, there are many ways in which the workflow can be divided. The only sure thing is: it's always the Product Owner’s responsibility to keep the backlog in good shape and make sure that only the best feature ideas are stored in it. Those feature ideas should be considered as new value added to the product.

With all that said, let's discuss in more detail the good practices for managing the backlog in an XP environment.

One backlog, no excuses

First and foremost, align your backlog management with the KISS principle (Keep it simple, stupid) to work efficiently. It is crucial that backlog management is done continuously and without any iceboxes or intermediate placeholders. Keep it plain: have a stack of ideas, accept or reject them, add them to the backlog and develop. It's important to keep everything in order so that it is transparent to the team and especially to the stakeholders. Also, remember that everyone should have access to the backlog and be able to see the big picture. In other words, the backlog is not the Product Owner’s personal notes.

Another aspect of this is feature idea management. Ideas will come from different sources: customers, bug reports, analysts’ vision, stakeholder requests, etc. Depending on the project size, the PO won’t be able to manage everything alone and track every idea inside the backlog. Therefore, if the project starts to grow, you should consider having a separate bucket for ideas which haven’t been verified yet, managed by support, an assistant, or the team. Those items would then need to be verified and transformed into legitimate improvement requests, preferably with the use of a template.

It is worth mentioning that bugs are not always a part of the backlog. Given the project size, sometimes you'll need a separate tool for the support team and you don't want all customer issues to interfere with the business vision of your product. It’s great if you can manage both in one place, but it's not mandatory and at some point, you'll need to divide those two buckets. The backlog should focus on the business value — user stories — so basically anything new that will be added or enhanced.

Manage the backlog given the business perspective

Many Product Owners are deeply invested in team management. The common mistake, however, is structuring the backlog based on factors other than business value, such as the availability of specific resources.

It is important, though, to be able to sort and filter items in the backlog so that you can achieve what was mentioned earlier. If someone on your team has limited knowledge and can only work in particular areas, they should be able to find their way without help from the PO.

Clean up the backlog regularly

If there is a feature idea that has been sitting in the backlog for a few months while you do frequent releases, remove it. Evidently, there’s always something more important to do, and that particular feature is just not that essential. Priorities always tend to shift, and the product changes over time. You should focus only on the current and the next release. If you know for sure that a feature will not be done in this timeline, better throw it away.

You need to make hard choices every day and reject feature ideas on the fly. If you need to consult someone on a decision, do it on a weekly basis; don't postpone the decision. Think only about the current state of the project. A good way of prioritizing things is the MoSCoW method (identify items as Must, Should, Could or Won't have). Using a number as a priority (i.e. P1-P4) usually doesn't say anything about the actual willingness to do the work; it rather indicates when you want to do something.

Verify feature ideas on the market

So, you've come to the point where you have more than enough to plan a release. What's next? Usually, it would be the implementation phase, but I'd strongly suggest first verifying every idea on the market. There are several ways of doing that. While conducting interviews with customers or people who work in the same industry would take too much time and resources, an online voting system would be perfect. On top of that, in-depth market research done by a Discovery Product Manager could also be a good idea. Keep the balance, though: if stakeholders have a strong opinion about a feature which is estimated at a low cost, take a leap of faith if you feel like it.

This also sends an important message to the market: hearing out users and customers, and asking them for verification and priorities, will make them feel that they have had a real impact, since the product is aligned with their needs.

Adjust backlog to release cycle

As I mentioned previously, the backlog should reflect the current and the next release. Let’s say that you release every 3 months. Once the current release is already planned, add new items only for the next one. You should have an idea of what the velocity of your team is, so if something more valuable comes into the backlog, you need to remove something from it.

Be responsible for the product

As a Product Owner, you can always ask the team for help, but in the end, you're the driver. It's your mission. The team can help with analysis, completing the requirements, designs or estimates, but you are fully responsible for the outcome and for maximizing the value of the product.

They say that the Product Owner should be an independent decision maker, but you know what it’s like. Your boss will sometimes have a different opinion, but if you feel strongly about your assessment, you shouldn't let go.

Adjust and look for a silver lining

Don't adopt practices as they are just because someone said so. You need to take the process, adjust it to your situation and make it work. As I stated in the introduction, every project and company has its own unique structure. There is no point in starting a revolution and changing everything.

It's all about enhancing the product and making your life easier. Take one day at a time and do your best to improve the process with one thing in mind: the product always comes first. Processes shouldn't interfere with your daily work.

Summary

I hope this article helps you manage your backlog better. My main advice is to keep it simple and transparent. It’s all about keeping the whole team happy and engaged. Every vision, even the most ideal one, needs a compromise to make it work in real life.

written by
Grape up Expert
Our experts

3 questions to Tomasz Ćwięk, DevOps Engineer at Grape Up

DevOps engineers have a diverse and unique skill set and the profile of the DevOps engineer is evolving. I asked Tomasz Ćwięk who is a DevOps Engineer at Grape Up to explain in more detail what it's like to be in that role.

1. What is the real impact that you have on the product, the processes and the team as a DevOps Engineer?

Since a DevOps Engineer operates in a wider IT area than a programmer or an operator, it is easier for them to see the “bigger picture”. Thanks to this, they can predict potential problems faster and determine in advance whether an idea is going in the right direction.

If a given solution requires a test or a proof of concept (PoC), a DevOps Engineer is able to run it quickly and effectively without involving more people. Then, using knowledge and appropriate tools, such as a CI/CD pipeline, monitoring, and metrics, they can immediately draw conclusions and save many hours or even days of work for the rest of the team.

2. Who can become a DevOps Engineer? Is there an ideal candidate profile?

Well, I used to work as a developer. Then one day, I felt that my work had slowly become “not-so-exciting” (not to call it boring) and monotonous, and I felt the urge to change something. Now, the variety of tasks and problems that I encounter and have to deal with on a daily basis makes my day so much more exciting. Personally, I think that becoming a DevOps Engineer is perfect for people who are good problem solvers. Additional experience as a system administrator or a programmer is nice to have.

3. What does a day in the life of a DevOps Engineer look like?

Currently, I work on several projects which differ from one another. Some of them are support projects, which is why it is very difficult to predict what the next day will look like. For example, one day we design the network infrastructure for new tools; the next day we debug a problem with updating the platform or installing a new tool. We wear many hats. Quite often we work as consultants, advising the client on how to best solve a problem, design a new solution or even reasonably plan tasks for the team.

When it comes to my daily tasks — there are plenty of them and all of them are different. The key to mastering this chaos is establishing a good relationship with the client, properly planned stand-up meetings, and effective use of tools.

written by
Tomasz Ćwięk
Our experts

How to explain your tech job to your family?

With new positions popping up all over the IT job market, it might not be a piece of cake to explain what your tech job is about to your parents, people who aren’t necessarily die-hard fans of new technologies. Unless, of course, your mom or dad are avid tech geeks, chances are they have just, much to their excitement, discovered Facebook. Occasionally, and unfortunately, they also happily take to social media to comment on their children’s pictures.

So how do you explain what you do?

“No one knows what I really do” or “It’s actually pretty complicated” are probably the worst explanations you can come up with. First of all, by saying that, you come across as nonchalant. Second, you’re wasting a great chance to practice explaining your job to a person who doesn’t have the slightest clue about it.

Try to understand them

If they usually don’t understand what you do, maybe it’s because you expect them to understand too much. Luckily, there is a way to explain almost anything to anyone, but before you do so, make sure to know your listener.

Some people may ask about your job simply to be polite, for example an uncle who’s just trying to break the awkward silence in between meals. However, someone who also works in IT may be genuinely interested in your position. Treat it as an elevator pitch: once you know your listener, you will be able to adapt the pitch to their level of interest. And before you know it, you will have mastered a one-minute description of your skills, whether it’s an extended description of your day at work with all your responsibilities, or a simple, one-sentence explanation of what you do in general.

For instance:  I develop software for international companies.

This sentence doesn’t really say it all, but for some it might be just good enough: good enough for those who want to make sure that you’re clearly doing SOMETHING, making ends meet, and footing your bills at the end of the month.

Try not to sound condescending

If you don’t really know how much someone else is into technology and how well-informed they are, it can be easy to cross the line between smart and awfully patronizing. The whole point here is not to make the other person feel any less intelligent.

It’s best to take into consideration your relative’s education and age. A good idea is to spark their interest by asking whether they use a certain app or website that relates to your professional field. You can also mention a story that recently went viral. Chances are they have heard of it. Either way, it’s a good starting point.

If by any chance you need to fully explain your field of expertise, try to do it as simply as possible.

Just like that:

    You:    Think of the news website that you visit every day…
    Your relative:    What about it?
    You:    I make sure that every piece of content on it is always up to date.

But don’t narrow it down to only one thing such as:

    You:    I spend ALL DAY coding like a madman.
    Your relative:    All day? Every day?
    You:    Correct.

Reality check – developers don’t spend eight hours a day producing lines of code. They fix bugs, go to meetings, and learn new things too.

Keep it simple and tell stories

The KISS method, short for Keep It Simple, Stupid, is all about getting your message across as if you were talking to a five-year-old. Believe it or not, some kindergartners are more tech-savvy than adults. I mean, who hasn’t seen a child use an iPad? Let’s say you’re a marketing specialist. Instead of reducing it all to “I do marketing. Global marketing.”, like the one-sentence example above, go for something that’s fully-fledged:

“I am the person responsible for marketing, branding and all activities around it. I write social media posts – on Facebook and Twitter – and organize events which my company attends in order to advertise or hire new employees. I also come up with new ideas on how to advertise our services and offer, and make sure that everything is in line with our business goals.”

This is no rocket science!

The key to success is to really just keep it simple. First off, your relatives aren’t your co-workers at your daily stand-ups. Your mom, dad, aunt, grandma… they might still use a flip phone, be strictly against upgrading it, and avoid anything that has to do with a smartphone like the plague.

I hope these simple strategies work for you and make conversations with your family a tiny bit less awkward. After all, nobody is an expert on everything, and even the most highly analytical minds need something explained if it’s not their cup of tea.

written by
Grape up Expert
Software development

Our way to reactive extensions

In the modern world we are inundated with information - TV, radio, newspapers, the internet - to the extent that we often need some kind of filter to get only those pieces of information which we find interesting. That’s why the most recent news app from Google uses artificial intelligence to customize the newsfeed and make sure users receive only the kind of news they want. The same applies to the APIs that we use in both desktop and web development: sometimes the amount of notifications and data sent by even a simple interface may be overwhelming and needs to be filtered before it is presented to the end user.

Architectural background

One of the products that we develop is a desktop app based on a complex library that creates an abstraction over communication with servers and different REST APIs. The snippet below is part of a class that wraps a piece of one API, which consists of “Description” and “Size”:

public class Class1
{
    public void SetDescription(string description, CompletionHandler handler)
    {
        /* Some logic here */
    }

    public void SetSize(int size, CompletionHandler handler)
    {
        /* Some logic here */
    }

    public string Description { get; private set; }

    public int Size { get; private set; }

    public event EventHandler DescriptionChanged;

    public event EventHandler SizeChanged;
}

  •  The methods “SetDescription” and “SetSize” accept a value argument and a CompletionHandler object which is called when the operation succeeds or fails. The completion handler provides information about the operation status (e.g. details about any error that occurred) and an optional result. To make our further work easier, we have added wrappers around those methods that translate completion handlers to the .NET Task class [1 - https://msdn.microsoft.com/en-us/library/system.threading.tasks.task(v=vs.110).aspx] (similar to promises in JavaScript).
  •  The properties contain information about the last known state of the object.
  •  The events notify the application about state changes, whether they are caused by calling a method locally or by the activity of other applications or services.
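Such a wrapper can be sketched with TaskCompletionSource (a minimal illustration only; the CompletionHandler shape shown here, a success/error callback pair, is an assumption and may differ from the real library's API):

```csharp
using System;
using System.Threading.Tasks;

// Hypothetical shape of the library's completion handler:
// invoked exactly once, with either a success or an error.
public class CompletionHandler
{
    public Action OnSuccess { get; set; }
    public Action<Exception> OnError { get; set; }
}

public static class Class1Extensions
{
    // Translates the callback-based SetSize into an awaitable Task,
    // using TaskCompletionSource to bridge the two models.
    public static Task SetSizeAsync(this Class1 instance, int size)
    {
        var tcs = new TaskCompletionSource<object>();
        instance.SetSize(size, new CompletionHandler
        {
            OnSuccess = () => tcs.SetResult(null),
            OnError = error => tcs.SetException(error)
        });
        return tcs.Task;
    }
}
```

With such a wrapper in place, calling code can simply `await someObject.SetSizeAsync(42);` instead of juggling handlers.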

The library processes all requests and responses on its own thread pool, which has both advantages and disadvantages. The main advantage is that the UI is not affected by processing network traffic. On the other hand, events and completion handlers are called on a non-UI thread, so the application has to use some kind of queue to ensure the handling methods are called on the UI thread, and the library has to properly route them to the provided thread or queue.

Manual thread management – you don’t want to do this

During development, we figured out that posting each event to the UI thread manually produces a high volume of long, repetitive code which looks like this:

public class SomeOtherClass : IDisposable
{
    private readonly SomeClass _someObject;
    private readonly DispatcherService _dispatcherService;

    public SomeOtherClass(
        SomeClass someObject,
        DispatcherService dispatcherService)
    {
        _someObject = someObject;
        _dispatcherService = dispatcherService;

        _someObject.DescriptionChanged += SomeObjectOnDescriptionChanged;
        _someObject.SizeChanged += SomeObjectOnSizeChanged;
    }

    private void SomeObjectOnDescriptionChanged(object sender, EventArgs e)
    {
        _dispatcherService.Invoke(LoadState);
    }

    private void SomeObjectOnSizeChanged(object sender, EventArgs e)
    {
        _dispatcherService.Invoke(LoadState);
    }

    private void LoadState()
    {
        PropertyVisibleOnUI = $"Description: {_someObject.Description} Size: {_someObject.Size}";
    }

    public string PropertyVisibleOnUI { get; set; }

    public void Dispose()
    {
        _someObject.DescriptionChanged -= SomeObjectOnDescriptionChanged;
        _someObject.SizeChanged -= SomeObjectOnSizeChanged;
    }
}

The code becomes even more complex if we have to consider multiple nested objects. Just imagine the code required to display a “Your message has been read” notification if you have the following API:

public interface IConversationService
{
    event EventHandler ConversationStarted;
    event EventHandler ConversationEnded;
    IEnumerable<IConversation> Conversations { get; }
}

public interface IConversation
{
    event EventHandler MessageAdded;
    void SendMessage(string body, CompletionHandler handler);
    IEnumerable<IMessage> Messages { get; }
}

public interface IMessage
{
    event EventHandler IsReadChanged;
    bool IsMine { get; }
    string Body { get; }
    bool IsRead { get; }
}

Event aggregator to the rescue

An event aggregator (event bus, mediator) seemed to be a pretty good solution to the described problem, especially because we had wrappers around the library’s objects. Our notification area subscribes to some message type and displays the notification when the event occurs.

_messageAggregator.Register(MessageHandlingMethod);
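For context, a complete interaction with such an aggregator might look like this (a sketch only; the generic Register overload and the Publish method shown here are assumed, not taken from our actual MessageAggregator API):

```csharp
// The pattern requires a dedicated class per message type.
public class MessageReadMessage
{
    public string ConversationId { get; set; }
}

public class NotificationArea
{
    public NotificationArea(MessageAggregator messageAggregator)
    {
        // The handler runs whenever anyone publishes this message type.
        messageAggregator.Register<MessageReadMessage>(OnMessageRead);
    }

    private void OnMessageRead(MessageReadMessage message)
    {
        // Display the "Your message has been read" notification here.
    }
}

// Elsewhere, on the sender side:
// messageAggregator.Publish(new MessageReadMessage { ConversationId = id });
```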

It makes cross-thread calls much easier. We have written a method that extends the event aggregator object and forces the notification handler to be invoked on the UI thread:

public static void RegisterOnDispatcher<TMessage>(
    this MessageAggregator self,
    DispatcherService dispatcherService,
    Action<TMessage> handler)
{
    self.Register<TMessage>(message => dispatcherService.Invoke(() => handler(message)));
}

It takes a few seconds to understand the method, but it is extremely easy to use later on:

messageAggregator
    .RegisterOnDispatcher(
        dispatcherService,
        MessageHandlingMethod);

Problems with the event aggregator

This pattern was convenient enough to dominate most of our code. At one point, we realized that we were using it even when it was the cause of the problem.

First of all, many classes required two dependencies: a class that keeps the state and an event aggregator to subscribe for messages.

Second, sending a message by means of the aggregator required creating a new class for the message. Developers like to take shortcuts, so they tend to skip this step if they can, even if it breaks the abstraction. In many cases, we had a service that subscribed to a couple of other services and computed some value. Because that service itself didn’t send any messages, other services that needed the computed value subscribed to the messages sent by the original services instead.
In our app, some settings can be changed on the fly, without having to restart the app. When at least one of them is changed, a SettingsChangedMessage is sent.

A lot of classes subscribed to this message and were notified about every setting change, even if they weren’t particularly interested in it. We then added an additional field to the message describing which setting had changed, but this also forced us to add an additional enum value, plus logic that checks whether the setting value has changed, every time we add a new field to the settings.

There was another performance problem that was non-trivial to fix in this architecture. Dispatching a method invocation to another thread is not very expensive if done rarely, but if there is a lot of thread-hopping, it starts to affect performance and causes temporary hangs of the UI. There were cases where multiple instances of some class depended on a couple of values from different services. Those instances had to subscribe to all messages that were sent when those values changed, and frequently a sent message didn’t affect the overall state of the subscriber at all, yet its handler was still dispatched. We weren’t able to fix this problem easily in our architecture.

Observer and observable

This design pattern is rarely seen in .NET, mostly because events are sufficient in the common scenarios. Microsoft noticed that there are cases like ours, not so common but also not that simple, and decided to add the IObserver<T> and IObservable<T> interfaces. These are slightly different from the ones you can find in books about design patterns. First of all, the interfaces are generic. The Observable contains only one method, Subscribe, which accepts an Observer as an argument. There is no Unsubscribe method; instead, Subscribe returns an object that implements IDisposable, and disposing of this object unregisters the observer. The Observer contains three methods, OnNext, OnError and OnCompleted, and is focused more on the values passed via OnNext than on the Observable.
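A minimal illustration of this contract, using the built-in interfaces (the ConsoleObserver name is ours, for illustration only):

```csharp
using System;

// A trivial observer that prints every notification it receives.
public class ConsoleObserver : IObserver<int>
{
    public void OnNext(int value) => Console.WriteLine($"Got {value}");
    public void OnError(Exception error) => Console.WriteLine($"Error: {error.Message}");
    public void OnCompleted() => Console.WriteLine("Done");
}

// Usage with any IObservable<int> source:
// IDisposable subscription = source.Subscribe(new ConsoleObserver());
// subscription.Dispose();   // disposing the returned object unregisters the observer
```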

RX – LINQ for observables

LINQ is a very powerful part of the .NET standard library, built on two very simple interfaces: IEnumerable<T> and IEnumerator<T>. It made manipulating collections so easy and became so popular that most .NET developers would probably rather quit their jobs than give it up. It is based on extension methods and the decorator pattern: an extension method creates an object that implements IEnumerable<T>, accepting as parameters the source IEnumerable<T> on which it is being called and a predicate. When GetEnumerator is called on this object, it calls the same method on the underlying IEnumerable<T> and yields the objects filtered, mapped or merged using the specified predicate.
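The decorator idea can be sketched in a few lines (a simplified re-implementation of Where for illustration; the real LINQ operator also validates its arguments and offers an indexed overload):

```csharp
using System;
using System.Collections.Generic;

public static class MyEnumerableExtensions
{
    // A decorator: wraps the source sequence and yields only matching items.
    // Execution is deferred - nothing runs until the result is enumerated.
    public static IEnumerable<T> MyWhere<T>(
        this IEnumerable<T> source,
        Func<T, bool> predicate)
    {
        foreach (var item in source)
        {
            if (predicate(item))
            {
                yield return item;
            }
        }
    }
}

// Usage: new[] { 1, 2, 3, 4 }.MyWhere(x => x % 2 == 0) yields 2 and 4.
```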

Both interfaces, IObservable<T> and IEnumerable<T>, manage and provide a way to access some data. Both are used to deliver object references and provide information about whether or not there are more items available, but the key difference between them is the way of delivering items: IEnumerable<T> waits for the client code to pull a value, while IObservable<T> pushes it.

It seemed reasonable to apply the LINQ approach to extend IObservable<T>, and this is how Reactive Extensions were invented. It turned out to be such a good idea that most modern programming platforms have implemented the same paradigm. Since great platform-independent documentation had already been written, we decided to try it with one of the new features we were delivering. Surprisingly, despite the fact that the integration wasn’t included in the estimation, we were able to deliver the feature on time.

Delivering property value when event is called

Because of the architecture mentioned before, we needed a good way of delivering the new property value every time the event was raised. Fortunately, RX contains just the right method:

var sizeObservable =
    Observable
        .FromEventPattern(
            handler => someObject.SizeChanged += handler,
            handler => someObject.SizeChanged -= handler)
        .Select(o => someObject.Size);

Because in most cases our code requires loading the current property value initially, we have created our own extension method (FromEventAndProperty) which does just that.
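A minimal version of such a helper could combine the event stream with the current value via StartWith (a sketch of the idea, not our actual implementation, whose signature may differ):

```csharp
using System;
using System.Reactive.Linq;

public static class ObservableExtensions
{
    // Emits the current property value immediately,
    // then a fresh value on every subsequent event.
    public static IObservable<TValue> FromEventAndProperty<TValue>(
        Action<EventHandler> addHandler,
        Action<EventHandler> removeHandler,
        Func<TValue> getValue)
    {
        return Observable
            .FromEventPattern(addHandler, removeHandler)
            .Select(_ => getValue())
            .StartWith(getValue());
    }
}

// Usage:
// var sizeObservable = ObservableExtensions.FromEventAndProperty(
//     h => someObject.SizeChanged += h,
//     h => someObject.SizeChanged -= h,
//     () => someObject.Size);
```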

Cross-thread communication

Because a lot of messages have to be passed between threads, support for multithreading was important. Rx contains the ObserveOn extension, which accepts a scheduler tied to the dispatcher. Unfortunately, our app used a wrapper around the dispatcher that prevented us from using it directly, so we had to write our own extension. Luckily, it was pretty simple: one extension method and two classes (an observable and an observer), less than 100 lines of code in total.
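For comparison, standard Rx.NET usage without such a wrapper might look like the sketch below, assuming a WPF app with Rx's DispatcherScheduler available; sizeObservable and UpdateUi are hypothetical names:

```csharp
// Marshal OnNext calls onto the UI thread before touching UI state.
sizeObservable
    .ObserveOn(DispatcherScheduler.Current)
    .Subscribe(size => UpdateUi(size));
```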

Combining multiple observables

Many UI components depend on more than one value, so combining multiple observables into one was a must-have. Rx contains the CombineLatest method, which accepts several observables and a function that combines the latest values they provide:

Observable
    .CombineLatest(
        firstObservable,
        secondObservable,
        thirdObservable,
        (firstValue, secondValue, thirdValue) => firstValue * secondValue * thirdValue);

The result of combining observables is itself an observable, so no workarounds are needed to use it in place of a regular one.

Reducing workload

 Filtering events

Just like LINQ, Rx contains a Where extension that keeps only the items matching the given criteria. Most other Rx implementations call this operator filter.
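A short usage sketch, assuming the sizeObservable from the earlier example and a hypothetical Render method:

```csharp
// Keep only non-degenerate sizes; Render is a hypothetical consumer.
sizeObservable
    .Where(size => size.Width > 0 && size.Height > 0)
    .Subscribe(size => Render(size));
```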

 Removing duplicates

Reactive Extensions contain the Distinct method, which ensures that each value is delivered to the observer only once. In our experience, DistinctUntilChanged was more useful, because it removes only consecutive duplicates.
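The difference can be illustrated as follows; source and Update are hypothetical names:

```csharp
// For a source emitting 1, 1, 2, 2, 1:
//   Distinct             delivers 1, 2     (each value only once, ever)
//   DistinctUntilChanged delivers 1, 2, 1  (only consecutive repeats dropped)
source.DistinctUntilChanged().Subscribe(value => Update(value));
```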

 Debouncing

The Debounce extension (named Throttle in Rx.NET) skips the delivery of values that are emitted too often. It is useful when the consumer cannot keep up with the rate enforced by the source. Imagine a sensor that checks the coolant level in a car driving on a bumpy road. If the level hovers around the minimum safe level, the sensor would send constantly changing readings; without debouncing, the indicator would flicker, which is annoying from the user's perspective.
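A hedged sketch of the coolant scenario; coolantLevel and UpdateIndicator are hypothetical names:

```csharp
// Throttle is Rx.NET's name for the debounce operator: deliver a reading
// only after the level has been stable for one second, so the indicator
// doesn't flicker with every bump in the road.
coolantLevel
    .Throttle(TimeSpan.FromSeconds(1))
    .DistinctUntilChanged()
    .Subscribe(level => UpdateIndicator(level));
```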

Problems with unit testing

We are constantly trying to increase our unit test code coverage. We use mocking libraries when they speed up our work, and both Moq and NSubstitute return null for observable properties by default. This extends our test setup a bit, but it is not as painful as it looks.

Post-mortem debug

Similarly to LINQ or async methods, Rx makes call stacks extremely long. Sometimes the stack trace doesn't even clearly point to the place in our code where the crash occurred, so we keep that in mind.

We did not say goodbye to the event aggregator

The event aggregator still has its place in our project. There are even areas of the code where we use both: observable properties and messages sent via the aggregator.

It was a very instructive journey from directly attaching event handlers to using Reactive Extensions. We have gathered a lot of knowledge about the differences between the approaches, and we have learned to use the right tool for the job.

written by
Grape up Expert

3 questions to Adam Kozłowski, Software Engineer at Grape Up

1. Is it enough to know only one programming language?

That’s tricky, because it greatly depends on the goals you set for yourself. We tend to hear from other programmers all the time that you don’t need a college degree: you can learn everything yourself on YouTube and from online courses. Just spend a few months on courses and tutorials and voila – you’re a self-made programmer!

On the other hand, if you really want to be a programmer or a software engineer, college will not prepare you for this role properly, nor will it teach you how to make the best first impression at work. At university you will learn how to think like an engineer, and eventually you will become one with, hopefully, enough knowledge in multiple fields: not only different programming languages, but also statistics, compiler design, AI, digital signal processing, and tons of other stuff that you might never even use in the future. However, this kind of knowledge can also make you an invaluable asset on a project.

So if you don’t mind doing the same thing, similar kinds of projects in repetitive scenarios, one programming language is fine. But if you feel like exploring new things and approaches, learning new programming languages and frameworks (of course – learning by doing!) will definitely make you a better programmer and a much better engineer.

2. Is pair programming difficult?

Yes and no. It is definitely more exhausting than programming alone. You need to pay attention all the time and, depending on the knowledge gap between you and the people you pair with, sometimes think for the two of you. You also need to synchronize breaks, lunch time, and so on. Obviously this is not a problem – just an inconvenience if you’re a lone wolf.

For some people, especially introverts, it may be hard to leave their comfort zone and get comfortable spending the whole day working with just one person. To be honest, I have never seen this kind of problem lasting longer than one day.

However, there is no better way to learn something new than working with a person who has experience in a particular field. The value of this way of transferring knowledge outweighs all the disadvantages that you may encounter working in pairs. Even the less experienced partner can provide valuable input, thanks to which working in pairs rarely means getting stuck – two heads are better than one, even at googling. ;)

Don’t believe people who say that working in pairs cuts the team’s velocity in half. When properly executed, pair programming may effectively go as fast as working alone. As a result, code quality is better and the code contains fewer bugs in general. At some point, if both partners are experienced enough, there may even be no need for code review.

3. What can you learn from working on site with the client?

Humility. Working closely with the client makes you more aware of the caveats of their field of work and competencies, and of what they actually want and need. Not everything can easily be described in an email, and it is definitely easier to speak directly with the person of interest and get all the answers.

Meeting the people you work with on a daily basis and learning the specifics of their work is also extremely important and heavily affects future work. In general, it is also easier to skip the chain of command and push through necessary changes, but this has to be done with caution.

Additionally, it’s a great time when you meet new, interesting people you can learn from and, obviously, go out with to see the city, grab a beer, or just talk. Working on-site often means working abroad, so you automatically get to experience a new country and its culture.

written by
Adam Kozłowski

Capturing objects in closures: Why you’re doing it wrong? – Part 2

Choose your closure context wisely

In the first part of this article, we defined several simple principles of capturing objects in closures. According to these principles, the closure code associated with a particular instance of a certain type should be considered separately from the code associated either with the global scope or with the type itself. We also came to the conclusion that the pattern of “weakifying” self and “strongifying” it inside the closure should be limited to code which really depends on self, while the other code should be executed independently. But let’s take a closer look at such instance-dependent code. As the following shows, the situation is not so obvious.

func uploadFile(url: URL, completion: @escaping (Error?) -> Void) {
  // Getting file content data from URL
  //...
  // Creating file info
  let fileInfoID = self.databaseController.createRecord(data: fileInfo)
  self.remoteFileManager.uploadBinaryData(fileContentData) { [weak self] error in
    guard let strongSelf = self else {
      completion(UploadError.dataError)
      return
    }

    if error != nil {
      strongSelf.databaseController.removeRecord(recordID: fileInfoID)
      completion(error)
    } else {
      // Wait while server will make needed changes to file info
      strongSelf.startTrackingOnServer(recordID: fileInfoID,
                                       completion: completion)
    }
  }
}

The code above creates a record in the database that contains the file info and uploads the file to the server. If the upload is successful, the method notifies the server that changes to the file info should be made. Otherwise, in case of an error, we should remove the file info from the database. Both actions depend on self, so neither of them can be performed if the object referenced by self has been deallocated before the completion is called. Therefore, calling the completion at the beginning of the closure with the appropriate error seems reasonable in this case.

However, such an approach breaks the closure logic. If an error occurs but self was deallocated before the closure is called, we would leave behind the record about a file that hasn’t been uploaded. Hence, capturing a weak reference to self is not entirely correct here. But since we obviously cannot capture self as a strong reference without producing a retain cycle – what should be done instead?

Let’s try to separate the required actions from the optional ones. The object referenced by self may be deallocated, but we still have to remove the record from the database. With that said, we shouldn’t access the database through the self object, but rather use it separately:

func uploadFile(url: URL, completion: @escaping (Error?) -> Void) {
  // Getting file content data from URL
  //...
  // Creating file info
  let fileInfoID = self.databaseController.createRecord(data: fileInfo)
  self.remoteFileManager.uploadBinaryData(fileContentData) { [weak self, databaseController = self.databaseController] error in
    if error != nil {
      databaseController.removeRecord(recordID: fileInfoID)
      completion(error)
    } else if let strongSelf = self {
      // Wait while server will make needed changes to file info
      strongSelf.startTrackingOnServer(recordID: fileInfoID, completion: completion)
    } else {
      databaseController.removeRecord(recordID: fileInfoID)
      completion(UploadError.dataError)
    }
  }
}

Pay attention to the closure capture list: it is where we explicitly capture the databaseController property. This creates a separate local constant inside the closure, with the same name, referencing the same controller. Since we didn’t add any modifier to it, databaseController is captured by a strong reference. Because self is still a weak reference, there won’t be any retain cycle – which is exactly what we need. As a result, the code is now consistent: we remove the record from the database in case of an error, or in case the further action cannot be performed because self got deallocated (also treating this case as an error).

So, what is the key difference between this code and the previous one? Previously, we were treating self as the only source of our actions inside the closure. Because of that, the weak reference semantics of self forced all the actions to be optional.

By capturing an object as a weak reference we’re saying: “Hey, I don’t need to force this object to live until the closure is executed. It may be deallocated before, and that’s fine with me.” However, we forgot about one important thing: it’s not our real intention to make self optional in the closure. We had to use a weak self reference in order not to produce a retain cycle, while some of our actions are required (removing the redundant file info from the database in case of an error).

Based on this example, we can draw some important conclusions. Even if an object is associated with self (is its property), we should not treat self as the root object from which we take all other objects inside the closure to perform calls on them. Instead, the properties of the self object may be captured independently when needed.

Let’s take a look at a more generic, yet clear example.

func uploadFile(url: URL, completion: @escaping (Error?) -> Void) {
  // Some preparations
  // ...
  self.someAsyncAction(parameter) { [weak self] in
    guard let strongSelf = self else {
      return
    }

    // ...
    // These calls should always be performed
    strongSelf.someObject.requiredToBeCalled()
    strongSelf.someObject.requiredValue = someValue

    // While these ones make sense only if the self object still exists
    strongSelf.otherObject.mayBeCalled()
    strongSelf.otherObject.someOptionalValue = someOtherValue

    // currentItem represents the selected object that is required to be
    // updated on closure call. Selection may change several times before
    // our asynchronous action completes
    strongSelf.anotherObject.currentItem.someProperty.requiredUpdate()
  }
}

According to what we’ve just learned, self should not be used this way for all calls. Therefore, let’s make some corrections to our code. In this example some calls are required and some are optional. We can safely use self for all the optional calls. For each required one we should determine which object from the call chain (of the form strongSelf.object0.object1...objectN.action()) needs to be captured by a strong reference. For the first two calls such a principal object is obviously the someObject property: the first call invokes the requiredToBeCalled() method on it, and the second assigns a value to its requiredValue property. Consequently, instead of getting it as a property of self, we should capture someObject in the closure directly. The next two lines manipulate the otherObject property.

As seen in our example, these calls are optional, meaning they may be omitted if the object pointed to by self is deallocated (they don’t make sense without self). The last line is a bit trickier, as it has several properties in the call chain. Since the object on which the call is performed is represented by someProperty, we may want to capture it directly. However, the actual value returned by anotherObject.currentItem may (by definition) change. That is, the call to self.anotherObject.currentItem inside the closure may return a different object from the one it was returning before someAsyncAction() was called.

Thus, if we captured someProperty, we might end up using an out-of-date object returned by some old currentItem, while the actual one would remain untouched. For the same reason, we should not capture the currentItem object itself. So the right choice here is the anotherObject property, which is the source of the actual currentItem object. After rewriting the example according to our corrections, we receive the following:

func uploadFile(url: URL, completion: @escaping (Error?) -> Void) {
  // Some preparations
  // ...
  self.someAsyncAction(parameter) { [weak self, someObject = self.someObject, anotherObject = self.anotherObject] in
    // ...
    // These calls should always be performed
    someObject.requiredToBeCalled()
    someObject.requiredValue = someValue

    // While these ones make sense only if the self object still exists
    if let strongSelf = self {
      strongSelf.otherObject.mayBeCalled()
      strongSelf.otherObject.someOptionalValue = someOtherValue
    }

    // currentItem represents the selected object that is required to be
    // updated on closure call. Selection may change several times before
    // our asynchronous action completes
    anotherObject.currentItem.someProperty.requiredUpdate()
  }
}

In general, when we have a call chain of the form self.object0.object1...objectN.action(), to determine which object from the chain should be captured, we should find an objectK that conforms to the following rule:

There are two ways of calling our action() inside the closure:

 1. Capturing self and using it as the root (source) object (the full call chain).
 2. Capturing objectK and using it in the closure directly (a call subchain).

Both ways should have exactly the same effect.

That is, if we were to substitute the call chain self.object0.object1...objectK...objectN.action() (capturing self) in the closure with the subchain objectK...objectN.action() (capturing the object pointed to by objectK at the moment of the closure definition), the effect of the call would be the same. If several objects conform to this rule, it’s better to choose the one that is closest to the action (the method call or property change); this avoids redundant dependencies in the closure. For example, if in the call chain self.object0.object1.object2.object3.action() the objects object0, object1, and object2 all conform to the rule, it’s better to use object2.object3.action() in the closure rather than object0.object1.object2.object3.action(), since a longer chain means more semantic dependencies: the source of our action would be object0, from which we get object1 and so on, instead of using object2 directly.

Bring it all together

Let’s now summarize our knowledge about the closure context. In cases where retain cycles may occur, we should be very careful with what we capture inside the closure. We should definitely not apply the “weakify”-“strongify” pattern in all cases as a rule of thumb. There is no golden rule here. Instead, we have a set of principles for writing closure code that we should follow, not only to resolve the retain cycle problem, but also to keep the closure implementation consistent. These are the following:

 1. Determine the instance that can cause a retain cycle (self, or any other object whose capture can cause one).

 2. The code that is not related to this particular instance should be considered separately. Such code may perform actions on other objects or even on types (including the type of our instance). Therefore, it should be executed regardless of whether the instance still exists when the closure runs.

 3. For the code which relates to the instance, we should define which part of it is optional (may be omitted if the instance is deallocated) and which part is required (should be executed regardless of the instance’s existence).

 4. For the optional code we may apply the “weakify”-“strongify” pattern to the instance: we try to obtain a strong reference to the instance inside the closure using the captured weak reference, and perform the optional actions only if it still exists.

 5. For the required code we cannot rely on a reference to the instance. Instead, for each call chain of the form instance.property0.property1...propertyN.requiredAction(), we need to decide which property to use for capturing the corresponding object in the closure. In most cases this is simple; for instance, in the example above, for the self.someObject.requiredToBeCalled() call we chose someObject to be captured.

Please note that the proposed solution isn’t limited to capturing self in closures. The principles listed above may be applied to any object that may cause a retain cycle inside a closure.

But let’s point out that we’re not defining strict rules – there are none when it comes to the closure context. What we’ve done here is deduce some principles based on common use cases of closures. There may be other, much more complicated examples in real code. Sometimes it’s really challenging to choose which objects to retain inside the closure, especially when refactoring existing code. The main goal of this article is to give useful tips on dealing with the closure context: how to approach choosing the objects that should be used inside the closure and the correct reference semantics for them.

written by
Andrii Biehunov