Thinking out loud

Where we share the insights, questions, and observations that shape our approach.

Our experts

Highlights from KubeCon + CloudNativeCon Europe 2019 – Kubernetes, Service Mesh Interface, and Cloudboostr

This year’s KubeCon was a great occasion to sum up five years of Kubernetes and outline its future. It was also a great place to make important announcements, and we got them. Our team spent those days not only learning about new features and networking with this amazing community, but also talking about how Cloudboostr can impact the adoption of cloud-native technology.

The Grape Up team joined 7,700 attendees, top enterprises, media, and analysts in Barcelona, Spain at KubeCon + CloudNativeCon Europe 2019 to discuss what’s new in Kubernetes, open source, and cloud-native, learn from the most inspiring people in the industry, network with peers from different projects, and work together on developing the community gathered around the Cloud Native Computing Foundation.

KubeCon + CloudNativeCon Europe 2019: Kubernetes is thriving

This year’s KubeCon provided us with many promising announcements. From the very first keynote to the last workshop, there was a strong focus on leading technologies, mainly Kubernetes and its improvements. Dan Kohn, Executive Director at CNCF, in his opening keynote presented Kubernetes’ place in the cloud-native environment and explained why Kubernetes, like many technologies and applications before it, is winning the race of simultaneous invention.

Dan Kohn used engaging storytelling to show that, for ages, people have been working on similar ideas at the same time, building on the work done before them. He stressed that there would be no Kubernetes without the technologies that built its foundations. He also gave us a glimpse of what, in his opinion, makes Kubernetes stand out from the crowd:

  • It works really well!
  • Vendor-Neutral Open Source and Certified Partners.
  • The People that develop and promote it.

The end of the opening keynote was strong: all the members of the community gathered around Kubernetes and CNCF have an impact on the technology’s improvements, as Kubernetes is a foundation on which many new technologies arise.

Hot news: Service Mesh Interface

Microsoft stole the show by announcing the launch of the Service Mesh Interface, an open project that defines a set of common APIs providing developers with interoperability across different service mesh technologies.

The Service Mesh Interface includes:

  • A standard interface for meshes on Kubernetes.
  • A basic feature set for the most common mesh use cases.
  • Flexibility to support new mesh capabilities over time.
  • Space for the ecosystem to innovate with mesh technology.

This announcement from Microsoft sparked many discussions during the conference. As service mesh is a technology at an early stage of its development, an environment designed for growth should shape its further evolution. SMI is supposed to provide interoperability that helps the emerging ecosystem of tools integrate with existing mesh providers and, instead of doing it individually, gain cross-mesh capabilities through SMI. We’re looking forward to seeing how it will affect developers’ jobs and help solve customer problems in the near future.
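To give a feel for the kind of API SMI standardizes, here is a sketch of a TrafficSplit resource that shifts a fraction of traffic to a new service version, independently of the underlying mesh. Field names follow the early v1alpha1 draft of the spec and may differ in later revisions; the service names are placeholders.

```yaml
# Hypothetical canary rollout: route 10% of traffic addressed to the
# "checkout" root service to a new version, expressed mesh-independently.
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: checkout-canary
spec:
  service: checkout          # root service that clients address
  backends:
  - service: checkout-v1     # current version keeps 90% of traffic
    weight: 90
  - service: checkout-v2     # canary receives 10%
    weight: 10
```

The point of SMI is that the same manifest works whether Istio, Linkerd, or another conforming mesh sits underneath.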

Cloudboostr – an enterprise-grade Kubernetes

What makes KubeCon + CloudNativeCon a special event is CNCF’s incredible ability to engage so many people, representing multiple tech and business areas, to collaborate on developing a brighter future for the cloud-native ecosystem. As Dan Kohn said at the beginning of the conference, Kubernetes and parts of the cloud-native landscape build the foundation for the technologies of the future. We at Grape Up also want to participate in this process of developing cloud-native technology and helping various companies implement Kubernetes.

This is why KubeCon + CloudNativeCon was a great occasion to talk with end users and developers about their adoption of cloud-native tools. They had a chance to learn more about Cloudboostr, our cloud-native software stack built with Kubernetes at its core, which allows companies to stand up a complete cloud-native environment in a couple of hours. Our experts spent those days in Barcelona discussing the evolution of Kubernetes, its business use cases, and ways to get the most out of its capabilities. Our team values these kinds of events the most: they are crucial for our product development and for a better understanding of people’s challenges and business needs.

Seeing companies highly interested in Cloudboostr’s capabilities, we are encouraged to work even harder on new improvements. And this is the key to Kubernetes’ success: by building a community that inspires its members to grow and connecting them to accelerate the development of associated services, CNCF has created a self-reinforcing mechanism that helps Kubernetes thrive and acquire new users. This strategy makes Kubernetes an important part of technology development as a whole, which becomes even more impressive every year.

written by
Szymon Kozak
Software development

Why nearshoring may be the best choice for your software development

Adopting the latest technologies forces executives to decide whether to build an in-house team or outsource demanding processes and hire an external team. And as outsourced projects become more sophisticated, the leaders responsible for these decisions take into account more factors than just cost savings. Here comes nearshoring.

A disruptive economy, the fast-changing landscape of cutting-edge technologies, and extremely demanding customers: regardless of their established position on the market, today’s most powerful enterprises need help to retain a once-gained competitive advantage. Most of the world’s largest companies are undergoing radical changes that focus on adopting the latest technology and game-changing approaches to company culture.

To embrace digital transformation and get the most out of it, leaders in their fields (automotive, telco, insurance, banking, etc.) utilize the knowledge and experience of external teams. As this collaboration gains recognition, it also takes more sophisticated forms. For decades, the biggest enterprises have been opening branches in different countries or delegating parts of their processes to specialized teams outside the organization; this is how outsourcing and offshoring were born.

And while this works perfectly for many business fields, companies that implement the most comprehensive technologies, rebuild their core businesses, and try to adopt promising approaches to software delivery need a more sophisticated solution.

What is nearshoring?

Delegating tasks to external teams brings many benefits but also carries some real risks: cultural differences in communication between teams, challenges in managing processes remotely, disruptions in transferring knowledge, and many more. After years of good and bad experiences, some enterprises have mastered the most effective way to delegate work: nearshoring.

Nearshoring is often described as the outsourcing of services, especially regarding the newest technologies, to companies in nearby countries. In practice, it means that an enterprise from Germany hires a company from Poland to develop a particular project, like building an app or implementing software. Nearshoring is often used by brands expanding their services into new yet advanced business fields, where building an in-house team responsible for a given area would be too expensive and challenging.

Why do enterprises prefer nearshoring over offshoring?

While nearshoring and offshoring share some common benefits (cost savings, tax benefits, finding skilled professionals in given niches), the former is supposed to answer the challenges described above. Companies that complain about their experiences with offshoring too often make the same mistakes. They decide to move their processes to far-distant countries regardless of cultural differences and working styles. They focus on things that are easier to measure, like financial results, and don’t take less obvious factors into consideration.

Backed by those experiences, enterprises from various industries have started to collaborate with teams of professionals who have more in common with their employees: geographically, by living in the same part of the world, often sharing a border; and culturally, by belonging to a similar cultural circle, sharing a common working culture, and having a comparable command of the project language (English first!).

English proficiency in the adult population among European countries (English as a second language). Source: europelanguagejobs.com

According to many leaders responsible for workforce management, when deciding on hiring a company that is to help you improve your business, especially if you’re trying to implement complex technology, you should pay at least as much attention to communication and soft skills as to the know-how and experience in working on similar projects.

Countries with the best developers. Source: hackerrank.com

Where do enterprises from Western and Northern Europe look for partners?

Agile teams from Central Europe, consisting of experts in transforming businesses, implementing DevOps culture, and enabling cloud-native technologies, are gaining strong recognition among established companies from Germany, France, the UK, Norway, and Italy. Professionals in Poland, Czechia, or Hungary are known for their skills, language proficiency, and working culture. And it has never been more difficult to find real differences in the ways people from these parts of Europe live.

All these things make collaboration smoother and easier to manage. Partnering with companies from the same time zone can also save communication and travel costs, and keeps teams available during similar working hours.

Taking into account flexibility, language skills, technical knowledge, experience in various international projects, lower operational costs, and cultural similarities to the leading European countries, companies from countries like Poland have become natural nearshoring hubs.

Collaboration with top enterprises from a nearshoring partner’s perspective

For the last couple of years, Grape Up has been working as a nearshoring partner with some of the most recognized brands in various industries, like leading automakers from Germany and large telcos from Italy and Switzerland. These experiences have helped our team develop the soft skills needed to get the most out of the cooperation, both for the hiring enterprises and for our employees, who can now master their expertise working on demanding projects for the most competitive businesses.

We have to admit that our competitive advantage may seem unfair. We’ve gathered a large team of experienced engineers familiar with the latest cloud-native technologies, open source tools, and DevOps and Agile approaches. They feel at home in an international environment, speak fluent English, and are good at adapting to a new working style. We have two R&D centers in Poland and a few offices across Europe. We use experience amassed through years of working with companies willing to be early adopters of cutting-edge technologies and innovative methodologies. We’re active contributors to cloud-native and open source communities, and we attend top conferences and industry events, which gives us direct access to knowledge and innovative ideas. Being ahead of the competitive crowd allows us to focus on the most promising projects.

If your enterprise is working on digital transformation, trying to implement DevOps or adopt cloud-native technologies, and you need support or consultancy, don’t hesitate to reach out. We are responsible for numerous successful migrations to the cloud, enabling cloud platforms, transforming legacy software into cutting-edge applications, and tackling business challenges that at first sight might seem unsolvable.

written by
Szymon Kozak
Software development
Our experts

What to know before introducing pair programming at your company?

Pair programming has become a popular way of developing the newest products and encouraging team members to share knowledge and grow together. Many of the world’s top enterprises adopt this innovative technique, some with success and some with less expected results.

Before a company embraces pair programming, its leaders and developers should learn the basics of this approach. As a company that not only uses this practice on a daily basis but also implements pair programming in our customers’ engineering teams, we’ve answered the most commonly asked questions about this technique: its pros and cons, how to prepare a workstation for developers working in pairs, and whether pair programming is good for everyone.

What is pair programming and what should it look like?

You can find various definitions, but most of them agree that pair programming is an agile software development technique in which two developers work together on the same block of code, using one workstation. In these pairs, one programmer becomes the “Driver” and the other the “Navigator”. While they work together, the Driver controls the computer and writes code, whereas the Navigator reviews the written code, follows the colleague’s work, and works on problems faced during implementation. When using the pair programming technique, programmers should discuss ongoing work, search for effective solutions, and support each other. Importantly, the roles in a pairing team should rotate during the workday, as they are not assigned permanently.

How to prepare a workstation?

The most important thing when designing a workstation is making it comfortable for both programmers. A desk for a pair should be considerably larger than a desk for an employee working alone: when two people work together for eight hours a day, both need their own dedicated space. A workstation for a coding duo requires at least two monitors displaying the same thing, and a keyboard and mouse for each developer.

Teams that work remotely have proven that pair programming can be done effectively even when the collaborating developers are at a distance. It’s important to equip these developers with gear that enables productive communication; high-quality headphones are a must-have.

Here is a list of useful tools and resources for pair programming:

  • Cloud IDEs: AWS Cloud9, Codenvy,
  • Communication and screen sharing tools: Skype, Slack, Zoom, TeamViewer, Stride,
  • Live Code Share Extensions: Atom Teletype/VSCode Live Share,
  • GIT Scripts: pivotal-legacy/git_scripts.
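Several of the tools above exist to attribute work to both members of a pair. Even without dedicated scripts, plain Git can record both authors via the “Co-authored-by” commit trailer (recognized by GitHub and GitLab). A minimal, self-contained sketch with placeholder names and emails:

```shell
#!/bin/sh
set -e

# Create a throwaway repository so the example is self-contained.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.name "Driver Dev"
git config user.email "driver@example.com"

# The Driver commits; the Navigator is credited via a commit trailer.
echo "hello" > app.txt
git add app.txt
git commit -q -m "Add greeting" \
  -m "Co-authored-by: Navigator Dev <navigator@example.com>"

# The trailer is now part of the commit message body.
git log -1 --format=%B
```

Tools like pivotal-legacy/git_scripts automate adding such trailers so the pair does not have to type them by hand.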

Advantages and disadvantages of pair programming

While pair programming delivers real value for companies that implement this technique, the leaders responsible for adopting these practices in their teams have to be aware of its risks. Here are some of the most valuable pros of pair programming:

  • Higher quality of designed solutions - more thorough and meaningful development of given tasks.
  • Different views and perspectives on the same problem.
  • Reduced risk of failure or making a mistake.
  • Time and cost savings - software is delivered faster, which decreases the cost of development.
  • Agile and productive knowledge transfer in projects.
  • Breaking down silos in projects - avoiding situations where only one person owns key knowledge.
  • Building stronger relationships among team members.

And here come some cons every company has to take into account:

  • Distractions to individual comfort zones and an independent working style.
  • A personalized working environment - getting used to a non-standard way of doing the job.
  • Common discomfort with this less obvious, innovative way of programming among developers used to working on their own.
  • A more demanding and exhausting way of working.

All of these disadvantages may result in rising discomfort and drops in productivity, which sabotage the whole idea of pair programming and the potential wins it could provide.

Was every developer born for pair programming?

Pair programming may not be as easy as it is often presented. This technique requires developers to be communicative, conscientious, and focused on the subject matter. For many coding professionals, the hardest part of programming in pairs is the necessity to adjust to the other person: her/his way of working, pace, and individual habits.

People who value working in their own manner and their own comfort, who need a focused and quiet environment, or who simply avoid too much engagement in communication, may find pair programming too demanding.

My experiences with PP

My experience with pair programming started two years ago. The theory seemed reasonable, but I was a little skeptical and, to be honest, I didn’t believe in the promised advantages. To me, it seemed rather a waste of time and resources for two developers to work on the same problem at the same time.

At first, it was hard. I had to change my entire working routine and adjust to a colleague I was paired with. Actually, we both had to modify our working style, and we all know how difficult it is to change habits.

Adjusting to the new technique of doing my job affected my time after work. I was more tired, but at the same time I felt that we had done a lot of good-quality work. And step by step, it became easier as we grew familiar with the practice. What was really encouraging, the results of our work and the amount of new knowledge were insane.

Working at that level of focus is demanding and costs a lot of energy. So taking a break for two or three minutes and making small talk about topics unrelated to the problems you’re currently trying to solve may result in a more productive attitude during the day. It’s also worth noting that direct collaboration with one of my colleagues gave me the opportunity to get to know that person better, and having a good friend at work also improved my comfort.

After a few weeks, I noticed all those advantages of pair programming that previously weren’t obvious to me. We improved the quality of our work, and our solutions were well thought out. As a result, we minimized the number of mistakes, which significantly enriched our outcome. It then became clear to me that pair programming is not just a cool theory but an effective way of developing new solutions and a useful tool for learning new things. And while it’s a challenging technique, maybe not for everyone, it’s definitely worth a try. Anyone can try it on his/her own and, in case of any trouble, go back to the previous way of doing the job.

To sum up all my experiences with pair programming, I have to admit that this technique definitely works and, except for the simplest tasks, it’s worth using every day.

written by
Łukasz Fiedoruk
Software development
Data platforms

Be lazy – do not reinvent the cloud wheel from scratch

Keeping up with the newest technologies is demanding. A lot of companies tend to do it only once in a while, and that is totally understandable. It is hard to follow the market and discover the perfect moment to choose a cloud technology that will be valuable and cost-effective for years to come.

Cloud is not only a technology; it also determines how applications and systems are designed, built, and maintained. For someone currently engaged in a digital transformation, that is a huge deal: it’s like upgrading your rickshaw to the newest Mustang. All of the development tasks seem so much easier after adapting to cloud and DevOps practices.

A common mistake when your processes and software tools are outdated is to reinvent the new environment by yourself, especially when sophisticated platforms or infrastructures come into the picture. Becoming an expert in cloud technologies is a perfect example. It requires a lot of time and resources to master the best tools and to make them work together the way we want them to. That time should instead be spent on what drives the company’s business and what the customer cares about most: developing the product that will run in the cloud.

At Grape Up, we follow a technology-agnostic approach. We choose tools and technologies tailored to every specific customer and project. While working with various companies on digital transformation, we helped several teams leverage cloud-native technologies and adopt a DevOps approach to deliver software faster, better, and safer. And although every case was different, we identified a visible pattern. Our team discovered strong market demand for a cloud-native platform based on open source solutions. We also noticed that in most cases platform maintenance operations were outsourced. Again, building know-how is a hard process, so why not let someone with the right experience lead the way?

This is how Cloudboostr was born: a complete solution to run any workload in any cloud, all based on available open source tools that are well known by the community and, most importantly, widely used and maintained. In the beginning, we created a kind of reference architecture to accelerate the digital transformation process. It was used as our internal boilerplate, so as not to reinvent the whole thing with every customer, but rather to reuse something based on experience from previous projects. Along the way, what used to be our template matured, and the time came to make it an external platform, available to other companies.

How does the Cloudboostr platform work?

Cloudboostr’s technology stack incorporates a number of leading open source technologies proven to work well together. Cloudboostr’s Cloud Engine is built with Kubernetes and Cloud Foundry, making application deployment easy; together, both solutions cover a wide spectrum of application use cases. Terraform, BOSH, and Concourse are used to automate the platform’s management and operations, and also to track who, when, and why applied a particular change. The built-in telemetry toolkit, consisting of Prometheus, Grafana, and the ELK stack, allows operators to monitor the whole platform, investigate any issues, and identify areas for improvement.

The main abstraction layer of the platform is called Ops Control, a control panel for the whole platform. Here, all of the environments, as well as shared resources like management, monitoring, or alerting services, are created and configured. Every environment can be created with Cloud Foundry, Kubernetes, or both. Shared resources help monitor and maintain all of the environments and ensure proper workflows. From the operator’s perspective, the main points of interaction are Concourse, with all of the pipelines, and a CLI for more advanced commands, plus the telemetry dashboards, with all sorts of graphical charts and metrics, where any concerns are clearly visible.
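To give a feel for what interacting with Concourse looks like, here is a sketch of a minimal pipeline definition. The resource names, repository URL, and task file path are illustrative placeholders, not part of Cloudboostr itself.

```yaml
# Illustrative Concourse pipeline: pull sources and run tests on each commit.
resources:
- name: app-source
  type: git
  source:
    uri: https://github.com/example/app.git   # placeholder repository
    branch: main

jobs:
- name: test
  plan:
  - get: app-source
    trigger: true                  # run automatically on every new commit
  - task: run-tests
    file: app-source/ci/test.yml   # task definition versioned in the repo
```

A pipeline like this would be registered with `fly set-pipeline` and then appears on the Concourse dashboard, where every build and change is tracked.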

Cloudboostr as a platform takes all of its third-party components under its wing. This means that new platform releases incorporate not only internal features and improvements but also upgrades of its elements when available. From the platform user’s perspective, this is convenient: there is no need to worry about any upgrades except one, the platform itself.

From a production perspective, it is crucial to keep your platform always up and running. Cloudboostr uses HA mode by default and leverages the cloud’s multi-AZ capabilities. It also automatically backs up and restores individual components of the platform, along with their state and data, allowing you to easily recreate and restore a corrupted or failed environment.

It is also important that Cloudboostr comes pre-configured, so if you’re starting your journey with the cloud, you can install the platform and be ready to deploy your applications very quickly. The initial cost of getting the platform working is fairly low in terms of required know-how, and that’s a huge benefit.

How can Cloudboostr be tested?

Obviously, a platform is a big deal, and no one wants to buy a pig in a poke, especially when long-term services are in the picture. Grape Up enables customers to try out the platform before making a final decision. It is also understandable that most companies need more time to get to know the technology behind the platform. That’s why the trial is run in a dojo style. For a few weeks, the Grape Up crew settles in at the customer’s location, installs the out-of-the-box product, teaches the team how to use it, and guides the migration of the first application to the platform. Support provided by experienced platform engineers helps the customer understand the full capabilities of the product and how to use it in particular scenarios.

After the first period, there is another time frame for unassisted tests, where the client’s operations team can play with the platform, run various tests, or migrate other applications to see how they behave in the new environment. During that time, Grape Up platform engineers are still available to help and guide.

What about costs?

All the technologies mentioned are open source, and anyone who would like to build this platform on their own can do so. So why should anyone pay for it? The answer is: no one needs to pay for a Cloudboostr license; it’s totally free.

To use the Cloudboostr platform, though, you need a support service subscription plan. It’s obvious that complex solutions, such as cloud platforms, need to be maintained by someone. The majority of the market outsources these services, which is especially convenient if you use software organized as a product. Again, it’s not worth gaining the experience by yourself and spending weeks or months on it while your competitors are moving forward, knowing that there is already a fair share of the market that knows the technical details you’re looking for. However, if a team feels they already have all of the knowledge needed to maintain the platform themselves, that’s fine too; they can simply choose a minimal support plan.

Conclusion

The takeaway from this article is: if you want to go cloud, do not reinvent the cloud wheel from scratch. Do the things that you were meant to do: focus on building your applications and delivering true value to your customers, and let the experts help you run your applications in the cloud and maintain the cloud environment. Boost your cloud journey with Cloudboostr!

written by
Grape up Expert
Our experts

Cloud Foundry Summit 2019 – continuously building the future

Cloud Foundry Summit 2019 in Philadelphia was the first event on our list of conferences this year. It was an excellent season opening and another sign that the Cloud Foundry community is developing well. “Building the future”, this year’s motto, is a good metaphor for the current situation. Cloud Foundry is evolving, and that’s perfectly fine.

A quote from Abby Kearns, Executive Director of the Cloud Foundry Foundation, summarizes what’s happening these days:

“We’re all kayakers now, navigating the rapids of change. The quickest learners will be the biggest winners.”

Abby Kearns started the summit with this powerful statement, addressing the fact that influences the way we develop our companies: technology is evolving at a rapid pace. The cloud-native landscape is continuously growing as new solutions and ideas are introduced on a regular basis, often easy to use but not mature enough to run in production without taking on some risk. How do you choose the right technologies? When do you adopt them? Should a whole application or a single component be rewritten? As Abby said, the quickest learners will be the biggest winners.

During the event, we had a chance to see how many of those tools have evolved into more mature and more customer-friendly apps. From the end-user perspective, it was promising to see them working together, as it gives a glimpse of how it may look in the future.

Cloud Foundry loves Kubernetes

One of the hottest topics in the corridors was, for sure, the Eirini project. It is a perfect example of how software should evolve; and yes, the Cloud Foundry Foundation isn’t too proud to admit that Kubernetes is better than Diego/Garden in particular cases. This is how one open source project should work with another.

A year ago, one could ask whether Kubernetes would push Cloud Foundry into the shadows. The question was wrong: how could we compare apples and oranges? To simplify things, we can call Cloud Foundry an application PaaS, while Kubernetes can be described as a container PaaS. This year we’re discussing how they can work together. As this seems to be the right question, the Eirini project may be the right answer.

We can’t wait to hear more about the Cloud Foundry and Kubernetes collaboration, especially given that the project has been announced to pass core functional tests and is ready for early adopters. Learn more about Project Eirini.

Interoperability

One of the keywords announcing the conference was interoperability, and Eirini is a good example of it. The Cloud Foundry project continues to integrate with other open source projects to provide more capabilities for users. The platform’s maturity allows it to incorporate new tools easily and makes the whole process user-friendly.

It is worth noting that all of these tools are already used (or will be soon) in ongoing projects, so this is not just state of the art for its own sake. It is, for sure, the hard work of the community experts that helps Cloud Foundry stay ahead of the curve.

The Comcast story

Philadelphia is home to a large Cloud Foundry member: Comcast. The company’s story of successful digital transformation is a practical guideline for how it should look. There was no case study about it in this year’s agenda, but it is worth mentioning and congratulating the Comcast crew anyway. This technology voyage and company restructuring (breaking silos, to be explicit) should be an inspiration to all of us.

Cloudboostr

It wouldn’t be an honest review without a personal summary of how we feel about our participation in the summit. As an IT consulting company, we were more than excited to be a Silver Sponsor of a conference that gathers hundreds, if not thousands, of cloud experts in one place.

Besides an engaging business case study of helping an established company go through a digital transformation, we also saw huge interest in our cloud platform, Cloudboostr. A lot of companies want to automate their deployment and delivery tools. Becoming a cloud expert is a long and demanding process for firms that need to focus on running their business operations. The majority of the world’s top enterprises collaborate with external teams, and some of them use proven solutions that help them adjust to the fast-changing environment. A complete cloud platform (not only a PaaS) meets market needs perfectly, and we were happy to tell our new friends about it.

As the "kayakers" we know that it's a long way to go and we'll be happy to share our knowledge and experience in order to help our customers move forward.

written by
Grape up Expert
Legacy modernization
Software development

5 Tips on how to deal with common problems when running large production environments

Working as a platform operator with cloud-native technologies, providing L2 technical support, and participating in CF installations gives a unique opportunity to observe how different companies implement new technologies and how they deal with running large production environments. Among the various bad experiences, imperfect ideas, and reprehensible habits related to running and maintaining cloud infrastructures, those listed below generate the most complicated problems.

Bad practices often occur in production CF infrastructures, but these guidelines should help everyone who runs or uses any production-ready workload.

Neglected capacity planning

Let’s start with this: you have to be aware that you will eventually run out of resources, so you should plan how to scale up. If you run on-premises software, you should consider the hardware and virtualization layer’s requirements. Proper sizing of the availability zones will save you many problems.

On top of IaaS there is usually a PaaS or a container orchestrator. The key to success here is to tune all the limits, quotas, and other configurations (like application scaling rules) so the microservices never exhaust the available resources, even under high load.

Both hardware and virtualized capacity planning require a buffer: you need to be prepared for issues, maintenance, and infrastructure changes. There is no single best configuration, as it always depends on many factors, but the buffer is always worth taking into account.

Capacity and resources have to be monitored. A good monitoring tool with decent alerting rules will help you predict possible problems and react quickly if anything bad happens to your infrastructure.
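As an illustration of the kind of rule a monitoring tool encodes, here is a minimal shell sketch of a soft-threshold check. The 80% threshold, the function name, and the messages are made up for the example; a real setup would live in your monitoring stack, not a script.

```shell
# Minimal sketch of an alerting rule: warn when usage crosses a soft
# threshold, leaving a buffer before the hard capacity limit.
check_capacity() {
  used=$1       # current usage, in percent
  threshold=$2  # soft limit, in percent
  if [ "$used" -ge "$threshold" ]; then
    echo "WARN: ${used}% used (soft limit ${threshold}%)"
  else
    echo "OK: ${used}% used"
  fi
}

# Example: check current disk usage of / against an 80% soft limit
used_percent=$(df -P / | awk 'NR==2 { gsub("%", "", $5); print $5 }')
check_capacity "$used_percent" 80
```

The point is the buffer: the soft limit fires while there is still headroom to react.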

Poor or no CI/CD

If you want to maintain any piece of software, don’t forget how valuable automation is. People often give up on CI/CD implementation because of a deadline or tasks deemed more important. In most cases, it doesn’t end well.

It's hard to build, test, and deploy software without automation. A manual process is highly exposed to the risk of human error. Apart from that, it is almost impossible to keep track of deployed software (versions, updates, hotfixes, security patches, etc.) in large production environments. Sometimes you have to maintain CF platforms hosting 1K+ applications; consider how problematic the migration process would be if there were a business decision to switch to a different solution.
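One cheap piece of that automation is recording every deployment in a machine-readable log, so versions and patches stay auditable without anyone's memory. The helper and field names below are illustrative, not a real tool:

```shell
# Sketch: append one line per deployment so versions, hotfixes and
# security patches can be audited later.
# Usage: record_deploy <app> <version> <environment>
record_deploy() {
  printf '%s app=%s version=%s env=%s\n' \
    "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$1" "$2" "$3" >> deploy.log
}

record_deploy billing-service 1.4.2 production
record_deploy billing-service 1.4.3 production
tail -n 1 deploy.log   # the most recent deployment
```

In practice a CI/CD pipeline would emit this record automatically on every deploy.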

For operators maintaining the infrastructures, platforms, and services used by developers, it’s critical to keep everything up to date and take care of security patches and configuration changes. It is impossible to handle this manually with minimal or zero downtime of the services. That is why automated pipelines are so important, and you should never give up on implementing them in the first place.

Poor or no backup/restore procedures

Backup/restore is another important process that people often put on the back burner. You may think that your applications are safe because your IaaS offers a highly available environment or your containers have an auto-healing function. This is not true. Any disaster can happen, and in order to recover quickly, you have to create well-defined backup and restore procedures that work. That’s not all: the procedures have to be tested periodically. You need to be sure that backup/restore works, since the process may depend on external services that might have changed or simply break.
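A tested restore is the only proof a backup works. A minimal sketch of the idea, using throwaway directories and tar purely for illustration (real procedures would cover databases, configuration, and platform state):

```shell
# Sketch: create a backup, then verify it by actually restoring it
# into a scratch directory and comparing with the source.
set -e
src=$(mktemp -d); backups=$(mktemp -d); restore=$(mktemp -d)
echo "important data" > "$src/data.txt"

archive="$backups/backup-$(date +%Y%m%d).tar.gz"
tar -czf "$archive" -C "$src" .

# The restore test: without this step the backup is only a guess.
tar -xzf "$archive" -C "$restore"
diff "$src/data.txt" "$restore/data.txt" && echo "backup verified"
```

Running the restore step on a schedule is what turns a backup script into a backup procedure.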

No periodic updates

All software has to be updated regularly to keep it secure. It is also much safer to perform frequent minor updates with little chance of failure or downtime than to do ‘big jumps’. Major updates introduce higher risk, and it is hard to catch up with versions, especially if there is no automation in place.

You may see cloud infrastructures that were installed once and never upgraded, which generates a lot of issues for platform operators (users can’t see any difference). It is not perceived as a problem as long as everything works correctly. But after some time, people may start escalating issues related to the versions of the services, and by then it is too late to upgrade smoothly. It becomes a big spider’s web of dependencies: it may take weeks to plan the upgrade process and months to execute it.

Flawed architecture

Defective architecture generates serious problems. Often developers are not aware of the issue until it shows up in production. After that, it’s really hard to admit the architecture needs to be changed, and people often try to treat the symptom instead of fixing the cause of the problem.

Let’s take a real-life example we often face. You may be receiving Prometheus alerts saying that the ELK stack is overloaded. After investigating the issue, it may turn out that the microservices are so verbose that they generate thousands of log messages per second. What if you raise the possible architecture problem, but nobody cares? As a result, you’ll have to scale ELK. In such cases, you may waste hundreds of CPUs and terabytes of memory and storage, paying just to store 90% useless data and maybe 10% valuable information. This is a simple way to put yourself in a situation without a way out.

Conclusion

Following these guidelines will not always be easy. Sometimes the people responsible for making decisions are simply not aware of the consequences of their actions. The role of every technically skilled person on a project is to spread knowledge and make people aware of what may happen if they ignore these basic rules. Don’t step back if you encounter such practices in the future. Be an example for others and drive change; it’s always worth trying.

written by
Tomasz Ćwięk
Our experts

How to overcome main challenges when collaborating with teams from different countries?

In one of his latest articles, Filip Ryba describes the specificity of working on on-site projects in the USA. Let's compare his experiences with similar projects run in other parts of the world.

Let's take the US first.

What can we learn from U.S. work culture?

Teams in the USA are very customer-oriented, and everyone knows their users. Ask a teammate about some functionality in the product: in Poland you will usually get a technical answer, something along the lines of "Oh, it is because we need to get additional data from...", and you have to ask follow-up questions to get to the actual functionality needed by customers; in many cases, you won't get that answer at all. In the USA you will most likely hear "Oh, it is because our customers need to know if...", which puts you in the right business context from the start. I find it really positive, since instead of focusing only on the technical parts, you focus more on the actual usability of the feature for the customers.

The teams that I worked with always had a flat structure. Sure, there was a PM and a PO, but they were sitting with their team at the same desks. It's another factor that makes communication super easy and effective.

We are actually culturally closer to the United States than you might think. We have Hollywood movies in our cinemas and on TV, we follow news involving the US, and Netflix is popular worldwide. It gives us something to talk about to break the ice.

Obviously, everyone there speaks English, although the level differs from person to person. It is pretty common, especially in IT, that teams are multicultural, and you will have teammates from India, China, or Mexico, to name a few. This might be challenging at first, but you will catch up pretty quickly. Being constantly exposed to a foreign language is the best way to learn.

Another aspect of American culture that especially contrasts with ours is that when you share something, you will always get positive feedback. You will hear "Oh, that's great!" or "Oh, that's so cool!", and it does not matter whether they really mean it. What matters is that it encourages people to share more, and it works.

Differences between American and German working culture

How does this compare to our neighbors from Germany? While the teams there also have a flat structure, there is a more rigorous division of responsibilities between teams: there are parts that belong to the architecture team, the testing team, the CI/CD team, and so on. This inevitably leads to more internal dependencies and generates more meetings. The Germans are very polite and open to discussing everything. Even if they do not agree with you, they will at least hear you out. Even when a decision comes from the top, they will still seek consensus within a team.

While collaborating with German companies, your PO/PM will be more technical than their US counterpart. Firstly, it helps to translate business goals into engineering tasks. Secondly, goals that do not bring immediate business value, like code refactoring or improvements to the development environment, are better understood and more easily accepted. All of that makes projects more technical, with teams having less influence on the business side of the product.

Collaborating with Italian teams

The US might be 7,000 km from Poland, but we are culturally closer to them than to, let’s say, Italy. Add to this that not many Italians speak English, and communication may be a bit challenging. The project I worked on was managed in a more freestyle manner: when seeking some information about the project, all you have to do is walk around the office and ask until you find someone who has that knowledge. Of course, sometimes you also have to find someone who can help you as a translator.

For the same reasons, communication with an end user is next to impossible, and trying to change the way things are done is even more challenging. Let’s say you finally encouraged your PM to agree to change things a bit. You go home happy that things are finally moving in the right direction, only to discover the next day that there was another meeting with some other PMs and everything is going back to how it was. Sometimes it takes time to work out a way of running projects that suits both sides; once you deal with the early struggles, the job becomes more effective.

Collaborating with people from other countries may sometimes be demanding, but when you invest some time in better understanding their work culture, way of communicating, and daily routines, it becomes easier. Despite all the differences and challenges at the beginning, on-site projects can be a valuable lesson and a helpful experience. There is one more aspect of working abroad that I really like. You can, of course, visit the same countries on vacation, but as a tourist you never have to figure out the best way to commute, where to go shopping, where to eat, or what to do on a Friday evening. Working on-site makes you figure those out, and that allows you to experience what it is like to live there without actually moving.

written by
Grape up Expert
Data platforms
Manufacturing

What is an exit strategy and why do you need one?

The cloud is everywhere these days; there is no doubt about that. Everyone has already made, or is about to make, an important decision: which cloud provider to choose?

For enterprise solutions, the first thought goes to the big players: Amazon, Microsoft, Google. They provide both private and public cloud solutions using their own sets of APIs, command-line tools, and, of course, pricing.

It is very easy to get in, but is it as easy to get out? All your services may already benefit from great tools and services offered by your provider. Awesome tools, but specific to that provider. Does the Amazon CLI work with Azure? Not really.

Another important part is the cost at the scale of infrastructure you manage. The software can be optimized to some point, and that may be a viable choice; resources can be hosted using a CDN to better manage bandwidth and storage costs. But what can be done if the provider increases its prices? How can this be mitigated?

The exit strategy

This is why you need an exit strategy: the plan to move all your resources safely, and possibly without any interruptions, from one warehouse to another, or even to migrate from one provider to another. The reasons for the change may vary: a pricing change, network problems, high latency, or even a change in the law. Any of these may push your company against the wall. It is fine when you are prepared, but how do you prepare?

It may seem paranoid or like catastrophic thinking, but in reality it is not so uncommon. The even worse case, a provider shutting down, has happened a lot lately. It is especially visible in the example of cloud storage startups. CloudMine, a $15 million startup, filed for bankruptcy last year. A few years ago, the same thing happened to Nirvanix and Megacloud. Nobody expected that, and a lot of companies faced the problem: how can we safely move all the data if everything can disappear in 2 weeks?

Does it mean AWS will go down tomorrow? Probably not, but would you bet that it will still be there in 10 years? A few years ago nobody had even heard about Alibaba Cloud, and now they have 19 datacenters worldwide. The world is moving so fast that nobody can say what tomorrow brings.

How to mitigate the problem?

So we have established what the problem is. Now let’s move to the solutions. In the following paragraphs, I will briefly paint a picture of what an exit strategy consists of and what may help you move towards it.

One solution is to use a platform like Cloud Foundry or Kubernetes, which enables you to run your services on any cloud. All the big names offer some kind of hosted Kubernetes solution: Amazon EKS, Microsoft AKS, and Google GKE. Moving workloads from one Kubernetes deployment to another, even a privately hosted one, is easy.

This may not be enough, though. Sometimes you have more than containers deployed. The crucial part then is to have your infrastructure as code. Consider whether the way you deploy your platform to IaaS is really transferable. Do you use provider-specific databases, storage, or services? Maybe you should replace some of them with more universal ones.

The next part is to make sure the services you write, deploy, and configure are truly cloud-native. Are they portable, in the sense that you can take the same source code or Docker image and run it on different infrastructure? Are all external services loosely coupled and bound to your application so that you can easily exchange them? Are all your microservice platforms independent?
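In practice, much of that portability comes down to reading every external binding from the environment instead of hardcoding provider-specific endpoints. A minimal shell sketch of the idea; the variable names and defaults are illustrative:

```shell
# Sketch: the same artifact runs on any provider because every
# external endpoint comes from the environment, with local defaults.
APP_DB_URL="${APP_DB_URL:-postgres://localhost:5432/app}"
APP_STORAGE_ENDPOINT="${APP_STORAGE_ENDPOINT:-http://localhost:9000}"

echo "db=$APP_DB_URL"
echo "storage=$APP_STORAGE_ENDPOINT"
```

Swapping providers then means changing environment variables, not code.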

Last but not least: back up everything. Really, EVERYTHING. Not just all your data, but also the configuration of services, infrastructure, and platforms. If you can restore everything from the ground up to a working platform in 24 hours, you are better prepared than most of your competitors.

So why do I need one?

Avoiding provider lock-in may not be easy, especially when your company has just started to use AWS or Azure. You may not feel very comfortable creating an exit strategy for different providers, or you may just not know where to start. There are solutions, like Grape Up’s Cloudboostr, that manage backups and multi-cloud interoperability out of the box. Using this kind of platform may save you a lot of pain.

An exit strategy gives you a lot of freedom. A new, small cloud provider comes to the market and offers you a very competitive price? You can move all your services to their infrastructure, especially considering that small cloud providers can be more flexible and more keen to adapt to their clients’ needs.

An exit plan gives safety, freedom, and security. Do not think of it as optional. The whole world is moving toward the multi-cloud and the hybrid cloud; according to Forbes, this is what will matter most in the cloud ecosystem in 2019. Do you want to stay behind? Or are you already prepared for the worst?

written by
Adam Kozłowski
Software development

Server Side Swift with Vapor – an end-to-end application with the Heroku cloud

In this tutorial, I want to show you the whole process of building back-end and front-end Swift applications and pushing the back end to the Heroku cloud.

First, please make sure that you have installed at least Xcode 10. Let’s start from the backend side:

Open a terminal and check your Swift version; make sure that it is 4.2.

swift --version

Time to install the Vapor CLI; for this we use Homebrew.

brew install vapor/tap/vapor

The Vapor CLI is now installed.

Let’s create a new project. The Vapor CLI has simple commands for lots of things; you will learn most of them later in this tutorial.

vapor new {your-project-name}

Go into the project directory and build it using the "vapor build" command:

vapor build

Let’s create an Xcode project using the vapor xcode command; it will be needed later.

And verify if your backend works fine locally:

vapor run

As we can see, the server starts on our machine. To be sure that it works, go to your web browser and check localhost:8080.

Yay! Here is your first application running with Vapor. Let's try to deploy it to the cloud, but first we need to install the Heroku CLI.

brew install heroku/brew/heroku

Now let's deploy the first application to the cloud. First, you need to create a free Heroku account; I will skip this process in the tutorial. Once the account is created, go back to the terminal and initialize Heroku in the project:

vapor heroku init

Before we push to Heroku, we have to change the Heroku stack from 18 to 16, as 18 is in the beta stage and doesn’t support Vapor yet.

Let’s deploy: git push heroku master.

The app is already deployed. You can log in via a web browser to your Heroku account and see if it is running and if it works!
Now we need to configure the project. Go to Xcode or another source editor and add the PostgreSQL framework: in Package.swift, we need to add the FluentPostgreSQL dependency.

Run vapor clean, vapor build, and then vapor xcode.

It's time to configure the database and clean up the project. First of all, go to the configure.swift file and remove FluentSQLite along with the functions related to it. Then we need to configure the PostgreSQL database in our project. A few things need to be done: at the beginning, import FluentPostgreSQL and register the provider into services.

try services.register(FluentPostgreSQLProvider())

Then we need to create a database config and register the database.

// Configure a database
var databases = DatabasesConfig()
let databaseConfig: PostgreSQLDatabaseConfig
if let url = Environment.get("DATABASE_URL") {
    guard let urlConfig = PostgreSQLDatabaseConfig(url: url) else {
        fatalError("Failed to create PostgresConfig")
    }
    print(urlConfig)
    databaseConfig = urlConfig
} else {
    let databaseName: String
    let databasePort: Int
    if (env == .testing) {
        databaseName = "vapor-test"
        if let testPort = Environment.get("DATABASE_PORT") {
            databasePort = Int(testPort) ?? 5433
        } else {
            databasePort = 5433
        }
    } else {
        databaseName = Environment.get("DATABASE_DB") ?? "vapor"
        databasePort = 5432
    }
    let hostname = Environment.get("DATABASE_HOSTNAME") ?? "localhost"
    let username = Environment.get("DATABASE_USER") ?? "robertbaran"
    databaseConfig = PostgreSQLDatabaseConfig(hostname: hostname, port: databasePort, username: username, database: databaseName, password: nil)
}
let database = PostgreSQLDatabase(config: databaseConfig)
databases.add(database: database, as: .psql)
services.register(databases)

Once the database is registered, we need to create our model and controller. The example project ships with a Todo model and controller; you can remove them, as we will create new ones. The idea for the app is a shopping list, so we need to think about what is required. The database is already registered, so we have to create a model containing our productName and id. Create a ShoppingList.swift file, which will be our PostgreSQL model.

import FluentPostgreSQL
import Vapor

final class ShoppingList: PostgreSQLModel {
    var id: Int?
    var productName: String

    init(id: Int? = nil, productName: String) {
        print("Shopping List model was successfully created")
        self.id = id
        self.productName = productName
    }
}

extension ShoppingList: Migration { }
extension ShoppingList: Content { }
extension ShoppingList: Parameter { }

Go back to the configuration file and register a database migration for that model.

var migrations = MigrationConfig()
migrations.add(model: ShoppingList.self, database: .psql)
services.register(migrations)

The model is done and the database is registered, so it’s time to create a controller, which we are going to use for getting, creating, and deleting data from our database. We have the following functions for that:

import Vapor

final class ShoppingListController {
    func index(_ req: Request) throws -> Future<[ShoppingList]> {
        return ShoppingList.query(on: req).all()
    }

    func create(_ req: Request) throws -> Future<ShoppingList> {
        return try req.content.decode(ShoppingList.self).flatMap { item in
            return item.create(on: req)
        }
    }

    func delete(_ req: Request) throws -> Future<HTTPStatus> {
        return try req.parameters.next(ShoppingList.self).flatMap { item in
            return item.delete(on: req)
        }.transform(to: .ok)
    }
}

We also need to configure the routes. Go to routes.swift, initialize ShoppingListController, and create routes for the functions we created in the controller:

let shoppingListController = ShoppingListController()
router.get("shoppingList", use: shoppingListController.index)
router.post("shoppingList", use: shoppingListController.create)
router.delete("shoppingList", ShoppingList.parameter, use: shoppingListController.delete)

Try to build the project. Once it builds successfully, you can test it locally, but first you would need to set up PostgreSQL locally. I recommend configuring Postgres on Heroku and pushing directly instead. First, add the Heroku add-on; I’m using the free hobby-dev database for this project:

heroku addons:create heroku-postgresql:hobby-dev

Then:

heroku config

Edit the Procfile: web: Run serve --env production --port $PORT --hostname 0.0.0.0

Commit the changes and push to Heroku.

gu98:VaporSwiftTutorial robertbaran$ cd HerokuVaporSwiftTutorial/
gu98:HerokuVaporSwiftTutorial robertbaran$ heroku addons:create heroku-postgresql:hobby-dev
Creating heroku-postgresql:hobby-dev on ⬢ heroku-vapor-swift-tutorial... free
Database has been created and is available
 ! This database is empty. If upgrading, you can transfer
 ! data from another database with pg:copy
Created postgresql-tetrahedral-24833 as DATABASE_URL
Use heroku addons:docs heroku-postgresql to view documentation
gu98:HerokuVaporSwiftTutorial robertbaran$ heroku config
=== heroku-vapor-swift-tutorial Config Vars
DATABASE_URL: postgres://hpeyjgreawzvrs:f42deaf2a16932b18b3a693fff7a577711e2c10c19efe731a0d0165e12d5a6c4@ec2-23-21-188-236.compute-1.amazonaws.com:5432/dab0tadl6uco6a

Verify the Heroku build; you can use Postman or any other API development tool for that. Make POST/GET/DELETE requests and verify that they work. If you followed this tutorial, everything should work fine.
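The three routes can also be exercised from the command line with curl. The host below is a placeholder, so replace it with your own Heroku app URL; the `|| true` guards just keep the script going if the app is not reachable:

```shell
# Illustrative requests against the deployed API; the host is a
# placeholder for your own Heroku app URL.
BASE="https://your-app-name.herokuapp.com"

# Create an item
curl -sS --max-time 5 -X POST "$BASE/shoppingList" \
  -H "Content-Type: application/json" \
  -d '{"productName": "milk"}' || true

# List all items
curl -sS --max-time 5 "$BASE/shoppingList" || true

# Delete the item with id 1
curl -sS --max-time 5 -X DELETE "$BASE/shoppingList/1" || true
```

These map one-to-one to the routes registered in routes.swift.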

Now let's begin work on the iOS app for creating a shopping list. Usually I create a workspace and a new project for iOS, but you can do it however you like. There are a few options: you can have 2 separate projects for frontend and backend, you can group everything in one project and use different targets, or you can have a workspace with 2 projects. In this tutorial, I will be working with a workspace and 2 separate projects.

Save your backend project as a workspace from the Xcode File menu, then close the current project and open the workspace. Create a Single View Application for the frontend, select a shared scheme for it, and drag and drop your project file into the workspace. Now close Xcode and open your workspace again. All projects should appear properly.

Make sure that all schemes are there and that your projects appear properly.

Let’s start working on the iOS app. This part will be quick, as we just need to add a button, a text field, and a table view with the list. I’m not going to use MVVM (though it’s recommended) and will use plain NSURLSession, as this tutorial is not about the iOS side. I just want to show you how quickly and easily the whole application can be created.

Go to Main.storyboard and create the UI for this app first. Add a tableView, a button, and a textField to your view controller and set up constraints.

Create outlets for the button and textField, and connect the delegate and dataSource for the tableView. Create an action for the button, and create an extension for the tableView delegate and data source. Create a cell view: in your storyboard, you can create a prototype cell and use it in the data source. You can also add some labels and a loading indicator to make the application look nice. This tutorial is not about Swift on iOS, so I just give you an example of API usage here. You can find the whole project here, with both frontend and backend and the workspace created. To run it, you first need to follow all the backend- and Heroku-specific changes, as the URL needs to point to your deployed Heroku app; or, if you want to run it locally, you need to configure a PostgreSQL database on your machine and change the database configuration in the configure.swift file.

Conclusion

Vapor is the best choice as long as you don’t need a non-SQL database like Cassandra, as there is no easy way to configure that and make it work. For me, this is no issue at all. This tutorial is simple, but it shows just how easy writing a backend in Swift is. That matters for iOS developers, and for every other developer too, as in my opinion this language will become one of the most popular languages within a few years. I’m not a backend developer, just a server-side Swift enthusiast, but I am thinking about extending my frontend role to become a full-stack developer.

written by
Grape up Expert
Software development

Painless view controller configuration in Swift

Back in the pre-storyboard era of iOS development, developers had to write hundreds of lines of boilerplate code that served to manage UIViewController object hierarchies. Back then, some were inventing generic approaches to configuring controllers and transitions between them, while others were satisfied with ad-hoc view controller creation and presented controllers directly from other controllers’ code. But things changed when Apple introduced storyboards in iOS 5. It was a huge step forward in UI design for iOS. Storyboards introduced the ability to visually define app screens and, most importantly, transitions between them (called segues) in a single file. Storyboard segues allow developers to discard all the boilerplate code related to transitions between view controllers.

Of course, every solution has its advantages and disadvantages. When it comes to storyboards, some may note issues such as hard-to-resolve merge conflicts, coupling of view controllers, poor reusability, etc. Some developers don’t use storyboards at all because of these disadvantages; for others, the advantages play a more important role. However, the real bottleneck of storyboards is the initialization of view controllers. In fact, there is no true initialization for the view controllers presented by storyboard segues.

Problems with the view controller configuration

Let’s start with some basics. In Objective-C/Swift, in order to give an object its initial state, the initializer (init()) is called. This call assigns values to the properties of the class, and it always happens at the point where the object is created. When subclassing any class, we may provide an initializer, and this is the only proper way. We may also provide such an initializer for a UIViewController subclass. However, when such a controller is created/presented using a storyboard, the creation takes place through a particular initializer: init(coder:). Overriding it in a subclass may give us the ability to initialize properties added by the subclass, but we don’t have the ability to pass additional arguments to the overridden method. Moreover, even if we had such an ability, it would make no sense, because for storyboard-driven view controllers there is no particular point in code that allows passing data to the initializer. That is, we cannot catch the moment of creation of such a controller. The creation of view controllers managed by storyboard segues is hidden from the programmer. It happens when a segue to the controller is triggered, either entirely handled by the system (when the triggering action is set up in the storyboard file) or using the performSegue() method.

Apple, however, provides a place where we can pass some data to an already created view controller after the segue is triggered: the prepare(for:sender:) method. From its first parameter (of UIStoryboardSegue type), we can get the segue’s destination view controller. Because the controller has already been created (initialization is performed when the segue is triggered), the only option for passing the required data is to configure it. This means that after the initialization, but before prepare(for:sender:) is called, the properties of the controller that hold such data either have no initial values or have fake ones. While the second option is meaningless in most cases, the first one is widely used. Such absence of data means that the corresponding controller’s properties should be of optional types. Let’s take a look at the following sample:

override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
    if segue.identifier == "ToSomeViewControllerSegueID",
       let controller = segue.destination as? SomeViewController {
        controller.someValue = 12345
    }
}

This is how view controller configuration is implemented in most cases when dealing with segues:

  •  check segue id;
  •  get the destination view controller from the segue object;
  •  try to cast it to the type of the expected view controller subclass.

In case all conditions are satisfied, we can set values on the properties of the controller that need to be configured. The problem with this approach is that it has too much service code related to verification and data extraction. It may not be visible in simple cases like the one shown above. However, taking into account the fact that each view controller in an application often has transitions to several other view controllers, such service code becomes real boilerplate we’d like to avoid. Take a look at the following example that generalizes the problem with the prepare(for:sender:) implementation.

override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
    if segue.identifier == "ToViewControllerA",
       let controller = segue.destination as? ViewControllerA {
        // configure View Controller A
    } else if segue.identifier == "ToViewControllerB",
       let controller = segue.destination as? ViewControllerB {
        // configure View Controller B
    } else if segue.identifier == "ToViewControllerC",
       let controller = segue.destination as? ViewControllerC {
        // configure View Controller C
    } else if ... {
        ...
    } else if segue.identifier == "ToViewControllerZ",
       let controller = segue.destination as? ViewControllerZ {
        // configure View Controller Z
    }
}

All those  if… else if… blocks make the code hard to read. Moreover, each block serves a different view controller that has to be configured. That is, the more view controllers this one is going to present, the more  if… else if… blocks will be added. This, in turn, reveals another problem with such configuration: a single method in a particular controller does all the configuration for every controller it is going to present.

Solution

Let’s try to find an approach to view controller configuration that eliminates the outlined problems. We’re limited to using  prepare(for:sender:), since it’s the only point where the configuration can be done. So we cannot do anything about the type of the destination view controller or about the check of the segue identifier. Instead, we’d like to generalize the process of configuration in a way that allows us to have a single type check and a single identifier verification. That is, check against some generalized type of destination view controller and a variable segue identifier rather than enumerating all the possible concrete types and identifiers. For this, we need to somehow pass the information about the type and the segue identifier to the  prepare(for:sender:) method. We would like to have something like the following:

override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
    if segue.identifier == /* some segue identifier */,
       let viewerController = segue.destination as? /* some generalized type */ {
        // configure viewerController
    }
}

In order to have a single configuration code path for all the controllers, we need two things: a unified interface to configure the controller, and a way to get the configuration data for the particular destination controller and segue identifier. Let’s define each part of the solution.

1. Unified interface for view configuration

As defined previously, configuration means setting values on one or more properties of the destination view controller. So it’s natural to associate the configuration interface with the destination controller rather than with the one that triggers the segue. Obviously, each destination view controller has a different number of properties of different types to configure.

In order to provide a unified configuration interface, we may implement a method for configuring each controller, passing it the values that will be assigned to the corresponding controller properties. To unify this method, it should have the same signature for every configurable controller. To achieve this, we wrap the set of configuring values into a single object. The method then always has one argument, no matter how many properties should be set. The type of the argument is the type of the wrapping object and is different for each view controller. This means that a view controller should implement a method for configuring and somehow define the type of the method’s argument. This is a perfect task for protocols with associated types. Let’s define the following protocol:

protocol Configurable {
    associatedtype ConfigurationType
    func configure(with configuration: ConfigurationType)
}

Each view controller that is going to be configured (is configurable) should conform to this protocol by implementing the  configure(with:) method and defining a concrete type for  ConfigurationType . In the easiest case, where only one property needs to be configured, the  ConfigurationType is the type of that property. Otherwise, the  ConfigurationType may be defined as a structure or tuple to represent several values. Consider the following examples:

class SomeViewController: UIViewController, Configurable {
    var someValue: Int?
    var someObject: MyModelType?
    …
    func configure(with configuration: (value: Int, object: MyModelType)) {
        someValue = configuration.value
        someObject = configuration.object
    }
}

class OtherViewController: UIViewController, Configurable {
    var underlyingObject: MyObjectType?
    …
    func configure(with object: MyObjectType) {
        underlyingObject = object
    }
}
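The associated-type inference described above can be exercised outside of UIKit. The following self-contained sketch (the  Sample type and its property types are illustrative, not from the article) shows that  ConfigurationType is inferred from the signature of  configure(with:) :

```swift
protocol Configurable {
    associatedtype ConfigurationType
    func configure(with configuration: ConfigurationType)
}

// ConfigurationType is inferred as the tuple (value: Int, object: String)
// from the parameter type of configure(with:) below.
final class Sample: Configurable {
    var someValue: Int?
    var someObject: String?

    func configure(with configuration: (value: Int, object: String)) {
        someValue = configuration.value
        someObject = configuration.object
    }
}

let sample = Sample()
sample.configure(with: (value: 42, object: "model"))
print(sample.someValue ?? 0)  // prints 42
```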

2. Defining the configuration data for view controller

Now, let’s go back to the controller that triggers a segue. We’re going to use the configuration protocol we’ve defined. For this, we need data to pass to the  configure(with:) method. This should be something as follows:

override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
    if let segueIdentifier = segue.identifier {
        // 1. Get the configuration object
        //    for segue.destination and segueIdentifier
        // 2. Pass the configuration object to the destination controller
    }
}

Let's focus on how we should obtain the configuration object. Each segue identifier is unique within the controller that triggers those segues. For each segue, there is a single destination controller that has its own type of configuration. This means that a segue ID unambiguously defines the configuration type that should be used for configuring the destination view controller.

On the other hand, just returning a configuration of a concrete type per segue ID is not enough. If we did so, we would need to somehow pass it to a destination controller whose static type is  UIViewController , which has nothing to do with the configuration. At the same time, we cannot use the  Configurable protocol as the type of an object directly, because it has an associated type constraint. That is, we cannot cast the destination view controller to the  Configurable type as follows:
 (segue.destination as? Configurable)?.configure(with: data) . Instead, we need to use some proxy generic type that is constrained to being  Configurable .

Also, creating all the configuration objects for the controllers in a single method makes no sense, since it brings back the same issue as with  prepare(for:sender:) : a concentration of code intended to configure different objects in a single method. Instead, the better solution is to group the code for creating a particular configuration, together with the type of the controller being configured, into a separate object. Consider the following example:

class Configurator<ConfigurableType: Configurable> {
    let configurationProvider: () -> ConfigurableType.ConfigurationType

    init(configuringCode: @escaping () -> ConfigurableType.ConfigurationType) {
        self.configurationProvider = configuringCode
    }

    func performConfiguration(of object: ConfigurableType) {
        let configuration = configurationProvider()
        object.configure(with: configuration)
    }
}

In the code above, a single  Configurator<T> instance is responsible for configuring a controller of a particular type. The code that creates the configuration is injected into the configurator during creation, in the  init() method.

In accordance with the reasoning above, we should associate a segue ID with a particular configuration and type. With the  Configurator<T> approach, the easiest way to do so is to create a mapping object where the key is a segue ID and the value is the corresponding  Configurator<T> instance. We may also create those  Configurator<T> objects in place, within the map definition. This makes the code clearer and more readable. The following example demonstrates such a map:

var segueIDToConfigurator: [String: Any] {
    return [
        "ToSomeViewControllerSegueID": Configurator<SomeViewController> {
            return (value: 123, object: MyModelType())
        },
        "ToOtherViewControllerSegueID": Configurator<OtherViewController> {
            return MyObjectType()
        }
    ]
}

Let’s now try to use the configuration from the dictionary above in the  prepare(for:sender:) method. Take a look at the following example:

override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
    if let segueIdentifier = segue.identifier,
       let configuring = segueIDToConfigurator[segueIdentifier] as? Configurator {
        configuring.performConfiguration(of: segue.destination)
    }
}

The problem is that the value type of the  segueIDToConfigurator dictionary is  Any . We cannot call any method on it directly. Instead, we need to cast it to a type that contains the  performConfiguration(of:) method. However, the only type in our implementation that contains  performConfiguration(of:) is the generic type  Configurator<T> , and to use it we would have to supply a concrete destination view controller type in place of the generic placeholder. At this point, inside  prepare(for:sender:) , we don’t have that type information. Let’s try to resolve the problem. We need  Configurator<T> only to call the  performConfiguration(of:) method. Instead of having the whole interface of the  Configurator<T> type inside  prepare(for:sender:) , we may use some intermediate interface that does not depend on a generic type and still allows us to call  performConfiguration(of:) .

With such an interface, which we will call  Configuring below, the mapping can be typed as:

var segueIDToConfigurator: [String: Configuring] {
    return [
        "ToSomeViewControllerSegueID": Configurator<SomeViewController> {
            return (value: 123, object: MyModelType())
        },
        "ToOtherViewControllerSegueID": Configurator<OtherViewController> {
            return MyObjectType()
        }
    ]
}

For this, let’s create a protocol  Configuring and modify the  Configurator<T> type to conform to it. The example below demonstrates the refined approach.

protocol Configuring {
    func performConfiguration<SomeType>(of object: SomeType) throws
}

class Configurator<ConfigurableType: Configurable>: Configuring {
    let configurationProvider: () -> ConfigurableType.ConfigurationType

    init(configuringCode: @escaping () -> ConfigurableType.ConfigurationType) {
        self.configurationProvider = configuringCode
    }

    func performConfiguration<SomeType>(of object: SomeType) throws {
        if let configurableObject = object as? ConfigurableType {
            let configuration = configurationProvider()
            configurableObject.configure(with: configuration)
        } else {
            throw ConfigurationError()
        }
    }
}

Now,  performConfiguration(of:) is a generic method. This allows us to call it without knowing the exact type of the object being configured. The method, however, became throwing. This is because the type of its argument is widened so that an arbitrary type can be passed, but the method can still handle only objects that conform to the  Configurable protocol. If the passed object is not  Configurable , there is nothing we can do with it, so we throw an error.
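The  ConfigurationError type thrown above is never declared in the snippets; a minimal sketch of it (the name comes from the code above, the body is an assumption) could be:

```swift
// Hypothetical minimal definition of the error thrown when the
// configured object does not conform to the expected Configurable type.
struct ConfigurationError: Error, CustomStringConvertible {
    var description: String {
        return "The configured object does not conform to the expected Configurable type"
    }
}
```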

We may now use the newly defined  Configuring protocol as the value type of the segue-to-configurator dictionary shown earlier. This allows us to use the  Configuring objects inside the  prepare(for:sender:) method as shown below:

override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
    if let segueIdentifier = segue.identifier,
       let configuring = segueIDToConfigurator[segueIdentifier] {
        do {
            try configuring.performConfiguration(of: segue.destination)
        } catch let configurationError {
            fatalError("Cannot configure \(segue.destination). " +
                       "Error: \(configurationError)")
        }
    }
}

Refining the solution

The  prepare(for:sender:) implementation above is the same for any controller that uses the described approach. There are several ways to avoid such code duplication, but keep in mind that each has its downsides.
The first and most obvious way is to use a base view controller across the project that implements the  prepare(for:sender:) method and the  segueIDToConfigurator property for holding configurations:

class BaseViewController: UIViewController {

    var segueIDToConfigurator: [String: Configuring] {
        return [String: Configuring]()
    }

    override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
        if let segueIdentifier = segue.identifier,
           let configuring = segueIDToConfigurator[segueIdentifier] {
            do {
                try configuring.performConfiguration(of: segue.destination)
            } catch let configurationError {
                // Throw an error, or just write to the log
                // if you want to silently ignore it
                fatalError("Cannot configure \(segue.destination). " +
                           "Error: \(configurationError)")
            }
        }
    }
}

class MyViewController: BaseViewController {

    // Define the needed configurators
    override var segueIDToConfigurator: [String: Configuring] {
        ...
    }
}

The advantage of the first way is that any controller subclassing  BaseViewController needs to define only the data required for the configuration, that is, override the  segueIDToConfigurator property. However, it forces all view controllers to subclass  BaseViewController . This makes it impossible to use the system  UIViewController subclasses like  UITableViewController , etc.

The second way is to use a special protocol that defines the interface of the controller that can configure other controllers. Consider the following example:

protocol ViewControllerConfiguring {
    var segueIDToConfigurator: [String: Configuring] { get }
}

extension ViewControllerConfiguring {

    func configure(segue: UIStoryboardSegue) {
        if let segueIdentifier = segue.identifier,
           let configuring = segueIDToConfigurator[segueIdentifier] {
            do {
                try configuring.performConfiguration(of: segue.destination)
            } catch let configurationError {
                fatalError("Cannot configure \(segue.destination). " +
                           "Error: \(configurationError)")
            }
        }
    }
}



class MyViewController: UIViewController, ViewControllerConfiguring {

    // Define the needed configurators
    var segueIDToConfigurator = ...

    // Each view controller still has to implement this method
    override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
        configure(segue: segue)
    }
}

This way is more flexible than the first one. The protocol can be implemented by any object that is going to configure the segue’s destination controller, meaning that not only  UIViewController subclasses can use it. Moreover, it doesn’t force us to use  BaseViewController as the superclass. On the other hand, each view controller still needs to override  prepare(for:sender:) and call the  configure(segue:) method in its implementation.
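The whole pattern can be exercised outside of UIKit. The following self-contained sketch (the  Greeter type, the segue ID string, and the greeting value are illustrative stand-ins for a destination view controller and its data) wires the  Configurable ,  Configuring , and  Configurator pieces together end to end:

```swift
protocol Configurable {
    associatedtype ConfigurationType
    func configure(with configuration: ConfigurationType)
}

protocol Configuring {
    func performConfiguration<SomeType>(of object: SomeType) throws
}

struct ConfigurationError: Error {}

class Configurator<ConfigurableType: Configurable>: Configuring {
    let configurationProvider: () -> ConfigurableType.ConfigurationType

    init(configuringCode: @escaping () -> ConfigurableType.ConfigurationType) {
        self.configurationProvider = configuringCode
    }

    func performConfiguration<SomeType>(of object: SomeType) throws {
        guard let configurableObject = object as? ConfigurableType else {
            throw ConfigurationError()
        }
        configurableObject.configure(with: configurationProvider())
    }
}

// A plain class standing in for a destination view controller.
final class Greeter: Configurable {
    var greeting: String?
    func configure(with configuration: String) {
        greeting = configuration
    }
}

let segueIDToConfigurator: [String: Configuring] = [
    "ToGreeterSegueID": Configurator<Greeter> { return "Hello" }
]

// Mimics prepare(for:sender:): look up the configurator by segue ID
// and let it configure the (type-erased) destination object.
let destination = Greeter()
try segueIDToConfigurator["ToGreeterSegueID"]?.performConfiguration(of: destination)
print(destination.greeting ?? "")  // prints "Hello"
```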

Summary

In this article, I described an approach to configuring destination view controllers with clean and straightforward code when using storyboard segues. The approach is possible thanks to useful Swift concepts such as generics and protocols with associated types. The code is also safe, as it uses static typing wherever possible and handles errors; the dynamic casts are concentrated in a single place, and the possible errors are handled only there. This approach allows us to avoid unnecessary boilerplate code in  prepare(for:sender:) methods. At the same time, it makes configuring particular view controllers clearer and more robust thanks to the dedicated  Configurable protocol.

written by
Andrii Biehunov
Our experts

Grape Up consultants’ 101 – what does on-site work look like?

When I first came to Grape Up, almost two years ago, I could hardly say I did not know what to expect. The kind of work I was to be doing was known to me, as was the location. Still, I felt anxious about going over the pond and spending almost three months in a peaceful Chicago suburb. Another challenge – working with people with whom I seemingly had little in common, apart from the knowledge of the tech and language. I was kind of obsessed with those tiny details that set the American and European cultures apart – so similar, yet so different. But let me get back to where the story started.

In the first half of 2017, I happened to be looking for a different job, just when Grape Up tried to reach me. An inconspicuous e-mail, detailing a work opportunity in the United States, for a period of time, possibly in an Illinois or Texas location, appeared somehow in my inbox. I decided to bite the bullet, well knowing myself as a not-so-adventurous person – apparently, the more I thought about the implications of going abroad for a fairly long time, the more I hesitated. So, the decision was a quick one – still, I wish I knew things back then that I now do.

Because of that, to clarify matters for anybody pondering working with Grape Up as a contract programmer, here is a little guide, which I modeled after a typical FAQ. This should be a good start – if any questions arise, do not hesitate to contact us!

Can you tell me more about the job you got?

I have been working for a big insurance company, perhaps the second largest of its kind in the US by revenue. By all means, this could be likened to working in a Polish branch of a corporation, but the work culture is quite different from what you could get in the country of my residence.

Was your job interview anyhow different from the usual?

I would say it was as good as any interview I have been to, save for one thing. Since I would be working in pairs (a very common practice in teams adhering to extreme programming), I had a short call with a technical person from the US company. I was presented with a problem, and we then tried to alternately write tests and implementations together – the purpose was to get a quick outlook on how I would find myself in the new environment, and also for me to get a gist of what pair coding looks like. This might seem unusual at the beginning – try not to let it intimidate you!

Was getting to the USA hard?

I found the process of obtaining the necessary visa daunting. First, the ancient form on the US Embassy’s website (watch out for mistakes), then waiting for the appointment and finally getting into the queue at the Consulate. I cannot say that I did not enjoy getting that sticker in my passport – it made me proud! Apart from that, the flight itself and resulting jet lag can be physically challenging for some (myself included). Still, once that initial fatigue subsided, America was all mine to enjoy.

What did your usual day look like?

For the morning routine, we would usually drive or walk to work (had we been fortunate enough to live downtown), attend a group standup at 8AM, have a team stand-up shortly afterward and then have a complimentary breakfast, provided by our company. After working for a bit, we would head for lunch at 12 (lots of options, regardless of the location). Informal breaks also happen, more or less frequently – ping-pong and Mario Kart are among popular leisure activities in the US workplace. Workday would end at 5PM with some exceptions, such as “summer hours” on Fridays, in which case we could go home a little earlier. Still, all of this varies from company to company, and even from location to location.

What about the projects?

I happened to work on-and-off with various product teams. The projects spanned internal enterprises, such as data buses, as well as front-facing customer applications, like video chatrooms and remote consulting services. Worth noting is the fact that I have always felt like an important part of a team – little difference was being made between people working as contractors and full-time employees.

How did you find the people there?

Whether you are headed for America or elsewhere, you may be in for a culture shock. In America, people may seem unusually nice; in other places, to the contrary, somewhat uptight. Either way, unhelpful and rude people are a surprisingly rare occurrence. Some of my coworkers were externals, like me, and we had little problem finding common ground – regardless, cultural discrepancies are a fun topic to chit-chat about.

What’s there to do in the time after?

Speaking of Chicago, depending on your closeness to the city center, there’s a lot to do, or even more. Museums abound, as do clubs. Try eating something different every day! In the suburbs, some local Asian restaurants are well worth paying a visit. Apart from that, you may want to save some of your planned leave for visiting other cities or sights after your assignment is completed.

Is food in the US really as unhealthy as it gets?

Depends on what you are having, of course. There are plenty of decently healthy options to choose from but finding them requires a little effort. It’s easy to end up eating junk food every day, because of the convenience, so watch your carbs while you are there.

Was there anything you felt uncomfortable with?

The 9-hour working day with an hourly lunch break takes some time to get used to. American cities are vast, and half-hour drives to the nearest Wal-Mart are not uncommon while living in the suburbs. Downtown, on the other hand, while offering a lot of services within walking distance, is very loud and lively – I certainly did not expect to be woken up by car sirens at 2:00 in the morning on the weekends!

To sum it up, how was your experience?

Overall, quite good – I really appreciate the opportunities that on-site work provided me with, like working with people from different backgrounds and getting a taste of what it’s like to live in a different country.

Would you go back?

Absolutely – as a matter of fact, I already did and came back. It has only been getting better with every new place I traveled to. I sure hope that impression lasts long enough.

written by
Grape up Expert
Software development

Outsourcing of authorization and authentication

Nowadays, we can observe many repetitive patterns in existing implementations, from microservices to MVC layers or REST APIs. Authorization and authentication are fundamental parts of application security. The most popular pattern is the login and password, frequently with the addition of two-factor authentication via SMS code, e-mail or a software token.

But the problem is more complex than it seems at first. There is a chance that every year the current “best practices” become obsolete as they fail to provide a sufficient level of security. Sometimes a critical bug, like Heartbleed, creates an earthquake in the security world and forces everyone to immediately review and update their services.

To top it all off, security is unfortunately not something that’s painfully obvious to non-technical clients – so from the business perspective it does not directly “make money”. As a result, some companies try to reduce the resources spent on it. Management is more likely to spend a month on improving the appearance of a landing page than a week on improving password encryption.

The complexity and the lack of a business reason to invest in the problem make it a great candidate for outsourcing. Those reasons made our customers do so.

When to delegate?

In this section, I would like to list the cases in which delegation can be extremely profitable.

Providers have everything we need. Almost.

Major identity providers have addressed a wide range of their clients’ problems. Configuring their service most likely takes less time than writing everything from scratch on your own. This makes even more sense when we consider startup-like projects, where time puts the “sell fast or die” pressure on us and having a working prototype as soon as possible is crucial.

We don’t know the requirements yet.

Another problem that frequently appears in the initial phase is postponing the creation of the security model to the remote future. Writing your own temporary solution from scratch would take days, and if key decisions are not made yet, it would end up being rewritten again and again. Integration with an external provider in the default configuration takes hours, and usually there is a free trial period (and it doesn’t always have to be expensive – Auth0 starts at $13/month for databases with fewer than 1k users). Migration from the external provider to your own, in an optimistic case, may be as simple as changing a couple of entries in the configuration file.

Relatively high costs of supporting authorization.

Perfect software should not need any maintenance. Unfortunately, neither reality nor any software we write is that perfect. If our services contain their own authorization mechanism, it may require changes even if it was coded properly a long time ago. Improving security (for instance, changing password encryption from MD5 to Blowfish) requires a couple of days of a developer’s work. It also engages other departments: the UX team, because users have to be notified in a non-invasive way, and the test team, because someone has to check if the new path works and whether any features have been affected by it.

If your software is rarely changed and identity management is the most common reason to change it, then rewriting the authorization mechanism to use an external provider makes a lot of sense.

What should you not be afraid of?

There are cases when such delegation provides significant value, but it also comes with an additional cost.

Social networks integration

Most identity providers offer Facebook integration. It is slightly different from other social networks, because it is mandatory to fill in an additional form required by Facebook’s bureaucracy. The same happens when you integrate Facebook SSO yourself. Integration with other social networks is easier in both approaches.

Data import

It is possible to import existing user data to an external provider. Unfortunately, in most cases it won’t be transparent from the user’s perspective. For example, Auth0 doesn’t import passwords even if, for some reason, we have them in plain text (of course, nobody does). It forces users to set their password at the next login via the password reset mechanism.

Vendor lock-in

I have mentioned before that, in an optimistic case, the migration from an external provider to your own solution may be as simple as changing configuration files. The problem is that we don’t live in a perfect world. A lot of providers share libraries that speed up integration but make migration to a different vendor much harder. If your software runs on a local machine or a mobile device, chances are that a certain group of users will resist downloading the latest update from the store. You have to decide whether you prefer to change the authorization and lose them, or provide both solutions simultaneously for some period of time.

There is a story of a team that built their own authorization service once the provider wasn’t able to handle as many requests as they needed. It shows that, despite a long and successful relationship with the provider, they were able to break it off and move to a totally new solution.

When not to delegate?

There are also cases when the integration is not possible or creates more problems than it actually solves.

Law and other regulations

There are governmental or organizational regulations that control how data should flow and where it can be stored. Auth0 allows you to store data somewhere in the EU, but you cannot control whether it is stored in Spain, Germany or Poland. With GDPR in mind, storing data in the EU may have consequences when you create a project for the United States. Also, government projects generally require all (or at least confidential) data to stay inside the country.

Costs

Most providers work in the SaaS model. This means that the monthly fee depends on the number of users and the features you use. In some rare cases, it may not scale very effectively, and the invoice from the provider may get really high. In this case, you should consider preparing your own local solution to the problem. Storing user data in your own datacenters may be cheaper.

Communication with other servers

Delegation requires communication with the provider. By default, providers require redirecting users to an external page or using an iframe element on your site. This makes the approach impossible to use in an application that should, even temporarily, work without an internet connection. It also sometimes forces teams to create mock services for end-to-end automated tests.

Integration with custom solutions

  • Integrating with software that works in an internal network (for instance, LDAP or Active Directory) is harder; ask your operations team if it is possible.
  • Authorization with client-side software is also tricky – the most popular solution for digital signatures in Poland requires the installation of a Java-applet-based plugin. Integrating with it, even without involving an identity provider, poses many obstacles.

To delegate or not to delegate?

If you aim for a PoC or MVP type of project, just integrate a trial version of an existing authorization and authentication service provider – as long as it’s not the core feature of your product. Start fast, implement the most important features, and then, when you feel that the solution is too limited, switch to your own implementation or maybe a different provider.

written by
Grape up Expert
Our experts

What is it like to work on an on-site project?

Have you ever considered working abroad on an on-site project? Or maybe you have already tried it? Whether it’s one of your goals for the upcoming year or you are just curious what the pros and cons are, it’s always worth exploring other people’s perspectives and the scenarios of their episodes abroad. So, based on a recent episode of mine, let me share a few thoughts on the topic myself.

Different kinds of professional life abroad

There are a variety of ways to get a feeling of what it's like to work overseas. The first one is that you can get hired and work in a foreign country for a few years. Another option is to be employed by a company in your country, but work on a project abroad for, let’s say, half a year. Last but not least, you can travel to different places for just a few days on a weekly basis. My own experience below is based on a short-term contract with one of Grape Up’s US-based customers but can apply to any kind of professional experience in a foreign country in general.

Professional benefits of working on-site

While being involved in any type of project abroad, you get to experience the global marketplace and have a chance to learn new ways of doing business. Interacting with people born and raised in a different country lets you understand their work culture, ethics and point of view. Failing to understand cultural norms is often a source of conflicts within geographically dispersed teams. For example, it takes time to become aware of the different ways an email or conversation could be interpreted, both as a sender and a receiver. And although being on-site is not the only way to gain that experience, it is usually the most efficient and authentic one. Since you are able to meet face-to-face, you get the direct perspective of your international customers and peers.

Working on-site comes with all the benefits of a collaborative workspace. It goes without saying that it's easier to explain something face-to-face rather than on the phone or, let alone, via email. Especially when dealing with complex or urgent topics. There are no internet connection issues, there is just one time zone and there is a space for body language, which is quite an underestimated type of communication these days. In fact, social and teamwork-style settings are a perfect way to boost all kinds of interpersonal communication skills. Also, there is probably no better way to learn a foreign language than to be around people who use it every day, especially when those people happen to be native speakers.

Potential challenges of working on-site

Even though most of us prefer to work in a team, there are also those who like to work alone. Actually, there is a very good chance that even the most active and social teammates will need a moment alone every once in a while. Just a moment to zone out, avoid the potential distractions, focus and get their creative juices flowing. And that’s not always possible when you work on-site. If you travel to another office for a relatively short period of time to meet your customer or coworker, you want to make the most out of your visit. So usually you end up spending most of the time actively collaborating with others and there will be little time for individual work. Not only is it more intense but it also takes a lot of discipline and flexibility, which some may find quite challenging, especially at the beginning.

Another thing is that travelling to one office means missing out on events and meetings happening in the other. You solve the problem of your absence in one place, but at the same time you create a similar problem somewhere else. So it's always a matter of choosing which place is more beneficial to you and your company at a given moment.

Personal pros and cons of living abroad

What does living abroad really come with? Well, this part can vary in as many ways as there are people who have ever lived in a foreign country for a while. Some point out the ability to explore new places, cultures and cuisines. Others are happy to learn or practice their language skills. There are also those who do exactly the same things as they would in their hometowns, except that they leverage the presence of local people and resources. Whether you choose to immerse yourself in a country or not, living abroad always gives you new perspectives and new ways of looking at things. And that's gold.

Of course, there are also the downsides of being away from your home country. Depending on what your current situation is, it very often means that you have to leave your home, family and friends for some period of time. You need to learn how to live without some people and things you got used to having around, or you can find a way to take them with you. For some people it can also be overwhelming to deal with all the cultural differences, local habits and the number of new things in general. It all depends on how open and flexible you are.

Summary

Getting a taste of living and working abroad has become very popular these days - and not without a reason. For one, it is easier than ever before, and it also gives you countless benefits and the kinds of experiences you wouldn't gain in any other way. So, if it has ever crossed your mind and you ever come across such an opportunity, don't hesitate to take it.

written by
Grape up Expert
Our experts

STAMP - Satisfying Things About Managing Projects

Project Managers really are in the driver’s seat when it comes to creating new products. Think of it as a bumpy road - there are deadlines, a solid dose of uncertainty, and plenty of planning along the way, but on the flip side, there are also those truly satisfying moments associated with the position, which I tend to label: Satisfying Things About Managing Projects. And here they are:

Making a difference

As the stakes get higher, the vision of taking the lead and being in charge can be quite paralyzing for some. Others, on the other hand, will find it really satisfying. This is why the role of an effective Project Manager is so important. A person in such a position will lead with the understanding that if they drive an idea or IT project towards success, a real difference can be made. No one will ever know what kind of world we would live in if all those ideas and projects had been executed perfectly. What we can imagine, though, is the impact and value that can be added not only to the business, but to the world at large.

Building amazing things

This one is self-explanatory. Regardless of market competition and the constraints imposed by stakeholders, it feels good to know that you are creating something that will meet or exceed expectations. Whether your task is to solve a complex problem, improve an already existing solution or create something new yet desired, it is crucial to believe that you can do great work - and if you are able to pass that feeling on to your team, it will motivate them to work hard and accomplish amazing things together.

Always being able to perform better

Think about the spreadsheets tracking the status of a project. How do they make you feel? Motivated and excited? Most likely not. At first thought, we do not see spreadsheets as the beginning of achieving a goal. According to the SMART principle, goals should be measurable. During my time as a software developer, I found it difficult to set measurable goals. I would always think to myself: how do I measure whether the architecture I created is good? How can I quantify whether my code is easy to maintain and extend, so I know how much better it gets over time?

As a Project Manager, this is no longer the case - you are surrounded by precise numbers, relentless dates, and unswerving percentages. You know the costs, the ROI (return on investment) and plenty of other metrics. You can compare this to sports, where you know your scores and all the statistics. The environment is perfect for setting and reaching measurable goals, with endless opportunities to adapt and perform better.

Quick throwback to the spreadsheets. Can you see all the available opportunities now?

Watching people grow

Being a Project Manager, just like any other manager, gives you the opportunity to work with many people. Some may see this as one of the most difficult aspects of the Project Manager role. But it is important to keep in mind that in this role you have the chance to create open communication and bring your team together in order to be more efficient. A manager also has the opportunity to help individuals grow and develop from the very beginning of a project till its very end. Knowing you had a hand in someone's development is very gratifying.

Overcoming challenges

There are plenty of them. During the execution of every project, regardless of internal or external factors, a lot of unexpected things can happen.
You need to adapt and react quickly to be efficient at change and risk management. Not only that, but today’s solutions are the beginning of tomorrow’s problems. This is perhaps a pessimistic way to see things, but there is a grain of truth in it. There will always be challenges to overcome. The beauty is that after achieving any milestone, you can take a look back from the top of that mountain. From that peak, you will be able to see the steep and bumpy road, all the past iterations, and realize that these were the things that you and your team overcame to get to the top. This is truly a great feeling of pride, reward, satisfaction and accomplishment.

written by
Grape up Expert
Software development

Server Side Swift – quick start

Building Swift web applications allows you to reuse the skills you already have. However, you can also start from scratch if you are not an iPhone/Mac developer. This article shows you the most popular Server-Side Swift frameworks and lets you dive into the world of faster and safer APIs using Swift.

Swift was released in 2014. Ever since then, it has become a popular solution for iOS and macOS development. Since Swift is open source, people started playing with it, and there have been many attempts to find other uses for it. It turned out to be a great fit for the server side, being faster and safer than many other languages. The next step in the process was to apply Swift to server-side development. Here, it is important to mention that Swift can be compiled on macOS as well as Linux and even Windows. Moreover, some frameworks allow developers to create cloud applications.

Just after Server-Side Swift was released, I felt the urge to take a quick look at it. At that point, it was really hard to tell whether it was ready for commercial use. Now, as I look at it again, I would say it is stable enough. So let’s try to compare the most popular frameworks. Personally, I would recommend trying each framework before you pick the one that works best for you – there are slight differences between them.

Currently, the most popular Server-Side Swift frameworks are Vapor, Perfect and Kitura. There are also a few others, such as Zewo and NozelIO; they are not as popular, but that doesn’t take away from their value. As a matter of fact, I looked into each framework, and my verdict is: Perfect and Vapor.
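To give a feel for what these frameworks do, here is a toy, framework-agnostic sketch of the routing idea they all share. This is plain Swift, not the actual API of Vapor, Perfect or Kitura - each of those has its own router types and signatures:

```swift
// Toy illustration of the routing idea shared by Vapor, Perfect
// and Kitura: map an HTTP method and path to a handler closure.
// Plain Swift only - this is NOT any framework's real API.
struct Request {
    let method: String
    let path: String
}

struct Router {
    private var handlers: [String: (Request) -> String] = [:]

    // Register a handler for GET requests on a given path.
    mutating func get(_ path: String, handler: @escaping (Request) -> String) {
        handlers["GET \(path)"] = handler
    }

    // Dispatch a request to the matching handler, or return a 404 body.
    func handle(_ request: Request) -> String {
        handlers["\(request.method) \(request.path)"]?(request) ?? "404 Not Found"
    }
}

var router = Router()
router.get("/hello") { _ in "Hello, Swift!" }

let response = router.handle(Request(method: "GET", path: "/hello"))
print(response)  // Hello, Swift!
```

Each real framework wraps this idea with an HTTP server, middleware, and typed request/response objects, but registering closures against routes is the common core.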

Why Server-Side Swift?

A few years ago, when Server-Side Swift was still new to me, I wasn’t convinced that it would be the optimal language for backend development. Why? Because it worked just like every other backend solution and there was nothing particularly special about it. Still, for me as an iOS developer, it allowed me to work full-stack. I know Java and JS and, in my opinion, the newest technologies are good to get hold of mainly because they let us become better developers. I haven’t found too many tutorials or articles about Server-Side Swift – certainly not as many as there are about Java or JS – so you have to figure out a lot of things on your own. In my opinion, Swift is also faster than other languages. If you want to compare it with others, here is the benchmark of Server-Side Swift frameworks vs Node.js.

Tools

SPM (Swift Package Manager) – manages the distribution of Swift code. It is integrated with the build system to automate the process of downloading, compiling and linking dependencies, and it runs on macOS and Linux.
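As an illustration, a minimal SPM manifest pulling in a Server-Side Swift framework might look like the sketch below. The package name is a placeholder, and the Vapor version pin simply reflects the 3.x line mentioned in this article:

```swift
// swift-tools-version:5.0
import PackageDescription

// Hypothetical manifest: "HelloServer" is a placeholder name,
// and the dependency pin is only an example.
let package = Package(
    name: "HelloServer",
    dependencies: [
        .package(url: "https://github.com/vapor/vapor.git", from: "3.0.0"),
    ],
    targets: [
        .target(name: "HelloServer", dependencies: ["Vapor"]),
    ]
)
```

Running `swift build` next to this file makes SPM resolve, download and compile the dependency graph automatically.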

Xcode – Apple’s IDE for compiling and debugging Obj-C and Swift code.

AppCode – JetBrains’ IDE for compiling and debugging Obj-C and Swift code.

Other text editors like Brackets, VS Code, Sublime etc. + SPM

There are also a bunch of framework-specific tools, and I will say a bit more about them in the quick overview below.

Overview

Vapor

Vapor has a very active community and a simple syntax. The framework focuses on being written purely in Swift, and its API is very readable and understandable because it follows Swift’s naming conventions.

Data formats:
JSON, XML

Databases:
MySQL, PostgreSQL, SQLite, MongoDB

Cloud:
Heroku, Amazon Web Services and Vapor Cloud

Tools:
Vapor toolbox

Perfect

The most popular framework for Server-Side Swift development; as the authors claim, it is production-ready and can be used in commercial products. It is highly recommended, as it’s big, powerful and fast. The framework is updated on a monthly basis.

Data formats:
JSON, XML, native support for direct operations on files, directories and .zip files

Databases:
MySQL, PostgreSQL, SQLite, CouchDB, MongoDB, MariaDB, FileMaker

Cloud:
Heroku, Amazon Web Services

Tools:
Perfect Assistant

Kitura

Kitura is a web framework developed by IBM. It is unique in its approach to databases: for SQL databases, it uses an abstraction layer called Kuery instead of raw queries. Kuery supports MySQL, SQLite and PostgreSQL. Kitura also has support for Redis, CouchDB, Apache Cassandra and ScyllaDB, but through native packages that are not related to Kuery.

Supported Data formats:
JSON, MQTT via Aphid (IoT messaging)

Databases:
MySQL, PostgreSQL, SQLite, CouchDB, Apache Cassandra, ScyllaDB

Cloud:
Heroku, Amazon Web Services, Pivotal Web Services, IBM Bluemix

Tools:
Kitura CLI
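All three frameworks list JSON among their supported data formats. In Swift, JSON handling typically goes through the standard Codable protocol, which works with Foundation alone, so the same model types can be reused regardless of which framework you choose. A minimal sketch:

```swift
import Foundation

// JSON handling in Server-Side Swift usually relies on the
// standard Codable protocol; no framework is required.
struct User: Codable, Equatable {
    let id: Int
    let name: String
}

let user = User(id: 1, name: "Grape")

// Encode to JSON bytes, then decode back into a typed value.
let data = try! JSONEncoder().encode(user)
let json = String(data: data, encoding: .utf8)!
let decoded = try! JSONDecoder().decode(User.self, from: data)

print(json)
```

Because encoding and decoding are type-checked at compile time, malformed payloads fail with a thrown error rather than producing silently wrong values.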

Summary

Server-Side Swift is a fast-growing solution and can be used commercially with the newest technologies like cloud. After a few years, we have a mature language and amazing frameworks which constantly make Server-Side Swift better. When it comes to Vapor and Perfect, both of them seem to be in pretty good shape by now. Vapor, for example, has been growing quite fast since its first release in September 2016, and version 3.1 is currently available. Perfect has similar qualities and was first released in November 2015. As I mentioned before, I would personally recommend playing a bit with each framework before choosing the right one.

Tutorials and code examples

Perfect

  • Source
  • Slack
  • Tutorials
  • Ray Wenderlich course
  • Sample code
  • Github

Vapor

  • Source
  • Discord
  • Tutorials
  • Ray Wenderlich course
  • Sample code
  • Github

Kitura

  • Source
  • Slack
  • Tutorials
  • Sample code
  • Github

Swift NIO:

Released by Apple on 1 March 2018 – a low-level, cross-platform, event-driven network application framework.

  • Github
  • NIO reference
  • Tutorial

Other frameworks

There are also Zewo and NozelIO, which are likewise based on Swift: Zewo is a Go-style framework, while NozelIO is a Node.js-style framework.

  • Zewo Github
  • NozelIO Github

Swift for specific OS

  • Linux
  • Windows
written by
Grape up Expert