Thinking out loud
Where we share the insights, questions, and observations that shape our approach.
3 questions to Michał Darkowski, Project Manager at Grape Up
So you’re a new project manager, now what? Michał Darkowski, Project Manager at Grape Up, shares some valuable insights that will help you nail project management. Since he joined the company in 2013, he has led a variety of projects implemented in multiple technologies, with teams ranging from 1 to more than 15 members. In addition, Michał has a solid software development background.
1. What are the 3 questions that every project manager should ask their team most often and why?
Michał: The first one would be, "What’s slowing us down?" You will be surprised how much insight you can get from the answer. First of all, you will learn what needs to change to make it possible to deliver faster, which is very important in today’s fast-paced IT industry. You will also learn what impacts your team’s productivity and affects their morale. Perhaps something can be automated? Maybe the team is doing something that doesn’t bring any value? Maybe there’s something that you, as the PM, are doing that you shouldn’t? I think every project manager should strive to make their team as efficient as possible on a daily basis, and this question creates space for that.
The second question is, "What would it look like if it was easy?" Ever since I read it in a book by Tim Ferriss, it has been one of my favorite questions. We often think that complicated problems require even more complicated solutions. This question completely changes the perspective and perfectly matches the KISS principle (keep it simple, stupid), which I’m a huge fan of. Whenever someone on your project team is stuck on a challenging problem, asking this can help re-frame their thinking and lead them to the answer they are looking for.
And last but not least, the simple yet powerful "How are you?" It is important to know the atmosphere in the team and get along well with all its members. A little chit-chat in the kitchen can only do good things for your relationship with the team and also has a positive impact on your project.
2. What was the biggest problem you encountered as a PM and how did you overcome it?
Michał: Some time ago, my team took over a significant part of an in-development application from one of our customers’ internal teams.
The problem was pretty complex. We got a codebase without documentation and with a terribly confusing API. We didn’t know how many of the requirements had already been covered by the existing code. The requirements themselves were poorly defined; in fact, they were more a bunch of ideas and keywords than a list of proper product requirements that developers could follow. To make matters worse, the team that had previously worked on this project was no longer around.
How did I overcome it? Long story short, by problem decomposition and teamwork.
Once we gathered all the questions and unknowns, we were able to clearly understand why our situation was difficult. The next step was to involve the entire team in gathering all the answers.
Which requirements are already covered? Well, let’s try to use the application ourselves and see what happens. Ambiguous acceptance criteria? Let’s properly define them and consult the customer to ensure that this is indeed what the product should do.
API issues? Let’s successively clean them up, one piece at a time.
Of course, all this was a huge effort for each team member and it took plenty of time, but seeing improvement every single day was very motivating.
Thinking about it today, I’m very proud of what we achieved and how significantly the project has changed since day one.
In my opinion, the approach to all problems consists of four simple steps.
- Analyze the current situation. What don’t you like about it? Try to break it down into smaller, preferably independent issues.
- Define the desired situation. Imagine that a year from now all the issues are solved - what does it look like?
- Think of at least one action that will get you closer to the desired situation you have just imagined.
- Execute! Repeat the process if needed.
Keep it in mind the next time you feel overwhelmed and believe that you’re facing a dead-end situation.
3. What’s the most useful tool you have discovered in the last 6 months?
Michał: That would be Evernote. We all take notes, don’t we? Doing it efficiently and keeping all your notes organized is a very important, yet underrated, need. With Evernote, which, as you already guessed, is a note-taking app, it is fun and very simple. I can say for sure that it took my note-taking to a new level. On a side note, this blog post isn’t sponsored! From rich formatting, to attaching files and inserting images that can be easily annotated, to saving interesting web pages with just two clicks – it’s all there!
To be honest, I tried Evernote for the first time in 2016 and was a little disappointed with the tool, and at the same time surprised that so many people recommended it. I gave it a second chance recently and I don’t regret it. Going all-in was the crucial part: I moved all my txt and docx files from Dropbox there, along with all the screenshots with important information from my phone, and I also imported pictures of my last paper notebook. Finally, I have everything in one place, and with Evernote’s powerful search tool I can find anything in no time. This month I even convinced a few of my colleagues to use it!
Capturing objects in closures: why you’re doing it wrong – Part 1
The basics of closures
Many modern programming languages have the concept of a closure. Closures are self-contained blocks of functionality that can be passed around and called. Additionally, they can work with environmental data (variables, functions, types, etc.) captured from the outer lexical context in which the closure is defined. A closure may be bound to a variable (constant, parameter, etc.) and called through it.
With that said, there are two main aspects: the code of the closure itself and the data captured from the context. When talking about closures, we usually mean the code. However, the context is no less important and, most often, it is the trickiest part of closures.
Retain cycles
Each programming language or runtime environment may handle and store the closure’s context differently. Nevertheless, special attention should be paid to languages that use reference counting semantics to deal with objects. In such languages, we face the retain cycle problem. Let’s quickly refresh what we already know about reference counting and retain cycles, and how to deal with them.
In a reference counting environment, each object has an associated reference counter which shows how many other objects hold a strong reference to it. When the object is assigned to a variable representing a strong reference, its reference (retain) count is increased. When such a variable goes out of scope, the reference count decreases. As soon as the count reaches 0, the object is deallocated. Therefore, if a sequence of objects cyclically reference each other, none of those objects will ever be deallocated. Why? Because even if no references point to any object in the sequence from outside the cycle, each of these objects still has a strong reference from inside the cycle which “retains” it. Such a situation is called a retain cycle, and it results in a memory leak.
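The mechanics above can be sketched in a few lines of Swift (the Node class and its names are ours, for illustration only):

```swift
class Node {
    var next: Node?               // strong reference by default
    let name: String
    init(name: String) { self.name = name }
    deinit { print("\(name) deallocated") }
}

func makeCycle() {
    let a = Node(name: "A")
    let b = Node(name: "B")
    a.next = b    // A retains B
    b.next = a    // B retains A: a retain cycle
}

makeCycle()
// a and b went out of scope, but each object still holds a strong
// reference to the other, so neither deinit runs and nothing is
// printed: both objects are leaked
```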

Usually, languages in a reference counting environment provide a special type of reference, called a weak reference, which doesn’t increase the retain count of the object it points to. Instead, when the object is deallocated, each weak reference to it is nilled at runtime. The right way to avoid retain cycles is to use weak references. To break the retain cycle in the previously mentioned object sequence, it would be enough to make one object (for simplicity, let’s say the last one in the sequence) hold a weak reference to the next object (the first one).
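In Swift, the fix looks like this (again a sketch with names of our own): marking one side of the relationship as weak lets both objects be deallocated.

```swift
class Node {
    var next: Node?            // strong: keeps the next node alive
    weak var previous: Node?   // weak: does not increase the retain count
    deinit { print("deallocated") }
}

func makeChain() {
    let first = Node()
    let last = Node()
    first.next = last          // strong reference forward
    last.previous = first      // weak reference back: no cycle
}

makeChain()
// Both objects are deallocated when the function returns,
// so "deallocated" is printed twice
```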

However, let’s go back to closures and their context. We will use Swift for the code examples, but keep in mind that all the solutions are applicable to any language that has closures and a reference counting environment. If a closure uses objects from its outer context, it captures references to those objects. The most common retain cycle happens when an object stores a closure while the closure (directly or through a reference chain with other objects) references the object itself with a strong reference. Take a look at the following example:
database.observeChanges(recordID) { snapshot in
    // Some code...
    database.stopObserving(recordID)
    // Doing other stuff...
}
In the example above, the closure is retained inside the method so that it can be called later. On the other hand, the closure retains the database object, since the object is used inside it.
Note that in simple situations, when the closure is not stored by the object (like closures passed to the forEach() method of Sequences in Swift) or when the closure is released by the object after the call (like with the DispatchQueue.async() method), the retain cycle does not occur. Therefore, we won’t consider such cases and will instead focus on situations when the retain cycle takes place. The language/environment should provide a mechanism to deal with retain cycles for objects captured by a closure. That is, we should have a way to specify how an object should be captured – by a weak or a strong reference. In Swift closures, we have a capture list that allows us to specify how each object should be captured. Let’s take a look at the following example.
let completion = { [weak dataBaseAccessor, someController] (error: Error?) -> Void in
    otherController.someAction()
    // Doing other stuff
}
The capture list allows us to explicitly specify which variables are captured and what type of reference (weak or strong) should be used to capture them. In the example above, the capture list is defined at the beginning of the closure header: [weak dataBaseAccessor, someController]. In the given code, dataBaseAccessor is captured by a weak reference, while someController and otherController (which is not explicitly specified in the capture list) are captured by strong references. Note that apart from weak and strong references, Swift has the concept of an unowned reference for the capture list, but its usage is strictly limited to the case when the captured object’s lifetime is guaranteed not to be shorter than the lifetime of the closure. Otherwise, its usage is unsafe. We won’t consider this type of reference here.
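To see what a weak capture means in practice, here is a small self-contained sketch (the Service type and its names are ours): the closure does not keep the object alive, and once the last strong reference is gone, the captured reference becomes nil.

```swift
class Service {
    func run() -> String { return "running" }
}

var service: Service? = Service()

// The capture list stores a weak reference to the current value of `service`
let report: () -> String = { [weak service] in
    return service?.run() ?? "service is gone"
}

print(report())   // "running": the object is still alive
service = nil     // the last strong reference is released
print(report())   // "service is gone": the weak capture was nilled
```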
Let’s look at an example of a retain cycle caused by a closure. In the example below, we avoid the retain cycle by specifying that self should be captured by a weak reference.
class DatabaseController: DatabaseAccessorDelegate {
    // Some code ...
    var requestCompletion: (([String : Any]?, Error?) -> Void)?

    func requestData(query: Query, completion: @escaping ([String : Any]?, Error?) -> Void) {
        requestCompletion = completion
        dataBaseAccessor.performQuery(query)
    }

    // DatabaseAccessorDelegate
    func accessor(_ accessor: DatabaseAccessor, didRetrieveData data: [String : Any], error: Error?) {
        requestCompletion?(data, error)
    }
}
class ViewController {
    let databaseController = DatabaseController()
    var cachedData: [String : Any]?

    // Other code ...
    @IBAction func dataRequestButtonAction(_ sender: UIButton) {
        databaseController.requestData(query: query) { [weak self] dataSnapshot, error in
            guard let strongSelf = self else {
                return
            }
            strongSelf.cachedData = dataSnapshot
            strongSelf.updateUI()
        }
    }
}
When calling databaseController.requestData(), self has the weak modifier in the completion closure. To ensure that self won’t be deallocated during the execution of the completion, we assign it to the strong reference strongSelf at the beginning. If self was already deallocated by the time the completion closure is called, we simply exit the closure. Why is this trick needed here? Let’s trace the retaining relations between our objects. The ViewController retains the DatabaseController. In turn, the DatabaseController retains the completion closure in requestData(). If the completion retained the ViewController via the self variable, we would get a retain cycle.
This approach is often described in terms of “weakifying” and “strongifying”. Suppose you need to use some object inside a closure, but doing so could lead to a retain cycle. Capture a weak reference to the object when defining the closure (“weakifying”), then check whether the object still exists (the reference is not nil) at the beginning of the closure. If it’s still there, assign it to a reference with strong semantics (“strongifying”); otherwise, exit the closure. This pattern is what we actually use in 99% of cases when it comes to closures with potential retain cycles.
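The pattern can be condensed into a self-contained sketch (all type and property names here are ours). Note also that since Swift 4.2, self can be rebound directly in the guard, so a separate strongSelf name is no longer required:

```swift
final class Worker {
    var storedClosure: (() -> Void)?

    func schedule(_ closure: @escaping () -> Void) {
        storedClosure = closure    // the Worker retains the closure
    }
}

final class Screen {
    let worker = Worker()
    var value = 0

    func start() {
        // "Weakify" when capturing, "strongify" when executing
        worker.schedule { [weak self] in
            guard let self = self else { return }   // Swift 4.2+; earlier: strongSelf
            self.value += 1
        }
    }
}

let screen = Screen()
screen.start()
screen.worker.storedClosure?()
print(screen.value)   // 1: self was still alive, so the body ran
```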
Well, programmers try to find (and in most cases reasonably so) a “golden rule” for almost anything. However, when it comes to the closure context, it’s not as simple as it seems. This pattern is often overestimated and overused. It’s not a silver bullet (even though many programmers consider it one). As a matter of fact, it may completely break the logic of your closure when used unwarily.
A Closer Look
Our previous example is a simple case, so the described pattern is pretty useful and works well. But let’s go further. As a matter of fact, some areas are more complicated than they look. To demonstrate this, we’ll extend our implementation of the ViewController a bit.
class ViewController {
    // Other code ...
    @IBAction func dataRequestButtonAction(_ sender: UIButton) {
        databaseController.requestData(query: query) { [weak self] dataSnapshot, error in
            guard let strongSelf = self else {
                return
            }
            strongSelf.cachedData = dataSnapshot
            strongSelf.updateUI()
            let snapshot = dataSnapshot ?? [:]
            NotificationCenter.default.post(name: kDataWasUpdatedNotification,
                                            object: nil,
                                            userInfo: [kDataSnapshotKey : snapshot])
        }
    }
}
We are now notifying about the retrieved data using the notification mechanism. In this case, if self was deallocated, we just return from the closure. But this also skips posting the notification. So even though the data was successfully retrieved, observers of the notification won’t get it, just because the ViewController instance was deallocated. Thus, the approach of exiting the closure creates an unnecessary and – more importantly – implicit dependency between the presence of the ViewController in memory and sending the notification. In this case, we should try to obtain self and, if it’s there, proceed with the calls on it, and then, independently of the state of the self object, post the notification. See the improved code below:
@IBAction func dataRequestButtonAction(_ sender: UIButton) {
    databaseController.requestData(query: query) { [weak self] dataSnapshot, error in
        if let strongSelf = self {
            strongSelf.cachedData = dataSnapshot
            strongSelf.updateUI()
        }
        let snapshot = dataSnapshot ?? [:]
        NotificationCenter.default.post(name: kDataWasUpdatedNotification,
                                        object: nil,
                                        userInfo: [kDataSnapshotKey : snapshot])
    }
}
This allows us to make an important step towards understanding how the context is used in closures. We should separate code that depends on self (and may cause the retain cycle) from code that doesn’t depend on it.
It’s all about associating
Determining which part of the closure code really depends on self (or, generally, on an instance of the type) and which doesn’t is trickier than it seems. Let’s refactor our example a bit. Suppose we have a delegate that should also be notified. Also, assume we need to notify about data changes in several places across our code. The right thing to do is to place the code notifying clients into a separate method.
@IBAction func dataRequestButtonAction(_ sender: UIButton) {
    databaseController.requestData(query: query) { [weak self] dataSnapshot, error in
        guard let strongSelf = self else {
            return
        }
        strongSelf.cachedData = dataSnapshot
        strongSelf.updateUI()
        let snapshot = dataSnapshot ?? [:]
        strongSelf.notifyUpdateData(snapshot: snapshot)
    }
}

func notifyUpdateData(snapshot: [String : Any]) {
    delegate?.controller(self, didUpdateCachedData: snapshot)
    NotificationCenter.default.post(name: kDataWasUpdatedNotification,
                                    object: nil,
                                    userInfo: [kDataSnapshotKey : snapshot])
}
At first glance, all the calls in the code above depend on self (they are called on it), so the approach of exiting the closure when self is nil seems applicable. However, in essence, our refactored code does exactly the same thing it did before. Therefore, if self is nil, a notification will be missed. How do we deal with this situation? The problem lies in the wrong criterion chosen to decompose the code. We’ve combined the delegate callback invocation and the posting of the notification into a single method because these actions are semantically close to each other. However, we didn’t pay attention to the dependency on a particular instance (expressed by calling the method on self). The delegate has a strict dependency on the instance because it is a property of the ViewController (a part of its state). Sending the notification is not related to any instance of the ViewController; rather, it relates to the type itself. That is, the proper place for the delegate call is an instance method, while the notification-related code should be placed in a class method.
@IBAction func dataRequestButtonAction(_ sender: UIButton) {
    databaseController.requestData(query: query) { [weak self] dataSnapshot, error in
        let snapshot = dataSnapshot ?? [:]
        if let strongSelf = self {
            strongSelf.cachedData = dataSnapshot
            strongSelf.updateUI()
            strongSelf.notifyDelegateDataWasUpdated(snapshot: snapshot)
        }
        ViewController.notifyDataUpdate(snapshot: snapshot)
    }
}

func notifyDelegateDataWasUpdated(snapshot: [String : Any]) {
    delegate?.controller(self, didUpdateCachedData: snapshot)
}

class func notifyDataUpdate(snapshot: [String : Any]) {
    NotificationCenter.default.post(name: kDataWasUpdatedNotification,
                                    object: nil,
                                    userInfo: [kDataSnapshotKey : snapshot])
}
The example shows that whether code is associated with an instance or with the type itself plays an important role when the code is used inside closures.
Generally, when choosing between placing the code to the type method or to the instance one, use the following “rules of thumb”.
1. If the task done by your code is not associated with any particular instance, place your code in the type (class/static) method. Such code does not manipulate (neither accesses nor alters) the state of any particular instance. This means that such task will make sense in case:
- no instance of that type exists;
- the method is dealing with multiple instances of the same type in the same manner.
2. Otherwise, place the code into the instance method.
3. If the code has mixed associations (one part doesn’t depend on any instance while another does), the task should be decoupled into smaller tasks, and these rules should then be applied to each task separately. This way, you will be able to decompose the code.
These rules are also helpful when dealing with closures. That is, static functions are not associated with a particular instance, so such code should be called independently of the code that relates to the self object.
Let’s sum up what we have at this point. In general, inside the closure we should distinguish the code that does not relate to the self instance (which retains the closure) from the rest, so that when executed, it does not rely on the existence of the object referenced by self. The rules above describe how to determine what should be associated with a particular instance and what should not. Thanks to that, they are helpful even if we’re not going to extract the code into a static method, but rather use it inline inside the closure.
However, this is only the tip of the iceberg. The true complexity of proper closure context usage resides in the code that depends on an instance of a particular type. We’ll focus on such code in the second part of this article.
Dojo is the way: DevOps Enterprise Summit 2018 key takeaways
The DevOps Enterprise Summit entered its third year in Europe on June 26-27, 2018 in London. Every year, its goal is to accelerate DevOps adoption in large companies.
In London, participants had a chance to network and connect with speakers during and after sessions. These networking opportunities came in various forms, such as online discussions, open panels with subject matter experts, or casual lean coffee breaks. All in all, the event gathered leaders from over 500 organizations – companies that focus on leveraging DevOps to thrive in an increasingly software-driven economy.
Let me now underline five takeaways that I personally consider to be the most important:
Dojo is the fastest and most effective way to drive change
Introducing change to an organization is very challenging, especially when all teams are fully focused on delivering their current targets. Issues vary by department and require different approaches to solving them. In that case, what is the best solution for overall improvement? According to CapitalOne, Dojo is the way. If executed properly, Dojo is an immersive learning model that fosters culture and creates an environment for knowledge exchange, giving the company a jump-start into the DevOps journey. However, effective dojo execution takes years of hands-on experience and in-depth knowledge.
Focus on the outcome, not the function
Nick Funnell, VP of Cloud Technology at Barclays Investment Bank noticed that as a company grows, the people who build it lose their focus on the final outcome. Instead, they pay more attention to new functions that appear along with the bigger size of the company. This is often the key reason why startups fail to maintain their performance acceleration once they enter the stage of a larger enterprise. In the era of very dynamic markets and almost zero time-to-market for new players, this has to be changed by companies that want to move forward. Without a doubt, the DevOps approach is a way for them to bring back the startup spirit and attitude.
Software driven infrastructure is what moves business forward
Over 30% of workers at Barclays are tech people. In the last decade, the ranking of the largest companies in the United States was dominated by technology companies. Software is responsible for a large share of product value of many physical products. On top of that, more developers are hired now in other industries than in IT. Inevitably, software is eating the world and only companies that will structure themselves around software delivery will prevail.
Broken is a normal state
Richard Cook, a professor at The Ohio State University, touched on a very sensitive area of the quality of IT systems: as we develop software and infrastructure, we stress about making them fail-proof, stable, and perfect. Yet, there are parts of the software that will turn out to be broken. Both apps and systems constantly evolve with features and functionalities that increase their complexity and, along with that, their defectiveness. This is why, when developing software or building infrastructure, we have to remember that errors may and will occur. The one thing for all of us to keep in mind throughout the creation process is that broken is a perfectly normal state. This approach gives a different perspective that allows us to build things differently and to prepare ourselves and our systems to deal with failure in the long run.
The system is defined by how you deal with incidents
System breakdowns are inevitable. If we can’t avoid them, the one thing that we can do is deal with them and learn. Drawing conclusions from incidents is crucial, but not easy. Since they are an inseparable part of the process, the best approach is to treat them as an investment and try to make the best ROI out of them. If we whitewash and ignore them, we will irretrievably lose a chance to learn from our own mistakes. Usually, human error is just the starting point of a proper investigation which leads to uncovering unknown system flaws.
The love-hate relationship between testers and developers
Admit it. Every time we think of the major forces that shape the world such as tech or science, we think of rocket launches, medicine, or even media that form our perceptions. Not many of us think of the people behind the scenes, the relationships between them, what inspires them and makes them drive it all forward. We just assume that things ”get done”.
Back in the day, people responsible for software development were usually hidden in their garages, they were a bunch of data geeks and brainy visionaries. The division between developers and testers was… non-existent. It truly is mind-blowing how much has changed since then! We literally hold the entire Internet in our palms. Apps and software play key roles in our daily lives. They teach, entertain, inform, and enhance our lives in so many ways. So why is the dependency between testers and developers so essential to that immense progress?
Most testers love their jobs, and they get paid to simply ”break stuff” and be annoying. The pickier you are about those bugs, the better. The problem is that nobody wants to hurt anyone’s feelings and no one wants to be called out for their mistakes. The whole relationship between QA teams and developers is based on testers trying to spot errors made by developers who take so much pride in their work. So it all boils down to being professional, precise, but still polite. And that’s how the story begins.
No outcome is good enough
Both testers and developers work towards a common goal — to build a high-quality and bug-free product. They are equally responsible for the final outcome. However, since testers are solely responsible for catching bugs and stopping bug-infested code from being released, there are two very common situations that QA teams find themselves in:
- The tester has found a bug
- The tester hasn't found a bug
Conclusion: neither situation is perfect
In the first case, it usually goes the same old way. After a bug has been found, developers ask testers whether it really is their problem. Perhaps they can put the blame on something else and not have to deal with the problem at all ;)? Or they will ask way in advance about the number of bugs they should expect, as if testers had superpowers and predicting the future was one of them. And they really DO expect a precise number. The answer is very simple – as many as you have made in the code.
The second scenario creates the perfect opportunity for developers to start an open discussion with testers. If there is a leftover bug that hasn’t been pointed out before, some will say: well, this may actually be your fault – this bug was so painfully obvious!
But no hard feelings. Even if such situations take place, it's all for the greater purpose.
It works for me!
There is a certain saying that is often used by some developers. Not all of them, just some. They say it when customers encounter bugs. They say it when QA says that something still doesn’t work.
It works on my machine!

Perhaps everyone has come across it on the Internet, or at work. That’s the all too famous excuse that developers say when confronted with a defect in the production environment. However, anyone who has been in IT for a while or scrolls through memes on a regular basis knows that developers say it as a joke. And that's a fact!
If testers could share one expert tip with all developers, it would be to push all updates before handing the task over to QA. It avoids releasing things that don’t work to production. It also saves developers the embarrassment of telling someone they are wrong when in reality they are right.
Let’s go, give it another try to see what you’re doing wrong … Oh, I see!
However, that isn’t something experienced engineers say. As they progress in their careers, most of them come to realize how valuable QA feedback is. Senior developers know that testers want the best for them and that they’re not a bunch of manipulative liars. They cooperate with QA to iron out any bugs before they get to production. Most often, it is a matter of pride and admitting that no one is perfect and we all make mistakes – even in our precious code. And that comes with time. At Grape Up, we strive to maintain healthy cooperation between testers and developers every day.
What does it take to be a good tester?
Not just anyone can be an exceptional tester. QA team members wear many hats: they must have an eye for detail, but also think outside the box like a user. They must know how to prioritize and put on a brave face in all situations. Perfect communication skills are a must, while it is highly appreciated that they learn from their own mistakes – and from others’ too. Being objective and professional is also a nice-to-have. Last but not least, every tester should know how to say “no” to a developer when something isn’t their job.

This is important, as sometimes QA members, after having identified an issue, are asked to fix the bugs themselves. This, of course, should always be done by a developer – not a tester. Therefore, assertiveness above all! That’s just the tip of the iceberg of what employers require from testers. But with practice, QA teams learn to harness their inner Sherlock Holmes and hunt down every bug to build an impeccable piece of software.
After all, love is in the air
Putting everything aside, developers and testers do love each other. These two groups come together in symbiosis to create a product that works for the end user. Symbiosis, by definition, is a relationship of mutual benefit for both sides. And just like in the case of developers and testers: both groups give and take. Testers give feedback; developers take that feedback, give their time to eradicate bugs, and give back a bug-free product. Because after all, we’re all in this together, right?
Top five tips for DevOps in the cloud
At Grape Up, we define DevOps as an engineering culture which unifies software development and operations, from first designs through development to production. If followed correctly, DevOps bridges the gap between these two IT groups. DevOps engineers often work at the crossroads of coding, process re-engineering and communicating with others.
Cloud, without a doubt, is inseparable from the DevOps approach. As a philosophy built on mutual collaboration and streamlined development, DevOps can still pose many challenges to IT experts attempting to apply these practices to the cloud. We’ve put together this ultimate guide to help you avoid the top barriers.
Invest in proper cloud and DevOps training
There are a number of technological obstacles that people new to DevOps must overcome – from fragmented toolsets to neglected test automation while focusing on CI/CD deployment. In addition to these, there are cultural differences specific to every company and challenges all IT teams must face and to which they must adapt. Companies should build a culture that is collaborative and focuses on shared goals. This also means finding team members who are DevOps champions within the organization.
Go cloud-native
If you’ve already decided to go for the cloud, you should be all in! The world’s digital-first companies like Facebook or Amazon already design their applications to be completely decoupled from physical resources and have moved them to cloud application platforms such as Cloud Foundry. In this way, they are able to abstract away the infrastructure and spend more time innovating. Doesn’t this prove that cloud-native is the way to go?
Make containers an option
Containers help make your applications easily transportable and manageable. If applications are split into modules, changes to each of them can be made without the need to touch the entire application. Since a limited number of components require sophisticated adjustments, using Cloud Foundry for a typical application and Cloud Foundry Container Runtime for running customized containers is the most productive way for handling any type of workload.
Make security your priority
Since cloud security constantly changes, you have to make sure that your security doesn’t end where your DevOps tools begin. It should be an essential part of automated testing. Including it in the CI/CD processes is a good idea. Some companies even decide to hire a Chief Security Officer in charge of monitoring DevOps security in the cloud.
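As an illustration, automated security checking can be wired into the pipeline as a gate stage that fails the build when the scanner reports serious findings. The sketch below is hypothetical: the report file and its severity field are assumptions for illustration, not any specific scanner's format.

```shell
#!/bin/sh
# Hypothetical CI/CD gate stage: fail the pipeline when the security
# scanner's report (format assumed for illustration) contains
# high-severity findings.
run_security_gate() {
  report="$1"
  count=$(grep -c '"severity": "HIGH"' "$report")
  if [ "$count" -gt 0 ]; then
    echo "security gate: FAILED ($count high-severity findings)"
    return 1
  fi
  echo "security gate: passed"
}
```

A stage like this slots naturally between the build and deployment steps, so insecure builds never reach the cloud platform.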
Earn support from the Board
Sometimes, all it takes to gain resource and budget support is the encouragement and assistance of the senior executives. Take the time to educate your Board about the benefits of DevOps so you can start putting the DevOps practices into place as soon as possible.
Key takeaways from SpringOne Pivotal in London
SpringOne by Pivotal is one of the most anticipated series of events for developers around the world. The London edition gathered the brightest and boldest minds of the technology industry and hosted a series of presentations on Application Development, DevOps, CI/CD, Cloud and more.
This event brought together around 60 participants from across the world. We had the chance to attend presentation tracks, network, promote, and showcase our services.
SpringOne Tour in London covered Java Spring in the cloud-native world, with a focus on Event-Driven Architecture, which is strongly connected with the widely discussed Reactive Spring. Other topics centered on cloud technologies, so we saw a demo of Pivotal Cloud Foundry as well as a serverless implementation with riff. After the event, we can confidently say that Reactive Spring is already used in many projects and will be used far more frequently, as it ideally suits cloud-native technologies.
Alongside other industry-leading speakers from Pivotal, we had the opportunity to listen to Jakub Pilimon, Spring Developer Advocate, Pivotal Trainer and Consultant who addressed the audience on the topic of Event Driven Architecture with Spring Cloud Stream 2.0. He explained the basics of Domain Driven Design and how they connect with Event Driven Architecture.
During the presentation, Pilimon built a credit card management system from scratch. To do that, he used the very interesting approach of Event Storming, in which all possible events in the domain are captured on orange sticky notes. Then the commands that directly cause each event are captured on blue notes.
As the final step, he defined the invariants which were the necessary conditions for the domain event to occur. Based on that, we received a clear view of the business logic and the design of the domain that we want to implement.
Besides architecture and different design approaches, Jakub showed real code, for which he used Spring Cloud Stream and tested it with RabbitMQ. In general, the whole Event-Driven Architecture concept will surely become more popular than ever with cloud-native distributed architectures.
Oracle shakes the world of Java releases
While Java 10 is said to be the new kid on the block, rumors about 11 are already spreading fast. Now, let’s be honest: how many of you have upgraded to Java 10? I bet the majority of us are still sticking with the good old 8. And do you remember the story of Java 9? It became obsolete in just six weeks! With the release of Java 10 on March 20, 2018 and Java 11 right around the corner, it is safe to say that Oracle is moving toward faster release cycles, which will give us more features sooner than ever. The question is: is it really a change for the better?
Why so often, Oracle?
One may wonder about the reason behind such frequent releases. It comes down to serious changes Oracle has made to Java as a platform. First of all, new versions are now released every six months, in what is called a time-based release model. Before Java 9 we had a feature-based model and, as a result, a new version every two to three years. Oracle faced constant criticism about Java’s slow evolution; on the other hand, developers had more than enough time to adapt to changes. Right now, we are left with the question: is six months enough to assume a release is mature and stable? Maybe instead of resolving the issue, Oracle jumped from one extreme to the other?
In the light of these changes, not all developers are enthusiastic, as the new cadence requires a persistent migration effort. It raises many questions:
- Will my dependencies work on the next version?
- Will my IDE be ready?
- Will my build tool (Maven, Gradle, etc.) be ready?
- Will my other tools (SpotBugs, Checkstyle, PMD, etc.) be compatible?
Right now, most of the software mentioned above is available for free. What if their authors decide to start charging for them? They will certainly bear some costs because of the faster Java release train. Until now, Java as a platform had been considered free and open, and in my opinion that is what determined its success.
What does the future of Java hold?
Additionally, Oracle has announced that updates and patches will be published for free only for the latest version of Java (this doesn’t apply to LTS releases, which I describe later). If the latest version is X and you are on X-1 and would like Oracle to provide a security patch, you will have to pay. OK, and what about OpenJDK? Well, theoretically, the OpenJDK community could release security updates, but there is no sign of this yet.
Simon Ritter, Deputy CTO at Azul Systems and a Java & JVM technologies enthusiast, recently stated on his blog that right now Java can have all three qualities: stable, secure and free. But things are changing, and the day will come when we will have to choose only two.
For the purpose of this article, let’s assume that Java 9 is stable now. But the latest version is 10, and free security updates are available only for Java 10. So if you need a Java that is both stable and secure, it’s not going to be free of charge. If you don’t want to spend money at that point, you have to choose between stability (9) and security (10, with critical security updates provided).
However, every cloud has a silver lining. For enterprises and organizations that don’t necessarily want or need to upgrade at a rapid pace, Oracle says it will provide a long-term support (LTS) release every three years. Specific JDK versions will be identified as LTS releases, for which publicly available updates will be provided for at least three years. All other, intermediate releases (9, 10) will receive updates only for six months. JDK 8 has been classified as an LTS release; the next one will be JDK 11, to be released in September 2018. The last free update of Java 8 for business users will therefore be published in January 2019.
Summary
I think this is the main reason why most of us, as I wrote at the beginning, have stayed on Java 8. In a few months, some enterprises will probably migrate to Java 11, skipping 9 and 10. However, Java 11 will be significantly expanded compared to Java 8, which makes such a migration potentially expensive and risky. In summary: do you prefer one big step, from Java 8 to 11, or taking it one day at a time with smaller steps, without skipping a single version?
5 tips for transferring legacy apps to the cloud
With the growing popularity of cloud computing, legacy apps are driving toward an impending demise, and transferring legacy apps is becoming a must. Gartner forecasts that worldwide public cloud revenue will grow 21.4% in 2018 to a total of $186.4 billion, up from $153.5 billion in 2017. These staggering numbers are mainly due to companies taking advantage of the unbeatable benefits that PaaS, SaaS or IaaS provide for businesses in this fast-changing world.
You may not be fully aware of this yet, but the time will come when your system starts playing tricks on you and you will have to transfer your business to a cloud platform. Are you ready for transferring legacy apps?
As a company that specializes in cloud consulting, we have led many cloud platform implementation projects from start to finish. Most of them involved migrating data and supporting client teams throughout the entire process. We know how important it is to understand what comes along with such change and how to prepare for it in advance.
Choose the right platform
When shopping for the right solution, keep in mind that there are fake cloud providers that only claim to be "cloud". This phenomenon is so common that it has its own name: "cloud-washing". It stands for adding the "cloud" label to a product mainly for marketing reasons. Always make sure to carefully examine the product and see if it really supports cloud computing. Going for a fake provider can backfire and result in:
- Delayed and painful product upgrades,
- Expensive integrations,
- A lot of unplanned downtime,
- Poor security.
At Grape Up, we perform migrations to the Cloud Foundry platform, an open-source, multi-cloud application PaaS.
Motivate and engage your team
Not all of your team members will be enthusiastic about the change at first. Some may perceive it as the challenge of learning a new tool, or even fear falling behind the new technology. That is why it is crucial to assure them that proper support and training will be there for them. A good idea is to have managers speak with their teams ahead of time. An alternative is to delegate staff members to act as project managers during the migration.
Prepare for deployment
There are companies out there that delay the go-live date out of pure fear. The system might be ready, and so might 90% of the staff, but there will always be someone who won’t feel at ease about the change. Keep in mind that it’s a much better idea to go live even if you’re not fully ready than to put it off for another month or two.
Teach, explain, mentor
Train your staff and ensure that everyone is familiar with their responsibilities before the big day. Organizations like the Cloud Foundry Foundation or Pivotal offer cloud platform training courses and certification programs, some of them in a self-paced format.
Invest carefully
You’ve made the decision that you don’t want to maintain your own infrastructure in-house. Which is great! Just be careful with your expenses. Plan them out over a longer period of time and try to cover necessities first. Then, splurge a little on things that are nice-to-haves, but not must-haves.
Transferring legacy apps - summary
Where there’s a will, there’s a way – everyone can find a solution that meets their expectations. Whether you go for a full PaaS implementation or only need post-integration assistance, Grape Up offers full support, from platform enablement through the replatforming of existing apps to helping you build your own lab and scale. We will walk you through the steps above and make sure that your journey to the cloud goes smoothly.
In case of a broken repository, break glass: mitigating real-life Git goofs
Imagine that you’re busy with a project that’s become the lifeblood of your company. The release is just a few days away and the schedule is tight. You work overtime or spend your 9-to-5 switching back and forth between a multitude of JIRA tickets, pushing a lot of pull requests as new issues come and go.
Focusing on the task at hand between one coffee break and another is tedious, and once you finish and push that final patch to your remote, you stop for a second and get this tingle in your chest. "Is that some random piece of code that was not supposed to be there? Wait, the release branch? Oh, my gosh! So, what do I do now?"
As many of my co-workers and I have found ourselves in such a situation, I feel obligated to address this issue. Thankfully, the good folks behind Git made sure that undoing something we have already pushed to a remote is not impossible. In this article, I will explore the ways of recovering from your overtime mistakes, as well as their potential drawbacks.
Depending on what your workflow looks like
Merge and rebase are examples of Git workflows that are used most often within corporate projects. For those who are not familiar with either of them:
- The merge workflow assumes that your team uses one or more branches that derive from the trunk (often indirectly, i.e. having been branched out from development/sprint branches) and are then merged into the parent branch using the classic Git mechanism with a merge commit. This has the advantage of showing clearly when a given functionality was introduced into the parent branch, and it preserves commit hashes. It is also easier for VCS tracking systems (like BitBucket) to make sense of your repository’s progress. The drawback: your repository tree gets polluted with merge commits and the history graph (if you are into such things) becomes quite untidy.
- In a rebase workflow, features, after being branched out of their parents, are incorporated into the trunk seamlessly. The trunk log becomes straightforward and the history is easy to navigate. This, however, does not preserve commit hashes and, unless used in conjunction with a pull request tracking system, can easily result in the repository becoming difficult to maintain.
As it happens, there are many ways to fix your repository. These ways depend on your workflow and the kind of mistake. Let’s go through some of the most common errors that can take place. Without further ado…
I pushed a shiny, brand new feature without a pull request
As we all know, pull requests matter – they help us avoid subtle bugs, and it never hurts to have another pair of eyes look at your code. You can of course do the review afterwards, but that requires discipline to maintain. It is easier to stick to pull requests, really.
Working overtime or undercaffeinated often leads one to forget to create a feature branch before implementing an actual feature. Rushing to push it and test it leads to a mess.
Revert to the rescue
git revert
git revert is a powerful command that safely undoes code changes without rewriting history. Yet many developers still hesitate to use its full potential.
Revert works by replaying the given commits in reverse, that is, creating new commits that remove added lines of code, add back whatever was removed, and undo modifications. It differs from git reset in that it moves the history forward, with no rewriting involved. Thus, if somebody has already pulled down the tainted branch or introduced changes of their own, it is straightforward to address that.
Revert takes a single ref, a range of refs, or several arbitrary (unrelated) refs as an argument. A ref can be anything that resolves to a commit: a branch name (local or remote), a relative commit (using the tilde operator) or the commit hash itself. The sky is the limit.
git revert 74e0028545d52b680d9ac59edd3aff0ac4
git revert 74e002..9839b2
git revert HEAD~3..HEAD~1
git revert origin/develop~1 origin/develop~3 origin/develop~4
Reverting several changes at once
Normally, all commits in a range are reverted in order, one by one. If you wish to make your log a little tidier and revert them all at once, use the --no-commit (-n) flag and commit the result with a custom message:
git revert -n HEAD~3..HEAD~1
git commit -m "Revert commits x..y"
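The range flavor can be tried safely end to end in a scratch repository; the file names and messages below are purely illustrative:

```shell
#!/bin/sh
# Scratch-repo walkthrough: three commits, then revert the last two
# (the range HEAD~2..HEAD covers HEAD and HEAD~1) as a single commit.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo
echo one > file.txt;   git add file.txt; git commit -qm "first"
echo two > file.txt;   git commit -qam "second"
echo three > file.txt; git commit -qam "third"
git revert -n HEAD~2..HEAD        # stage the inverse of "third" and "second"
git commit -qm "Revert commits second..third"
cat file.txt                      # back to the first commit's contents
```

Note that in a range A..B, the commits reverted are those reachable from B but not from A, applied newest first.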
Undoing merges – wanted and unwanted
When using a merge flow, as opposed to a rebase flow, reverting changes becomes slightly more complicated – it requires the programmer to explicitly choose the parent (mainline) relative to which the changes are reverted. This usually means choosing the release branch side, so that the revert undoes the feature while preserving the mainline history.
Let’s suppose you base your feature off the development branch and name it feature-1. You introduce some changes into your branch while somebody merges changes of their own into development. Both your branch and the parent branch undergo some changes, and then you can proceed to merging.
After a while, the feature has to be pulled from the release because your team needs to work on it some more, and that requires reverting the changes you previously introduced. Some time has passed since then and many more features have been merged into the release.
Every merge commit has two parents. To revert yours into the state that preserves your and your team’s changes in the log, you would usually specify the mainline branch (-m) as the first one:
git revert 36bca8 -m 1
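A minimal scratch-repository demonstration of this, with illustrative branch and file names:

```shell
#!/bin/sh
# Create a merge commit, then revert it relative to the mainline (-m 1),
# which restores the state of the branch we merged INTO.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo
echo base > app.txt; git add app.txt; git commit -qm "base"
git checkout -qb feature-1
echo feature >> app.txt; git commit -qam "feature work"
git checkout -q -                      # back to the mainline branch
git merge -q --no-ff -m "merge feature-1" feature-1
git revert --no-edit -m 1 HEAD         # undo the merge, mainline as parent 1
cat app.txt                            # the feature's line is gone again
```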
However, once the team decides to reintroduce the change, you will be trying to merge in a set of diffs that is already in the target branch’s history, and Git will respond with a succinct message that everything is up to date. To mitigate that, we can switch to our feature branch and amend the commit (in case there is only one change to revert) or use an interactive rebase:
git rebase -i HEAD~n
where n is the number of introduced changes, marking each with the edit option. Amending the commits replays them without altering the changes or commit messages, but makes the commits appear different to Git, thus allowing us to reintroduce them as if they were fresh.
If you use rebasing, read this
Rebasing eases the burden of keeping track of merge parents – because there are no merges to begin with. The history is linear and, as such, reverts of unwanted code are straightforward to perform.
It may be tempting to use rewriting in this case, but keep in mind the golden rule of rewriting – unless you are absolutely sure that it’s only you and nobody else using that branch, don’t rewrite the history.
How (not) to use the --force
Not everybody is born a Jedi, and programmers are no exception. Force-pushing allows us to overwrite the remote history even if our branch does not perfectly fit into the scene, but it makes it dangerously easy to throw away changes somebody else has made. A rule of thumb is to use force push only on your own feature branches, and only when absolutely necessary.
When is it fine to rewrite Git history?
Put simply, as long as we haven’t yet published our branch for somebody else’s use. As long as our changes stay local, we’re free to do as we please – provided we take responsibility for the data we manipulate. Some commands, such as git reset --hard, can lead to loss of data (which happens quite often, as one can infer from the multitude of Stack Overflow posts on the topic). It is therefore always wise to create a backup branch (or otherwise note down the ref) before attempting such operations.
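Since a branch is just a pointer, such a backup is free. A scratch-repository sketch (branch and file names illustrative):

```shell
#!/bin/sh
# Park a backup branch on the current commit before a destructive reset,
# then use it to recover the "lost" commit.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo
echo v1 > notes.txt; git add notes.txt; git commit -qm "v1"
echo v2 > notes.txt; git commit -qam "v2"
git branch backup/before-reset            # remember where we are
git reset -q --hard HEAD~1                # "lose" the v2 commit
cat notes.txt                             # v1
git reset -q --hard backup/before-reset   # recover: nothing was really lost
cat notes.txt                             # v2
```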
Goof Prevention Patrol
Beyond software solutions, it’s best to enforce team discipline yourself – make fellow developers responsible for their mistakes and let them learn from practice.
Some points worth going over are:
- protecting main/release branches from accidental pushes using restrictions and rules
- establishing a naming convention for branches and commit messages
- using a CI/CD system for additional monitoring – this may help detect repository inconsistencies
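The first point can also be enforced client-side with a pre-push hook. The function below is a sketch of the logic that would go into .git/hooks/pre-push, which receives one line per pushed ref on standard input; the protected branch names are illustrative:

```shell
#!/bin/sh
# Body of a pre-push hook: reject pushes that directly target a
# protected branch. Input lines have the form:
#   <local ref> <local sha> <remote ref> <remote sha>
check_push() {
  while read -r local_ref local_sha remote_ref remote_sha; do
    case "$remote_ref" in
      refs/heads/main|refs/heads/release/*)
        echo "push to $remote_ref rejected: open a pull request instead" >&2
        return 1
        ;;
    esac
  done
  return 0
}
```

Hooks are per-clone and easy to bypass, so they complement rather than replace server-side branch restrictions.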
Also, many Git tools and providers, such as BitBucket, allow you to specify branch restrictions, for example disallowing pushes to a main or release branch without a pull request. This can be extended to matching multiple branches with a glob expression or a regex, which is very handy if your team names branches after a specific pattern.
Summary and further Git reads
Mistakes were made, are made, and will be made. Thankfully, Git is smart enough to include mechanisms for undoing them, for the developers brave enough to use them.
Hopefully this article has resolved some common misunderstandings about revert and when to use it. Should that not prove enough, here are some helpful links:
Cloud-native applications: 5 things you should know before you start
Introduction
Building cloud-native applications has become a direction in IT that companies like Facebook, Netflix or Amazon have already followed for years. The trend allows enterprises to develop and deploy apps more efficiently by means of cloud services, and provides all sorts of run-time platform capabilities like scalability, performance and security. Thanks to this, developers can spend more time delivering functionality and speeding up innovation. What leaves the competition behind more effectively than introducing new features at global scale, in line with customer needs? You either keep up with the pace of the changing world or you don’t. The aim of this article is to present and explain the top 5 elements of cloud-native applications.
Why do you need cloud-native applications in the first place?
It is safe to say that the world we live in has gone digital. We communicate on Facebook, watch movies on Netflix, store our documents on Google Drive and at least a certain percentage of us shop on Amazon. It shouldn’t be a surprise that business demands are on a constant rise when it comes to customer expectations. Enterprises need a high-performance IT organization to be on top of this crowded marketplace.
Throughout the last 20 years, the world has witnessed an array of developments in technology as well as in people & culture. All these improvements – from automation and continuous integration & delivery to DevOps and microservice architecture patterns – took place in order to deliver software faster. But still, teams quite frequently have to wait for infrastructure to become available, which significantly slows down the delivery line.
Some try to automate infrastructure provisioning or make an attempt towards DevOps. However, if the delivery of the infrastructure relies on a team that works remotely and can’t keep up with your speed, automated infrastructure provisioning will not be of much help. The recent rise of cloud computing has shown that infrastructure can be made available at a nearly infinite scale. IT departments are able to deliver their own infrastructure just as fast as if they were doing their regular online shopping on Amazon. On top of that, cloud infrastructure is simply cost efficient, as it doesn’t need tons of capital investment in advance. It represents a transparent pay-as-you-go model. Which is exactly why this kind of infrastructure has won its popularity among startups or innovation departments where a solution that tries out new products quickly is a golden ticket. So, what IS there to know before you dive in?
The ingredients of cloud-native apps, and what makes them native?
Now that we have explained the need for cloud-native apps, let us shed some light on the definition, especially because understanding of the term doesn’t always go hand in hand with its popularity. Cloud-native apps are applications that have been built to live and run in the cloud. If you want a better understanding of what that means, read on. There are 5 elements, divided into 2 categories (excluding the application itself), that are essential for creating cloud-native environments.

Cloud platform automation
In other words, the natural habitat in which cloud-native applications live. It provides the services that keep the application running and manage security and networking. As stated above, the flexibility of such a cloud platform, and its cost-efficiency thanks to the pay-as-you-go model, is perfect for enterprises that don’t want to pay through the nose for infrastructure from the very beginning.
Serverless functions
These are small, single-purpose functions that allow you to build asynchronous, event-driven architectures alongside microservices. Don’t let the "serverless" part of the name mislead you, though: your code WILL run on servers. The difference is that it’s the cloud provider’s job to spin up instances to run your code and scale out under load. Cloud providers also offer plenty of managed services that cloud-native apps can use – IoT, Big Data or data storage services, often built from open-source solutions. This means you don’t have to operate a complex platform yourself; you can focus on the functionality itself rather than the pains of installation and configuration.
Microservices architecture pattern
The microservices architecture pattern aims to provide self-contained software products, each implementing a single-purpose function, that can be developed, tested and deployed separately to speed up the product’s time to market – which is exactly what cloud-native apps offer.
Once you get down to designing microservices, remember to do it in a way that lets them run cloud-native. Most likely, one of the biggest challenges you will come across is the ability to run your app across distributed application instances. Luckily, there are the 12-factor app design principles, which you can follow with peace of mind. These principles will help you design microservices that can run cloud-native.
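As a tiny illustration of one of those factors (factor III, configuration), an instance should take everything environment-specific from environment variables supplied by the platform rather than from files baked into the build; the variable names below are illustrative:

```shell
#!/bin/sh
# Start-up check in the 12-factor style: all environment-specific
# configuration comes in through environment variables.
start_app() {
  : "${PORT:?PORT must be set by the platform}"
  : "${DATABASE_URL:?DATABASE_URL must come from a bound backing service}"
  echo "starting app on port $PORT"
}
```

Because every instance reads the same variables, the same build can run unchanged across any number of instances and environments.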

DevOps culture
The journey to cloud-native applications comes not only with a change in technology, but also with a culture change. Following DevOps principles is essential for cloud-native apps. Getting the whole software delivery pipeline to work automatically will only be possible if development and operations teams cooperate closely. Development engineers are in charge of getting the application to run on the cloud platform, while operations engineers handle the development, operation and automation of that platform.
Cloud reliability engineering
And last but not least – cloud reliability engineering, which comes from Google’s Site Reliability Engineering (SRE) concept. It is based on approaching platform administration the same way as software engineering. As stated by Ben Treynor, VP of Engineering at Google: "Fundamentally, it’s what happens when you ask a software engineer to design an operations function (…). So SRE is fundamentally doing work that has historically been done by an operations team, but using engineers with software expertise, and banking on the fact that these engineers are inherently both predisposed to, and have the ability to, substitute automation for human labor."
Treynor also notes that the roles of many operations teams are similar nowadays, but the way SRE teams work differs for several reasons. SRE people are software engineers with solid development skills and a proclivity for automation. They know enough about programming languages, data structures, algorithms and performance. The result? They can build software that replaces manual operations work more effectively.
Summary
Cloud-native applications are one of the reasons why big players such as Facebook or Amazon stay ahead of the competition. This article has summed up the most important factors that comprise them. Of course, the process of building cloud-native apps goes far beyond choosing the tools; the importance of your team (people & culture) can never be stressed enough. So what are the final thoughts? Advocate DevOps and follow cloud reliability engineering principles to make your company efficient and your platform more reliable. You will be surprised at how fast cloud-native will help your organization become successful.
Challenges in migrating legacy .NET apps into Pivotal Cloud Foundry – a customer case study
Migrating legacy .NET apps to Pivotal Cloud Foundry poses many challenges for most developers and companies. In this article, I will walk you step by step through the migration process based on my own experience and show you how to do it.
My first objective is to show that the migration is possible and requires only a handful of changes in the application code base. The second is to share the knowledge I gained while working on the project. Let’s forget that .NET Core even exists and discuss only the world of the .NET Framework; this article is about migrating existing ASP.NET Web Forms applications to Cloud Foundry.
One of the 3rd-party libraries worked only with Windows Server 2012R2, so we could not use Docker, which is supported only on Windows Server 2016; this made the migration more complex. Fortunately, with some workarounds, the project was successfully completed.
The first goal of the migration was to implement full Continuous Integration targeting Cloud Foundry, including building the application, running unit tests, deploying the application to the development environment on Cloud Foundry, and finally running acceptance tests against that environment.
Another requirement was to allow the operator to easily create a release package from the last version that passed the acceptance tests, which could then be deployed to all environments, such as QA, UAT and Production. The final goal was to implement blue-green deployment in production.
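The blue-green part boils down to a handful of route operations with the cf CLI. The sketch below is a hedged illustration, not the project's actual script: the app, domain and host names are invented, and the exact flags can differ between cf CLI versions.

```shell
#!/bin/sh
# Route-based blue-green switch on Cloud Foundry: push the green app next
# to the running blue one, move the public route over, and keep blue
# stopped for a quick rollback.
blue_green_switch() {
  blue=$1; green=$2; domain=$3; host=$4
  cf push "$green" -f manifest.yml
  cf map-route "$green" "$domain" --hostname "$host"
  cf unmap-route "$blue" "$domain" --hostname "$host"
  cf stop "$blue"
}
```

Rolling back is then just mapping the route back to the blue app and restarting it.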
After setting our goals, it was time to review the existing process and check what we could reuse from the existing infrastructure and processes. Since all steps were manual, nothing could be reused: the build was made on a developer machine and all deployments to all production instances were manual. The only good thing was that the application had already been scaled vertically, which gave some hope of reducing the amount of necessary refactoring.
Once we had become familiar with the process, it was time to get to know the application itself. It turned out that the application was very mature: it was built with ASP.NET Web Forms, and some parts of the code dated back to .NET 1.1. The application requires the Oracle provider, Crystal Reports and a custom archiving tool installed on the machine. Unfortunately, despite our hopes, it is not sessionless; the session is stored in the database.
The next identified problem was the use of Active Directory: one of the web applications in the solution used Windows identity. Finally, we learned that some of the sites used mutual authentication and the servers had private keys loaded. With all requirements gathered, we could begin the migration.
The process of migrating legacy .NET apps
The first important decision was the choice of CI/CD tools. In the end, our choice was narrowed down to two: TFS and Concourse.
TFS offers an integrated work tracker, very good support for NuGet and Active Directory integration. Unfortunately, the biggest disadvantage of TFS on-premise is the lack of support for describing the CI/CD process as code or scripts (i.e. YAML files). VSTS already has such support for CI, so there is real hope that the on-premise version (TFS) will get it too.
Concourse’s indisputable advantage is its use of Docker. Each time a pipeline starts, a new container instance is created, so we can be sure our build environment is immutable. Concourse provides native support for Cloud Foundry (a resource type) and lets us use the same service for authorization as Cloud Foundry. Unlike TFS, it allows us to connect the process to several resources (Git repositories) and to observe them and trigger builds when any of them changes. The biggest advantage, however, is the very simple representation of pipelines in saved YAML files. As for the drawbacks, the biggest is the lack of a NuGet resource, so we cannot automatically launch the pipeline when a new package version appears.
After considering all the pros and cons, we chose Concourse. The next step was to prepare Cloud Foundry and Concourse to work with our application, and since we were forced to use Windows Server 2012R2, we needed to prepare Windows images for Concourse and PCF.
Concourse:
Uploading MSBuild, Web Deploy, and all third-party dependencies. For the PCF image, all we needed were the third-party dependencies. Once we got through the entire setup process and chose all the tools, we could start building the solution with Concourse. Unfortunately, at the very beginning it turned out that the application needed cleaning and a small refactoring: the dependencies were so tangled that the application would not build on a clean machine and required the developer to configure the machine in a very specific way, which was unacceptable if we wanted to use Concourse.
Another problem was the multitude of MSBuild versions. Microsoft unfortunately did not make this easy, so we had to create custom variables in the Concourse image to hold the paths to the individual MSBuild versions. And finally, we come to the reason for installing IIS on the Concourse image.
After deploying the application, the warm-up took a very long time, which made it difficult to run acceptance tests and made vertical scaling less smooth than we desired. We therefore decided to pre-compile the ASP.NET pages during the build process, which is what required IIS. Unfortunately, pre-compilation did not turn out to be as fast as we had assumed, because all pages were parsed, including those currently disabled by a feature toggle and not in use at the time.
We had reached the point at which the application was ready to be uploaded to our PaaS. To run ASP.NET applications in Cloud Foundry, we needed the HWC buildpack, which uses the Microsoft Hosted Web Core API to run its own IIS instance and host a single application in a single web domain.
To perform a Cloud Foundry push, we needed a deployment manifest written in YAML. Of course, none of us wanted to hardcode this file in the application code; we wanted to create it dynamically based on environment variables. Unfortunately, after a day of research it turned out that the only PowerShell module that was maintained and supported at the time did not provide sufficient functionality, so we used Python to create the YAML files. With the manifest prepared, uploading the application was a mere formality which, thanks to the good integration between Concourse and CF, went smoothly and without problems. After uploading the application to Cloud Foundry, however, basic tests showed that the application did not work and some refactoring was required.
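As a sketch of that idea, generating a simple manifest from environment variables takes only a few lines of Python. This is not the project's actual script; the variable names (APP_NAME, APP_MEMORY, APP_INSTANCES) and the emitted fields are assumptions for illustration:

```python
# Sketch: build a minimal Cloud Foundry manifest.yml from environment
# variables, so the same application code can be pushed to any environment.
# In a pipeline you would pass os.environ; here we use an explicit dict.

def render_manifest(env):
    """Return manifest.yml content built from a dict of env variables."""
    lines = [
        "applications:",
        "- name: {}".format(env["APP_NAME"]),
        "  memory: {}".format(env.get("APP_MEMORY", "512M")),
        "  instances: {}".format(env.get("APP_INSTANCES", "2")),
        "  stack: windows2012R2",          # assumed Windows stack name
        "  buildpack: hwc_buildpack",
    ]
    return "\n".join(lines) + "\n"

example_env = {"APP_NAME": "legacy-web", "APP_MEMORY": "1G"}
print(render_manifest(example_env))
```

In the pipeline, the output would then be written to manifest.yml and handed to cf push -f manifest.yml.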
Refactoring
The first problem appeared with more than two instances of the application: ViewState decryption failed after a postback whenever the user, after submitting a form, was routed to another instance. To fix it, we were forced to set the machineKey at the application level. After the application had been deployed and the first manual tests had passed, it was time to add automated tests. For this we used Selenium and headless Chrome.
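For reference, the fix boils down to pinning a shared machineKey in web.config so that every instance can decrypt ViewState produced by any other instance. The key values below are placeholders; generate your own:

```xml
<!-- web.config: identical on every instance of the application -->
<system.web>
  <machineKey
      validationKey="PLACEHOLDER-GENERATED-VALIDATION-KEY"
      decryptionKey="PLACEHOLDER-GENERATED-DECRYPTION-KEY"
      validation="HMACSHA256"
      decryption="AES" />
</system.web>
```

Without this, each instance auto-generates its own keys, and any postback that lands on a different instance fails.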
During the implementation of the tests, we did not run into any major problems. The only things worth mentioning are a problem with the "UpdatePanel" control, which could very easily be worked around with custom JavaScript, and a problem with checkboxes: in headless mode the Select method did not work and needed a workaround of sending the space bar character. Selenium was the reason Chrome had to be installed on the Windows image for Concourse.
Another piece of code that required refactoring was the usage of RequestContext. Keep in mind that the router changes the request quite significantly (for example, the ports), so redirects or the famous ASP.NET ~ (tilde) would not work properly. We also had to change all the places in the application that used LocalMachine as a store (e.g., for certificates); in our case it was enough to switch to CurrentUser (PCF always creates a new user for each application instance, and we took advantage of this fact for certificates).
The biggest problem, which unfortunately could not be fixed, was Windows Identity, mainly because there is no way to use this form of authentication in Cloud Foundry. The only workaround is to use Forms Authentication and communicate with AD using the LDAP protocol. The final change in the application was creating a custom health endpoint, because ASP.NET Web Forms has no ready-made solutions for monitoring and debugging applications.
However, you must remember that all pages, including our health endpoints, will be unavailable if there is any problem in global.asax, web.config, or the handlers, so debugging this type of problem must be done outside the application.
After implementing the tests, we could say that we had the first phase of migration done with a working application on CF - the first successful milestone.
Second milestone
It was time for milestone two. The application was working, but only in one environment, and we wanted our applications to be deployed to many different environments, with the configuration stored in environment variables.
Unfortunately, ASP.NET, unlike ASP.NET Core, does not allow you to configure the application after it has started. The web.config file must be ready at the moment the application launches, and if you change it, for example from global.asax, the application restarts.
We saw two ways of handling this issue:
- A massive application refactor to limit or eliminate web.config files (which would be very time-consuming, or nearly impossible in the case of sessions or certificates).
- Preparing the entire environment and the necessary files (web.config, certificates) before starting the HWC server.
We tested both approaches with a proof of concept. Option number two, a custom buildpack, was the winner.
Our custom buildpack is nothing more than a set of PowerShell scripts that run before HWC and prepare the environment and the application. On Windows Server 2012, Cloud Foundry is not able to use Docker containers to separate application instances; instead, it creates a separate user for each application instance.
Steps:
The first step is to read and parse all the configuration from environment variables and download the necessary files from the Git server (credit to TFS here for exposing a highly available HTTP API).
Part of the configuration kept in Git consists of xdt (XML Document Transform) files, which we used to create the appropriate entries in our web.config file.
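A typical transform file looks roughly like this; the connection-string name and value are illustrative, not taken from the actual project:

```xml
<!-- Web.QA.config: an xdt transform applied by the buildpack scripts
     to produce the final web.config for the QA environment -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <add name="MainDb"
         connectionString="Server=qa-sql;Database=LegacyApp;Integrated Security=False"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
</configuration>
```

One such file per environment lets the same application code pick up the right settings at staging time.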
After preparing the web.config file, the scripts install the necessary certificates in the user store and map a network drive, of course in the context of that user.
These few simple workarounds allowed us to separate the application code from the environment configuration.
Once the configuration was separated, we could easily take care of releasing the application and deploying it to all environments: QA, UAT, and production. All that remains is to set up correct versioning (we used semantic versioning), create separate tests that can be run in production, and prepare blue-green deployment scripts that, for example, switch the routing once the newly created environment proves stable and then remove the previous version.
The application has been migrated, it is running on many instances and is independent from the environment, but can we call it cloud-native already?
Unfortunately not; some more work is required to make it happen. The most important changes still ahead, which can be made at a small cost in ASP.NET Web Forms, are adding telemetry and systematizing the logs.
Another is adding Sonar to the continuous integration pipeline. The entire solution consists of 17 projects (including two web applications) that depend on each other, so any change forces a redeployment of all the applications. It should therefore be broken down into a few smaller solutions, with the shared libraries stored in NuGet.
Finally, the database is not the best or most efficient place to store sessions, so the absolute minimum is to move to Redis and avoid sessions altogether in new features. New features should also be built as separate microservices.
Summary
To sum up, migrating legacy .NET applications to Pivotal Cloud Foundry can pose plenty of challenges that even the most experienced specialists may have to face and tackle. I hope that the process that I presented and the experience that I shared will help many of you overcome them.
Native bindings for JavaScript - why should I care?
At the very beginning, let’s make one important statement: I am not a JavaScript Developer. I am a C++ Developer, and with my skill set I feel more like a guest in the world of JavaScript. If you’re also a Developer, you probably already know why companies today are on the constant lookout for web developers - the JavaScript language is flexible and has a quite easy learning curve. Also, it is present everywhere from computers, to mobile devices, embedded devices, cars, ATMs or washing machines - you name it. Writing portable code that may be run everywhere is tempting and that's why it gets more and more attention.
Desktop applications in JavaScript
In general, JavaScript is not the first or best fit for desktop applications. It was not created for that purpose, and it lacks the GUI libraries of C++, C#, Java, or even Python. Even so, it has found its niche.
Just take a look at the Atom text editor. It is based on Electron [1], a framework for running JS code in a desktop application. The tool internally uses the Chromium engine, so it is more or less a browser in a window, but it is still quite an impressive achievement to have the same codebase for Windows, macOS, and Linux. This matters for teams that want to use agile processes, especially because it is really easy to start with an MVP and ship incremental updates with new features often, as it is all the same code.
Native code and JavaScript? Are you kidding?
Since JavaScript works for desktop applications, one may think: why bother with native bindings, then? Before you draw that conclusion, consider that there are a few good reasons, usually related to performance:
- JavaScript is a scripted, virtualized language. In some scenarios, it may be an order of magnitude slower than its native-code equivalent [2].
- JavaScript is garbage-collected and has a hard time with memory-intensive algorithms and tasks.
- Access to hardware or native frameworks is sometimes only possible from native, compiled code.
Of course, this is not always necessary. In fact, you may happily live your developer life writing JS code on a daily basis and never find yourself in a situation where creating or using native code in your application is unavoidable. Hardware performance keeps improving, and often there is no need even to profile an application. On the other hand, once it happens, every developer should know what options are available.
The strange and scary world of static typing
If a JavaScript developer who uses Electron finds out that his great video-encoding algorithm cannot keep up with the framerate, is that the end of his application? Should he rewrite everything in assembler?
Obviously not. He might use a native library to do the hard work and leverage an environment without a garbage collector, with fast access to the hardware - maybe even SIMD SSE extensions. The question many may ask is: but isn't that difficult and pointless?
Surprisingly not. Let's dig deeper into the world of native bindings, where, in contrast to JavaScript, you often have to specify the type of a variable or a function's return value in your code.
First of all, if you want to use Electron or any framework based on NodeJS, you are not alone - you have a small friend called "nan" (if you have seen "wat" [3], you are probably already smiling). Nan is a "native abstraction for NodeJS" [4], and thanks to its very descriptive name it is enough to say that it makes creating add-ons for NodeJS easy. As with literally everything today, you can install it using npm:
$ npm install --save nan
Without getting into boring technical details, here is an example of wrapping a C++ class into a JavaScript object:
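The listing below is a minimal sketch of such a wrapper, following the standard Nan::ObjectWrap pattern from the nan documentation; the Counter class and its increment method are made up for illustration:

```cpp
// counter.cc - a sketch of exposing a C++ class to JavaScript via nan.
#include <nan.h>

class Counter : public Nan::ObjectWrap {
 public:
  static NAN_MODULE_INIT(Init) {
    v8::Local<v8::FunctionTemplate> tpl = Nan::New<v8::FunctionTemplate>(New);
    tpl->SetClassName(Nan::New("Counter").ToLocalChecked());
    tpl->InstanceTemplate()->SetInternalFieldCount(1);
    Nan::SetPrototypeMethod(tpl, "increment", Increment);
    Nan::Set(target, Nan::New("Counter").ToLocalChecked(),
             Nan::GetFunction(tpl).ToLocalChecked());
  }

 private:
  int value_ = 0;

  // Called from JS as `new Counter()`
  static NAN_METHOD(New) {
    Counter* obj = new Counter();
    obj->Wrap(info.This());
    info.GetReturnValue().Set(info.This());
  }

  // Called from JS as `counter.increment()`, returns the new value
  static NAN_METHOD(Increment) {
    Counter* obj = Nan::ObjectWrap::Unwrap<Counter>(info.Holder());
    info.GetReturnValue().Set(++obj->value_);
  }
};

NODE_MODULE(counter, Counter::Init)
```

After building with node-gyp, the JavaScript side would use the addon like any other module, for example: const { Counter } = require('./build/Release/counter'); new Counter().increment();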

Nothing really fancy, and the code isn't too complex either - just a thin wrapping layer over the C++ class. Obviously, for bigger frameworks there will be much more code and more problems to solve, but in general we now have a native framework bound to the JavaScript application.
Summary
The bad part is that we unfortunately need to compile the code for each platform we target - and by each platform we usually mean Windows, Linux, macOS, Android, and iOS. Also, while it is not rocket science, for JavaScript developers who have never had to compile code it may prove difficult.
On the other hand, we have a not-so-perfect but working solution for writing applications that run on multiple platforms and handle even complex business logic. Programming has never been a perfect world; it has always been a world of trade-offs, beautiful solutions that never work, and refactoring that never happens. In the end, when you go looking for yet another compromise between your team's skills and the client's requirements for your next desktop or mobile application framework, you might consider using JavaScript.
How to run any type of workload anywhere in the cloud with open source technologies?
Nowadays, enterprises have the ability to set up and run a mature and open source cloud environment by bundling platform technologies like Cloud Foundry and Kubernetes with tools like BOSH, Terraform and Concourse. This bundle of open source cloud native technologies results in an enterprise grade and production-ready cloud environment, enabling enterprises to run any type of workload anywhere in an economically efficient way. Combining that with Enterprise Support from an experienced Enterprise IT Cloud solution provider gives enterprises the ideal enabler to achieve their digital transformation goals.
Reference architecture of an open source cloud environment
The reference architecture of an open source and enterprise grade cloud environment should at minimum contain these core elements:
- Infrastructure provisioning
- Release Management & Deployment
- Configuration Management
- Container based Runtime
- Orchestration & Scheduling
- Infrastructure & Application Telemetry software
- Monitoring & Analytics
- Application Deployment Automation
- Back-ups & Disaster Recovery
How to start?
For all the elements listed above, there are multiple open source technologies to choose from that one needs to be familiar with. The 'Cloud Native Landscape' from the Cloud Native Computing Foundation is an elaborate and up-to-date overview of the available open source cloud native tools.
Navigating through this landscape and the multitude of tools to choose from is not an easy task. With a dedicated team and an agile approach, every company is able to set up its own open source cloud environment. Nevertheless, bundling these technologies in a correct and seamless way is a complex undertaking. It takes an experienced and skilled partner to help enterprises simplify, automate and speed up the setup process, relying solely on the same open source technologies. This support provides additional security and mitigates risk.
Benefits
Having a single cloud environment is a big enabler for DevOps adoption within enterprises. It provides development teams with a unified platform, giving them space to focus solely on building valuable applications. Ops teams manage a homogeneous environment by using a tool like BOSH for release management and deployment for both Application Runtime and Container Runtime.
Technologies like Cloud Foundry and Kubernetes are independent of the underlying infrastructure, making it possible to utilize any combination of public and/or private cloud as your infrastructure layer of choice. At the end of the day, you shouldn't underestimate the economic value here: open source technologies backed by large user communities and the largest technology companies in the world make having your own open cloud environment a very compelling option.
In-house vs Managed
In light of the growing complexity associated with managing DevOps, enterprises look to partners to provide managed services for their cloud environments. The main reason is that all the different stack components of a mature cloud environment can be complex for many IT organizations to deploy and manage on their own. With the frequent updates required across the components of the bundle, the average IT organization is not well suited to keep pace without help from a partner.
Having a partner provide managed services for the cloud environment creates additional benefits: reduced complexity and more resources allocated to areas that create direct business value.
Personally, I’m excited to be a close witness and participant of this growing trend and look forward to seeing more enterprises run their production workloads on open source technologies.
Cloud native: What does it mean for your business?
We witness how the world of IT constantly changes. Today, like never before, it is more often defined as “being THE business” rather than just “supporting the business”. In other words, the conventional application architectures and development methods are slowly becoming inadequate in this new world. Grape Up, playing a key role in the cloud migration strategy, helps Fortune 1000 companies make a smooth transition. We build apps that support the business itself, we advocate the agile methodology, and implement DevOps to optimize performance.
To clarify the idea behind cloud native technologies, we’ve put together the most important insights to help you and your team understand the essentials and benefits of Cloud Native Applications:
Microservices architecture
First and foremost, one must come to terms with the fact that traditional application architecture means complex development, long testing, and new features released only at fixed intervals. The microservices approach is nothing like that: it deconstructs an application into many functional components, and these components can be updated and deployed separately without any impact on other parts of the application.
Depending on its functionality, every microservice can be updated as often as needed. For instance, a microservice that contains functionalities of a dynamic business offering will not affect other parts of the app that barely change at all. Thanks to this, an application can be developed without changing its fundamental architecture. Gone are the days when IT teams had to alter most of the application just to change one piece.
Operational design
One of the biggest issues our customers face before migration is the burden of moving new code releases into production. With monolithic architectures that combine all the code into one executable, every new release requires deploying the entire application. Because the production environment isn't the same as the development environment, it often becomes impossible for developers to detect potential bugs before the release. Also, testing new features without moving the whole environment to the new app version can become tricky, which in turn complicates releasing new code. Microservices solve this problem perfectly: since the environment is divided, changes in code are isolated to separate executables. Thanks to this, updates do not affect the rest of the application, which is what clients are initially concerned about.
API
One of the indisputable advantages of microservices, which outweighs all traditional methods, is the fact that they communicate by means of APIs. With that said, you can release new features step by step, alongside a new API version. And if any failures appear, you can shut off access to the new API while the previous version of your app is still operational. In the meantime, you can work on the new feature.
DevOps
At Grape Up, we often work on on-site projects alongside our clients. On the first day of the project, we are introduced to multiple groups that are in charge of various parts of the app’s lifecycle such as operations, deployment, QA or development. Each of them has its own processes. This creates long gaps between tasks being handed over from one group to another. Such gaps result in ridiculously long deployment time frames which are very harmful to an IT business, especially when frequent releases are more than welcome. To efficiently get rid of these obstacles and improve the whole process, we introduce clients to DevOps.
By and large, DevOps is nothing else but an attempt to eliminate the gaps between IT groups. It's an engineering culture that Grape Up experts teach clients to use. If followed properly, they are able to transform manual processes into automated ones and start getting things done faster and better. The most important thing is to find the pain point in the application's lifecycle. Let's say the QA department doesn't have enough resources to test software and delays the entire process. A solution can be either to migrate testing to a cloud-based environment or to put developers in charge of creating tests that analyze the code. By doing so, the QA stage can take place simultaneously with the development stage, not after it. And this is what it takes to understand DevOps.
The transition to cloud-native software development is no longer an option, it is a necessity. We hope that all the reasons mentioned above prompted you to embark on a journey called "Cloud-Native", a promising opportunity for your company to grow in the years to come. And even if you're still feeling hesitant, don't think twice. Our expertise combined with your vision can be a great start into a brighter future for your enterprise.
A slippery slope from agile to fragile: treat your team with respect
"Great things in business are never done by one person, they're done by a team of people."
said Steve Jobs. Now imagine if he hadn't had such an Agile approach. Would the iPhone ever have existed?
Agile development is a set of methods and practices where solutions evolve through collaboration between cross-functional teams. It is also pictured as a framework that helps to 'get things done' fast. On top of that, it helps to set a clear division of who is doing what, to spot blockers, and to plan ahead. We've all heard of this; most of us even know the definition of Agile by heart. In the last few decades, Agile has become the approach for modern product development. But despite its popularity, it is still misinterpreted quite often and its core values tend to get abused. This misinterpretation has become so common that it has earned a name of its own: frAgile. In other words, it is what your product and team become if you don't follow the rules.
The thin line between Agile and frAgile
Working by the principles of Agile means you are flexible and able to deliver your product the way the customer wants it, and on time. By and large, Agile teaches us how to work smarter and how to eliminate all barriers to working efficiently. However, there are times when the attempt to follow Agile isn't taken with enough care and the whole plan fails - just like trying to keep your balance the first time you're on ice skates.
With that said, I will explain step by step a few examples of how Agile can quickly and irreversibly become something it should never have been in the first place. Later on, I will list tips on how to avoid stepping onto that slippery slope. So let's take a look at the examples:
Your technical debt is going through the roof
Just like projects, sprints are used to accomplish a goal. Quite often though, when the sprint is already running, new decisions and changes keep flowing in. As a result, your team keeps restarting work over and over again and works around the clock. Does it sound familiar?
Unfortunately, if this situation continues, everyone gets used to it and it becomes the norm. It usually leads to the biggest pile of technical debt you could ever imagine. Combine it with an endless list of defects caused by the lack of stability needed to nurture the code, and you are doomed to failure.
You should always respond to change wisely. Although the Agile methodology embraces change and advocates flexibility, you shouldn't overdo it - especially not with changes that impact your sprint on a daily basis. Every bigger change ought to be discussed between sprints and be based on the feedback received from users.
A big fish leaves the team and the project falls apart
Another, not so fortunate, thing that can happen is a team member leaving in the middle of a long-term, complex project that involves more unstructured processes than meet the eye. With the job rotation in the contemporary world of IT, it happens all the time.
Once a Product Owner or a Team Leader is gone, none of the team members will be able to properly describe the system's behavior and what should be delivered. As a result, deadlines will be missed and you will be left chasing dreams about the quality of your product.
Find the balance between individuals and processes. And most importantly, never underestimate how your scope of work is documented and how the team is managed. Otherwise, after an important team member is gone, the rest will be left in a difficult position. So prepare them for such events. Take your time to estimate what might hold your team back and what is absolutely necessary for the fast pace of world-class engineering.
The project is nearly finished but your customer is nowhere near Agile
You would be surprised at how many companies out there are Agile… in their dreams. By appreciating the flexibility that Agile gives, they often confuse Agile working with flexible working and still keep the waterfall methodology in their minds. This can be spotted especially when someone overuses terms like sprint or scrum. In reality, actions speak louder than words, and one doesn't have to show off their "rich" vocabulary.
Therefore, if you agree on a strict scope of delivery in your contract, you might regret it later on. We all know that the reason IT is all about Agile is that plans tend to change. The only problem is that the list of features in the contract doesn't. If the customer doesn't fully understand the values of Agile, the business relationship can be put at risk.
Prioritize the collaboration with your client over contract negotiations. Focus on clear communication from the beginning and make sure that your client grasps the principles of Agile. Also, if along the way any unexpected changes to the established scope of work appear, make sure to carry them out in front of your client’s eyes.
Save me from drowning in frAgility
With all the above, here is how you can avoid messing up your work environment:
- Prepare user stories before planning the sprint. You will thank yourself later. If written collaboratively as part of the product backlog grooming process, they will leverage creativity and the team's knowledge. Not only will this reduce overhead, but it will also accelerate product delivery.
- Be careful with changes during the sprint. Or simply - avoid abrupt changes. Thanks to this, your code base will have its necessary stability for performance.
- Turn yourself into a true Agile evangelist. Face reality that not everyone understands the core values of the world’s most beloved methodology – not even your customers. So even if someone tells you that they use Agile, take it with a pinch of salt. Strong business partnerships are built upon expectations that are clear to both sides.
At Grape Up, we follow the principles behind the Agile Manifesto for software development. We believe that business people and developers must work together daily throughout the project. It helps us and our clients achieve mutual success on their way to becoming cloud-native enterprises.


