Building an easy to maintain, reusable user interface on macOS

Anton Prokopets, Delivery Manager
October 21, 2025 • 5 min read

The value of the user interface

What you see inside an app is called the user interface (UI). Undoubtedly, it is an important part of every piece of software, and it’s no piece of cake to wrap business logic in a polished, self-explanatory, and efficient visual representation.

How many times in your life have you heard not to judge a book by its cover? Probably quite a few. And how many times have you left an application and never come back just because you didn’t like the design at first sight? Probably more than just a few.

At Grape Up, we achieve a successful UI through close cooperation between our software engineers (SE) and designers. This article is about the software engineer’s part – programming UI for the macOS platform.

Programming user interface in Cocoa

Cocoa is the name given to the Application Programming Interface (API) used for building Mac apps. It is fully responsible for the appearance of apps, their distinctive feel, and their responsiveness to user actions. It automates and simplifies many aspects of UI programming to comply with Apple’s Human Interface Guidelines, and it gives software engineers several tools for UI development:

  • Storyboards
  • XIBs
  • Custom code

Each of them makes it possible to implement not only a fully functional but also a genuinely useful and captivating UI. Having used all three of them at Grape Up, we could give you a comparative analysis of each tool, but that is not the focus of this article.

Why is it better to use XIBs and Storyboards?

Of course, it would be wrong to state that XIBs or Storyboards are simply better than custom code. As always, there is no “silver bullet” – no universal approach or solution for every software development problem. On some occasions the Interface Builder (IB) won’t help you achieve the desired appearance. However, years of hands-on experience backed by numerous projects have led us to the following conclusions as to why it’s better to use the Interface Builder on a regular basis:

It’s easy to understand and update

Let’s say that a software engineer needs to deliver an update to an existing container view with a number of subviews. In which case will they spend less time getting familiar with the existing solution and applying the update: when using a XIB with visualized UI elements and layout constraints, or when the UI is scripted in several hundred lines of code?
As another example, imagine a set of very similar yet slightly different views. The software engineer needs to extend this set with another view. In which case are they more likely to reuse the existing solution instead of duplicating it? These are just a few examples of regular SE tasks.

In practice, most of the time you would expect the UI to be available in a XIB/Storyboard, not in the code. We can consider XIBs/Storyboards a public high-level API which encapsulates all the details underneath. An IB design provides a better general overview and decreases the chance of missing something.

One of the biggest arguments against XIBs is merge conflicts. Usually, they are caused by not paying enough attention to decomposing tasks during the project. This is exactly why we practice task decomposition, a project technique that breaks the workload down into smaller tasks before the actual work structure is created. Thanks to this, we save a lot of time in the long run. By using IB, you can create UI elements as granular as you want. Moreover, this is the proper way to target a reusable and easy-to-update user interface.

It helps avoid mistakes and saves time

Another great advantage of the Interface Builder is that software engineers can skip plenty of the so-called “dirty work”: object initialization, configuration, basic constraint management, and so on. The UI can be implemented faster and with less code. By using IB, the engineer simply follows the rule that less code produces fewer bugs.
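
For comparison, below is a rough sketch of what that “dirty work” looks like when even a simple view is assembled entirely in code (GreetingView and its contents are hypothetical, not taken from any project mentioned here). In IB, the same result is a few drags and constraint pins.

import AppKit

// A minimal sketch of the boilerplate Interface Builder takes care of for you:
// creating objects, configuring them, and wiring up Auto Layout constraints by hand.
final class GreetingView: NSView {
    private let titleLabel = NSTextField(labelWithString: "Hello")
    private let continueButton = NSButton(title: "Continue", target: nil, action: nil)

    override init(frame frameRect: NSRect) {
        super.init(frame: frameRect)
        setUpSubviews()
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
        setUpSubviews()
    }

    private func setUpSubviews() {
        titleLabel.translatesAutoresizingMaskIntoConstraints = false
        continueButton.translatesAutoresizingMaskIntoConstraints = false
        addSubview(titleLabel)
        addSubview(continueButton)

        NSLayoutConstraint.activate([
            titleLabel.topAnchor.constraint(equalTo: topAnchor, constant: 16),
            titleLabel.leadingAnchor.constraint(equalTo: leadingAnchor, constant: 16),
            continueButton.topAnchor.constraint(equalTo: titleLabel.bottomAnchor, constant: 8),
            continueButton.leadingAnchor.constraint(equalTo: leadingAnchor, constant: 16),
            continueButton.bottomAnchor.constraint(equalTo: bottomAnchor, constant: -16)
        ])
    }
}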

It generates warnings and errors

The Interface Builder generates warnings and errors when working with the Auto Layout engine: unsatisfiable layouts, ambiguous layouts, logical errors. The raised issues are quite descriptive and self-explanatory, and in many cases the IB even suggests a solution. Thanks to these diagnostics, it is a lot easier to prepare a proper and functional UI before even building the application, which is much harder to achieve when programming the UI in code. With that said, the IB is a great tool for both creating and testing the UI.
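
When the UI lives entirely in code, comparable checks only happen at runtime. A small sketch of such a runtime audit (auditLayout is a hypothetical debug helper, not part of any framework) could look like this:

import AppKit

// Hypothetical debug helper: without IB's design-time diagnostics, ambiguity in a
// programmatically built view hierarchy can only be detected after layout has run.
func auditLayout(of rootView: NSView) {
    rootView.layoutSubtreeIfNeeded()

    #if DEBUG
    if rootView.hasAmbiguousLayout {
        // Cycles the view through its valid layouts so the ambiguity becomes visible.
        rootView.exerciseAmbiguityInLayout()
        print("Ambiguous layout detected in \(rootView)")
    }
    #endif
}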

It accelerates the development cycle

The next useful feature available in IB is Previews. Thanks to it, the application interface can be previewed in the assistant editor, which displays a secondary editor side by side with the main one. It makes it possible to check the designed Auto Layout on the fly and significantly reduces development cycle time. This feature is especially helpful when supporting multiple localizations.

It helps educate new team members

As we all know, engineers leave and join the team during a project’s lifetime. To make the onboarding process smooth and effective, we introduce new team members to a UI that is organized in XIB files or Storyboards. As a result, they get a complete overview of the application’s functionality. After all, we all know how the saying goes – human brains prefer visual content over textual content.

XIBs and Storyboards extended use cases

Is there a better way of organizing your user interface designs than just keeping them in your Resources folder? Of course – why not organize them in a library or a framework? Having a UI framework with all the custom views (XIBs) along with the controllers that manage them can be quite convenient. Keeping UI elements in a visual representation, and atomic enough, guarantees extensibility, manageability, and reusability. For teams working on big, long-running projects, it can be really useful.
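
As an illustration, a XIB-backed view shipped in such a framework can expose a small loading helper so that client code never touches the nib machinery directly. The sketch below assumes a hypothetical RatingView class with a matching RatingView.xib in the framework bundle:

import AppKit

// A sketch of a reusable, XIB-backed view exposed by a shared UI framework.
// RatingView and RatingView.xib are illustrative names, not real project files.
public final class RatingView: NSView {
    // Loads the view hierarchy defined in RatingView.xib from the framework bundle
    // and returns the top-level RatingView, keeping the nib details encapsulated.
    public static func loadFromNib() -> RatingView? {
        var topLevelObjects: NSArray?
        let bundle = Bundle(for: RatingView.self) // the framework bundle, not the app bundle
        guard bundle.loadNibNamed("RatingView", owner: nil, topLevelObjects: &topLevelObjects) else {
            return nil
        }
        return topLevelObjects?.first(where: { $0 is RatingView }) as? RatingView
    }
}

Client code then only sees RatingView.loadFromNib(); how the view is laid out stays inside the framework and its XIB.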

Conclusion - providing an easy to maintain user interface

The modern world is all about information and the ways of transferring, processing, and selling it. Why struggle with hundreds of lines of layout code when a more convenient way of dealing with the UI is right around the corner? The Interface Builder delivers a quick, convenient, and reliable way of programming your UI, and the time it saves can go into discussions that matter.

Check related articles

Getting up to speed with Android Automotive OS development on Mac

Similar to how they did for the exploding smartphone market over ten years ago, customized infotainment operating systems and open-source software appear to be sweeping the car industry. The Android Automotive OS has been making headway in many market niches, starting with full-electric vehicles like Polestar a few years ago. It’s only a matter of time until the community and ecosystem mature enough to become a serious force for enabling mobile development on yet another front: the cars.

While Android Auto (a name easily confused with the topic I will be going over today) and Apple CarPlay have long been established in the field, they came with several caveats and restrictions. Those largely stem from the fact that many would-be features rely on low-level access to the hardware of the car itself. This proved to be difficult, with both solutions offering a limited set of human-machine interaction capabilities, such as a heads-up display (where available) and radio. With that in mind, there was a clear need for apps targeting the actual OS running the car.

The community and documentation are still in their infancy and don’t yet provide a deep dive into Android Automotive OS. Moreover, the learning curve remains steep, but it’s definitely possible to piece together bits of information related to development and deployment. In this article, I attempt to do just that, all while emphasizing the macOS side of things.

Prerequisites

As a general principle, Android development can either be done on a real device or a corresponding emulator. Given the sensitive nature of granting applications access to the actual car hardware, the app has to go the whole nine yards with Google Play Store eligibility. On top of that, it has to conform to one of several categories, e.g. a media app to be allowed in the AAOS system. The good news is that there’s a possibility for an app to mix and match categories.

Thus, vendors supporting the new ecosystem (as of now, among others, Volvo and Polestar) opted for creating a custom automotive device emulator that closely matches the specifications of the infotainment systems contained within their cars. Regrettably, Polestar and Volvo emulators contain proprietary code, are based on older Android releases, and do not yet support the ARM architecture, which is of special interest to developers working with ARM-based Macs.

While official AAOS emulators are available in Preview releases of Android Studio (from the Electric Eel version onwards), often the task at hand calls for customized hardware and parameters. In this case, a custom Android version would need to be built from source.

Building from source

Building from source code is a time-consuming enterprise that’s not officially supported outside 64-bit Linux platforms (regardless of the target architecture). With that in mind, choosing a dedicated AWS EC2 instance or a bare metal server for building the ARM versions of the emulator seems to be the best overall solution for Mac developers.

A requirement for unofficial builds on Mac devices seems to be having a disk partition with a case-sensitive file system and otherwise following some extra steps. I chose a dedicated build system because, in my opinion, it wasn't worth the trouble to set up an additional partition (for which I didn't really have the disk capacity).

The choice of the base Android release largely depends on target device support; however, for ease of development, I would recommend choosing a recent one, e.g., 12.1 (aka 12L or Sv2). Your mileage may vary with regard to the versions actually supported, as vendors tend to use older and more stable releases.

After getting their hands on a development machine, one should prepare the build environment and follow instructions for building an AVD for Android Auto. The general workflow for building should include:

  1. downloading the source code – this may take up to an hour or two, even with decent connection and branch filtering,
  2. applying required modifications to the source, e.g., altering the default VHAL values or XML configuration,
  3. running the build – again, may take up to several hours; the more threads and memory available, the better,
  4. packing up the artifacts,
  5. downloading the AVD package.

Leaving out the usage specifics of the repo and lunch tools for now, let’s take a look at how we can make the default AAOS distribution fit our needs a little better.

Tailoring a device

VHAL (Vehicle Hardware Abstraction Layer) is an interface that defines the properties for OEMs to eventually implement. These properties may, for example, include telemetry data or perhaps some info that could be used to identify a particular vehicle.

In this example, we’re going to add a custom VIN entry to the VHAL. This will enable app developers to read VIN information from a supposed vehicle platform.

First off, let’s start with downloading the actual source code. As mentioned above, Android 12.1 (Sv2) is the release we’re going to go with. It supports version 32 of the API, which is more than enough to get us started.

In order to get the sources, run the following commands, having installed the source control tools:

repo init -u https://android.googlesource.com/platform/manifest -b android-12.1.0_r27 --partial-clone --clone-filter=blob:limit=10M

repo sync -c -j16

Partial clone capability and choice of a single branch make sure that the download takes as little time as possible.

After downloading the source, locate the DefaultConfig.h file and add the following entry to kVehicleProperties:

{.config =
     {
         .prop = toInt(VehicleProperty::INFO_VIN),
         .access = VehiclePropertyAccess::READ,
         .changeMode = VehiclePropertyChangeMode::STATIC,
     },
 .initialValue = {.stringValue = "1GCARVIN123456789"}},

An overview of HAL properties can be found in the reference documentation.

Build

Having modified the default HAL implementation, we’re now free to run the build for an ARM target. Run the following instructions inside the AAOS source directory – using screen is highly recommended if connecting through SSH:

screen
. build/envsetup.sh
lunch sdk_car_arm64-userdebug
m -j16          # build the requisite partitions
m emu_img_zip   # pack emulator artifacts into a downloadable .zip

Note the sdk_car_arm64-userdebug target needed for emulation on ARM-powered Macs. A car_arm64-userdebug variant also exists. Make sure not to confuse the two – only the former has emulation capabilities! Try running lunch without parameters to see a full list of targets.

The -jXX parameter specifies the number of threads to use while building Android. If the thread count is not provided, the build system will try to optimize the number of threads automatically. Patience is advised, as even with decent hardware resources the compilation is bound to take a while.

The resulting emulator artifact should be available in the out/ directory under sdk-repo-linux-system-images.[suffix].zip to be downloaded via scp or your file transfer client of choice.

Running a custom emulator in Android Studio

Now that we have our bespoke emulator image built, there’s a little trick involved in making it available for local development without creating a whole distribution channel, as outlined in the manual.

First, locate the ~/Library/Android/sdk/system-images/android-32 folder and unzip your emulator archive there. The directory can be given an arbitrary name, but the overall structure should follow this layout:

~/Library/Android/sdk/system-images/android-32
  |_ [your name]
     |_ arm64-v8a

E.g., ~/Library/Android/sdk/system-images/android-32/custom_aaos/arm64-v8a.

Second, download the attached example package.xml file and adjust the device name to fit your needs. A package.xml is normally added when an emulator package is downloaded and unpacked from the Internet, so it needs to be recreated when unzipping locally. After restarting Android Studio, Device Manager should offer the option to use your brand new ARM image with an Automotive AVD of your choice.

After successfully running the emulator, a newly created VIN property should be visible in the Vhal Properties of Car Data. Nice one!

While reading VHAL property values is out of the scope of this article, it should be easy enough with a couple of Car library calls, and Google created an example app that does the very thing.

Downloading the above example (CarGearViewerKotlin) is highly recommended – if you’re able to build and run the app on the emulator, you’re all set!

Facilitating AAOS development on M1

One of the problems I stumbled upon during the development environment setup was that the Car library was not being detected by Android Studio, while the app still built normally from the CLI. This appears to be a known issue, with no official patch yet released (as of October 2022). However, a simple workaround of including a .jar of the Android Automotive library appears to work.

In case you run into this problem, import the library from ~/Library/Android/sdk/platforms/android-32/optional/android.car.jar by copying it into the libs/ directory in the project root, and add the following directive to your main build.gradle file, if not present:

dependencies {
    implementation fileTree(include: ['*.jar'], dir: 'libs')
    ...
}

Once the project is re-imported into the IDE, Android Studio should be able to pick up the Android Car library for import and autocomplete suggestions.

The Real Deal

Emulators are sufficient for testing purposes, but what about real devices, such as branded infotainment centers? As mentioned before, at least two major vendors (Volvo and Polestar) offer the integrated Android Automotive experience out-of-the-box in their vehicles. System images and implementation details, however, are proprietary and require enrollment into their respective developer partnership programs. Polestar offers a free AVD that emulates Polestar 2 behavior, along with the screen size, frame, and hardware controls – alas, it is currently only available for x86-64 platforms.

One of the alternatives worth considering is the installation of Android Automotive on a real device – be it a tablet or even a Raspberry Pi platform. Some modules will still require virtualization, but switching to a physical device could be a major step in the direction of better hardware compatibility.

All the above concerns raise the question – how do you get an app to work on a real AAOS inside a car? I haven’t found a conclusive answer to that question, at least not one that doesn’t involve third parties holding the actual documentation resources for their devices. It makes sense that some doors will stay closed to the general programming audience due to the security implications of creating apps for cars. No one, after all, would want their vehicle to be taken over by a rogue party, would they?

Final thoughts

Programming for Android Automotive is still an adventurous endeavor. Even though the system has been around since 2017 (with APIs open to public in mid-2019), official documentation can still feel somewhat inaccessible to newcomers, and the developer community is still in its budding phase. This requires one to piece together various bits of official guides and general Stack Overflow knowledge.

Bottom line: AAOS is still behind the degree of engagement that the regular Android operating system has been enjoying so far. The future is looking bright, however, with vendors such as GM, Honda, BMW, and Ford eager to jump on the automotive development bandwagon in years to come. If that’s the case, the ecosystem will inevitably expand – and so will the community and the support it provides.

Continuous deployment of iOS apps with Fastlane

Automation helps with all the mundane and error-prone tasks that developers need to do on a daily basis. It also saves a lot of time. For automating iOS beta deployment, a very popular tool is Fastlane. The tool automates entire iOS CI/CD pipelines and allows us to keep iOS infrastructure as code. Today I will share my observations and tips regarding setting up fastlane for an iOS project.

Setting up Jenkins

Jenkins is an open-source automation server, but you can use fastlane on your local machine too. However, a separate build server can certainly help you create a more streamlined process.

The Jenkins installation process is pretty straightforward, so I will not be going into details here. But there are some quirks that we should take into consideration.

To alleviate the problem with a detached HEAD, we should provide the Branch Specifier with “origin”, e.g. “origin/master”, and choose the option Check out to specific local branch with a branch name including **. For more details, see the Jenkins help.

All the tool commands should be put in the Execute shell phase.

Setting up Fastlane

Fastlane is used to simplify the deployment of iOS and Android apps. It is well documented, and its community is always ready to help in case of any bugs. We can install it using Homebrew or RubyGems, and I personally recommend the latter, as I stumbled upon some problems installing fastlane plugins after using `brew` to install fastlane.

To integrate the tool with our project, we should open its directory in terminal and type:

 fastlane init

It will start a very intuitive wizard that guides us through the process of setting up fastlane for our project. Once it finishes, fastlane is configured and we can customize existing lanes or create new ones.
Fastlane lanes are written in Ruby. Even if you are unfamiliar with this programming language, learning the basics needed to configure it shouldn’t take too long. There is also a version of fastlane that supports Swift; however, it is still in beta.
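
To give a rough idea of that Swift flavor, a beta lane in the generated Fastfile.swift could look like the sketch below. The scheme name is a placeholder, and the action wrappers (incrementBuildNumber, buildApp, uploadToTestflight) are the Swift counterparts fastlane generates for the corresponding Ruby actions:

// A sketch of a lane in Fastlane Swift; lane names are derived from method names,
// so betaLane() is invoked as "fastlane beta". "MyApp" is a placeholder scheme.
class Fastfile: LaneFile {
    func betaLane() {
        desc("Build the app and upload it to TestFlight")
        incrementBuildNumber()        // bump the build number before archiving
        buildApp(scheme: "MyApp")     // archive the app with the given scheme
        uploadToTestflight()          // push the resulting build to TestFlight
    }
}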

Fastlane tips

The tool provides a few ways of handling code signing, e.g. “cert” and “sigh”, or “match”. However, if you plan to use fastlane mainly on a CI/CD server, you can rely on automatic code signing in Xcode and install the certificates manually on that machine. In that case you don’t need to use any of those actions in your lanes.

If you prefer to move your Fastfile (the fastlane configuration file) to a different directory than the project’s directory, you will have to provide the paths to the actions manually. In that case it is better to use absolute paths, since using just the ../ prefix may land you in a different directory than you would expect.

The error block error do |lane, exception| is very useful for error notifications. For example, you can post a message to Slack when a lane executed by the CI server fails.

Using Azure DevOps Platform for configurable builds of a multicomponent iOS application

In this article, we share our experience with building CI/CD for a multi-component, multi-language project. The article describes the structure of the pipeline setup and focuses on two important features needed in our project’s automation workflow: pipeline chaining and build variants.

Using CI/CD is standard in any application development process, and mobile apps are no exception.

In our project, we have several iOS applications and libraries. Each application uses several components (frameworks) written in different languages. The component structure is described below.

The internal component contains all the core (domain) logic that the apps use. The first two components are C/C++ based and are compiled as frameworks. The wrapper framework provides an Objective-C/Swift layer that is necessary for using them in an iOS application. Several iOS applications use the wrapper framework. Additionally, this framework is also used by external developers in their own applications.

The wrapper framework should be built for both the x86_64 and arm64 architectures so it can be used on a simulator and on a real iOS device. We also need a debug and a release version for each architecture. When it comes to applications, each of them may be built for the App Store, internal testing (Ad-Hoc), or TestFlight beta testing.

Without an automated CI/CD system, it would be extremely inefficient to build the whole chain of components manually, as well as to track the status of merges/pull requests for each component, that is, to verify that the component still builds after the merge. Let’s see how our pipelines are organized.

Using Azure DevOps pipelines

For building CI/CD, we’ve chosen Azure DevOps. We use Azure Pipelines for building our components and Azure Artifacts to host the built components, as well as several external 3rd party libraries.

To check the integrity and track the build status of each component, we have special integration pipelines that are integrated with GitHub. That is, each pull request that is going to be merged to the development branch of a particular component triggers this special integration pipeline.

For regular builds, we have pipelines based on the purpose of each branch type: experimental, feature, bugfix, development, and master.

Since each component depends on another component built on Azure, we need to organize the dependency management somehow, that is, versioning the dependent components and downloading them. Let’s take a look at our approach to dependency management.

Dependency management

Azure provides basic CLI tools to manipulate pipelines. We can use them to download the dependencies (in the form of Azure artifacts) required to build a particular component. At a minimum, we need to know the version, configuration (debug or release), and architecture (x86_64 or arm64) of a particular dependency. Let’s take a look at the options that the Azure CLI gives us:

az artifacts universal download \
    --organization "${Organization}" \
    --feed "${Feed}" \
    --name "${Name}" \
    --version "${Version}" \
    --path "${DownloadPath}"

The name and version parameters are the most important for us. The CLI does not provide explicit support for build configuration or architecture. For this purpose, we simply use the name (specified as the --name parameter), which has a predefined format:

<component name>-<configuration>-<architecture>

This makes it possible to have components of the same version with different architecture and build configurations.

The other aspect is how to store info about the version, configuration, etc., for each dependency. We’ve decided to use the git config format to store this info. It’s pretty easy to parse using git config and does not require any additional parsing tool. So, each component has its own dependencies.config file. Below is an example file for a component that depends on two frameworks:

[framework1]
    architecture = "arm64"
    configuration = "release"
    version = "1.2.3.123"

[framework2]
    architecture = "arm64"
    configuration = "release"
    version = "3.2.1.654"

To make it possible to download dependencies as part of the build process, we have a special script that manages dependencies. The script is run as a build phase of the Xcode project of each component. Below are the basic steps the script does.

1. Parse the dependencies.config file to get the version, architecture, and configuration. The important thing here is that if some info is omitted (e.g., we may not specify the build configuration in the dependencies.config file), the script will use the one the dependent component is being built with. That is, when we build the current component for the simulator, the script will download dependencies for the simulator architecture.

2. Form the artifact’s name and version and forward them to the az artifacts universal download command.

There are two key features of our build infrastructure: pipeline chaining and build variants support. They cover two important cases in our project. Let’s describe how we implemented them.

Chaining pipelines

When a low-level core component is updated, we want to test these changes in the application. For this purpose, we should build the framework that depends on the core component and then build the app using this framework. Automation here is extremely useful. Here’s how it looks with our pipelines.

1. When a low-level component (let’s call it component1) is changed on a specific branch (e.g., integration), a special integration pipeline is triggered. When the component is built and an artifact is published, the pipeline starts another pipeline that will build the next dependent component. For this purpose, the az pipelines build queue command is used as follows:

az pipelines build queue \
    --project "component2" \
    --branch "integration" \
    --organization "${Organization}" \
    --definition-name "${BuildDefinition}" \
    --variables \
        "config.pipeline.component1Version=${BUILD_BUILDNUMBER}" \
        "config.pipeline.component1Architecture=${CurrentArchitecture}" \
        "config.pipeline.component1Configuration=${CurrentConfiguration}"

This command starts the pipeline for building component2 (the one dependent on component1).

The key part is passing the variables config.pipeline.component1Version, config.pipeline.component1Architecture, and config.pipeline.component1Configuration to the pipeline. These variables define the version, build configuration, and architecture of component1 (the one being built by the current pipeline) that should be used to build component2. The command overrides the corresponding values from the dependencies.config file of component2. This means that the resulting component2 will use the newly built component1 dependency instead of the one defined by the dependencies.config file.

2. When component2 is built, it uses the same approach to launch the next pipeline for building a subsequent component.

3. When all the components in the chain required by the app are ready, the integration pipeline building the app is launched. As a part of its build process, the app is sent to TestFlight.

So, simply pushing changes of the lowest level component to the integration branch gives you a ready-to-test app on TestFlight.

Build variants

Some external developers that use the wrapper iOS framework may need additional features that should not be available in the regular public API intended for other developers. This brings us to the need for different variants of the same component. Such variants may differ in the features they provide, or in the behavior of the same features.

Additional methods or classes may be provided as a specific or experimental API in the wrapper framework for iOS. The other use case is behavior that differs from the default one of the regular (official) public API in the wrapper framework. For instance, a method that writes an image file to a specified directory may in some cases be required to also write additional files along with the image (e.g., a file with image processing settings or metadata).

Going further, an implementation may be changed not only in the iOS framework itself but also in its dependencies. As described previously, the core logic is implemented in a separate component that the iOS framework depends on. So, when a code behavior change is required by a particular build variant, most likely it will also be made in the internal component.

Let’s see how best to implement build variants. A proper understanding of the use cases and potential extension capabilities is crucial for choosing the correct solution.

The first important thing is that in our project different build variants have few API differences compared to each other. Usually, a build variant contains a couple of additional methods or classes. Most of the code is the same for all variants. Inside the implementation, there may also be some distinctions based on the concrete variant we’re building. So, it is enough to have a preprocessor definition (an active compilation condition for Swift) indicating which build variant is being built.

The second thing is that the number of build variants changes often. Some may be removed (e.g., when an experimental API becomes generally accessible). On the other hand, when an external developer requests another specific functionality, we need to create a new variant by slightly modifying the standard implementation or exposing some experimental/internal API. This means that we should be able to add or remove build variants quickly.

Let’s now describe our implementation based on the specifics given above. There are two parts of the implementation. The first one is at the pipeline level.

Since we may often add or remove build variants, creating a pipeline for each build variant is obviously not a good idea. Instead, we add a special variable, config.pipeline.buildVariant, to the Variables of each pipeline that is supposed to be used for building different variants. The variable should be added to the pipelines of all the components the resulting iOS framework depends on, because a specific feature often requires code changes not only in the iOS framework itself but also in its dependencies. The pipeline implementation then uses this variable, e.g., for downloading specific dependencies required by a particular variant, tagging the build to indicate the variant, and, of course, providing the corresponding build setting to the Xcode build command.

The second part is the usage of the build variant setting provided by the pipeline inside the Xcode project. Using Xcode build settings, we add a compile-time constant (a preprocessor definition for Objective-C/C++ code and an active compilation condition for Swift) that reflects the selected build variant. It is used to control which functionality to compile. This build setting may also be used to select variant-specific resources to be embedded into the framework.
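
As a sketch of that second part, the snippet below gates an experimental API behind a compilation condition. VARIANT_EXPERIMENTAL and ImageWriter are illustrative names only; in the real setup the condition would be derived from config.pipeline.buildVariant and passed to the Xcode build command:

import Foundation

// Illustrative only: VARIANT_EXPERIMENTAL stands for an active compilation condition
// set per build variant; ImageWriter is a made-up example class.
public final class ImageWriter {
    public init() {}

    // Official public API, available in every build variant.
    public func write(_ imageData: Data, to directory: URL) throws {
        try imageData.write(to: directory.appendingPathComponent("image.jpg"))

        #if VARIANT_EXPERIMENTAL
        // Variant-specific behavior: also persist processing metadata next to the image.
        try writeMetadata(to: directory)
        #endif
    }

    #if VARIANT_EXPERIMENTAL
    // Experimental API compiled only into the experimental build variant.
    public func writeMetadata(to directory: URL) throws {
        try Data("{}".utf8).write(to: directory.appendingPathComponent("image-metadata.json"))
    }
    #endif
}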

When chaining pipelines, we just pass the variable on to the next pipeline:

az pipelines build queue \
    --project "component2" \
    --branch "integration" \
    --organization "${Organization}" \
    --definition-name "${BuildDefinition}" \
    --variables \
        "config.pipeline.component1Version=${BUILD_BUILDNUMBER}" \
        "config.pipeline.component1Architecture=${CurrentArchitecture}" \
        "config.pipeline.component1Configuration=${CurrentConfiguration}" \
        "config.pipeline.buildVariant=${CONFIG_PIPELINE_BUILDVARIANT}"

Summary

In this article, we’ve described our approach to a multi-component app CI/CD infrastructure based on Azure. We’ve focused on two important features of our build infrastructure: chaining component builds and building different variants of the same component. It’s worth mentioning that the described solution is not the only correct one; it is simply the one that best fits our needs. You may experiment and try different approaches utilizing the flexible pipeline system that Azure provides.
