SpareBank 1 Utvikling

This is SpareBank 1 Utvikling. On this page you can get to know who we are, what we do and how we work, through the people who actually work here. Then you can judge for yourself whether you would like to become part of what we believe is the best and friendliest IT environment in the industry. With the most skilled people.

We create a simpler and better everyday economy

We develop and maintain solutions used by more than 1 million customers. We are really proud of that! Here you can get to know the company better, the way we work, the solutions we build and, not least, the very best day of the week: Fagdagen, our professional development day.

Ambience photo from SpareBank 1

SpareBank 1 Utvikling on Medium @sparebank1-digital

Azure Landing Zone Vending — Part 3

Introduction

Welcome to part 3 in our series on Azure landing zone vending. In the previous posts we explored the concept and its implementation in SpareBank 1. In this post we dive into the technical details of how we use the information from our Power Apps to automate the provisioning of our Azure landing zones. We have set an internal target: a production-ready Azure landing zone, provisioned within 20 minutes of the request being submitted.

Standardization Through Code

In the SpareBank 1 Alliance, Azure landing zones are used by a wide range of users, from newcomers to advanced developers. Such a diverse user base drives the need for a standardized approach that ensures consistency and ease of use. Whatever the experience level, the goal is a common ground where users can easily, safely and cost-effectively deploy and manage their resources in Azure.

To achieve this, there are some recommended design areas that should be covered when creating landing zones. To read about them, visit: Azure landing zone design areas — Cloud Adoption Framework | Microsoft Learn. We have summarized these design areas as follows: a standardized approach plays a critical role when implementing security and governance practices at scale, where all landing zones adhere to organizational security policies and cost controls, irrespective of the team or individual managing them.

Automating the provisioning of subscriptions/landing zones and the configuration of these design areas is commonly referred to as subscription vending. Our vending machine uses Microsoft-native tooling: the deployment runs in Azure DevOps pipelines, the deployment steps/tasks are PowerShell scripts, and the subscription along with the Azure resources are deployed using Bicep.

The pipeline tasks depend on a standardized JSON file for each landing zone, containing the information needed to start the deployment.
Below is an example of what our landing zone JSON file looks like:

```json
[
  {
    "name": "string",                      // Landing zone name
    "managementGroup": "string",           // Private, Public or Sandbox mgmt group
    "tags": {
      "companyCode": "string",
      "projectNumber": "string",
      "environment": "string",             // Dev, Test, QA, Production
      "defender": "string",
      "spendingLimit": integer,            // Expected monthly usage - budget
      "costOwner": "string"
    },
    "workload": "string",                  // Subscription type - DevTest or Production
    "IAM": {
      "groupAdmins": [ "string" ]          // Email addresses of users who will manage Entra ID groups
    },
    "billing": {                           // Microsoft Customer Agreement billing info for costs
      "billingProfile": {                  // Bank/company billing profile details
        "name": "string",
        "id": "string"
      },
      "invoiceSection": {                  // Invoice section details
        "name": "string",
        "id": "string"
      }
    },
    "security": {
      "emailNotification": {}
    },
    "networking": {
      "required": bool,                    // True or False - if sandbox == False
      "vnets": [
        {
          "deploy": bool,
          "resourceGroupName": "string",
          "rgLocation": "string",
          "vnetName": "string",
          "vnetLocation": "string",
          "managedVnet": bool,
          "managedNSG": bool,
          "joinToVWAN": bool,              // Peering to vwan hub
          "vwanHubRegion": "string",
          "size": "string",                // Small, Medium, Large
          "addressPrefix": [ "string" ],   // Allocated IP addresses for vnet
          "subnets": [
            {
              "name": "string",            // Subnet name
              "addressPrefix": "string",   // Allocated IP address for subnet
              "nsg": "string",             // If "default", the default NSG for the given subnet is used
              "peEnabled": bool            // True or False - depends on private, public or sandbox mgmt group
            }
          ]
        }
      ]
    }
  }
]
```

To gather the information needed to populate the JSON file, we use the Power Apps covered in our previous post. Each order submitted from the Power App creates a .json file, stores it in our repository and triggers the deployment process.

The Vending Machine Deployment Process

Our landing zone provisioning process is divided into two main Azure DevOps pipelines. This split is more of a personal and operational preference for us.
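To make the landing zone schema concrete, here is what a filled-in order for a hypothetical landing zone might look like. Every value below is invented for illustration; it is not a real order from our environment:

```json
[
  {
    "name": "lz-payments-dev",
    "managementGroup": "Private",
    "tags": {
      "companyCode": "1200",
      "projectNumber": "P-4711",
      "environment": "Dev",
      "defender": "Enabled",
      "spendingLimit": 500,
      "costOwner": "jane.doe@example.com"
    },
    "workload": "DevTest",
    "IAM": {
      "groupAdmins": [ "jane.doe@example.com" ]
    },
    "billing": {
      "billingProfile": { "name": "Example Bank", "id": "XXXX-XXXX-XXX-XXX" },
      "invoiceSection": { "name": "IT Development", "id": "YYYY-YYYY-YYY-YYY" }
    },
    "security": {
      "emailNotification": {}
    },
    "networking": {
      "required": true,
      "vnets": [
        {
          "deploy": true,
          "resourceGroupName": "rg-lz-payments-dev-network",
          "rgLocation": "norwayeast",
          "vnetName": "vnet-lz-payments-dev",
          "vnetLocation": "norwayeast",
          "managedVnet": true,
          "managedNSG": true,
          "joinToVWAN": true,
          "vwanHubRegion": "norwayeast",
          "size": "Small",
          "addressPrefix": [ "10.10.0.0/24" ],
          "subnets": [
            {
              "name": "snet-app",
              "addressPrefix": "10.10.0.0/26",
              "nsg": "default",
              "peEnabled": true
            }
          ]
        }
      ]
    }
  }
]
```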
Additional settings are enforced by policies on the landing zone, based on the management group placement of the subscription. The lz-networking pipeline is used quite often due to its role in managing subnets and NSG rules for the landing zones. By separating this into its own pipeline, we significantly reduce the time needed to add, modify or delete subnets and NSG rules.

Pipelines

The first pipeline (lz-provisioning) contains five tasks that mainly focus on creating a subscription with the configuration we want.

lz-provisioning pipeline tasks

Task 1: Subscription provisioning — creates the Azure subscription.
Task 2: Entra ID groups — we create three Entra ID groups (reader, contributor, owner) used for access control of the landing zone.
Task 3: Privileged Identity Management role configuration — ensures PIM is configured for the landing zone, with role definitions based on the Entra ID groups from the previous task.
Task 4: Resource providers — we enable 35 resource providers by default for all our landing zones. This list is updated according to the needs of our users.
Task 5: Subscription configuration — the last task, where we configure the landing zone itself.

Sample Bicep for 05-sub-config

Steps in this Bicep deployment:

Management group — move the subscription to the correct place in the management group hierarchy based on the "managementGroup" parameter in landingZoneName.json.
Tagging — create subscription-level tags.
Defender — enable Defender for Cloud.
Budget — create a budget that alerts users if they exceed the limit they provided in the ordering schema. This is not a hard cap; the goal is to make users more aware of their costs by alerting them at the 70%, 95% and 100% thresholds.
Lastly, we configure default landing zone access.
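As a rough illustration of what such a subscription-configuration step can look like, here is a Bicep sketch covering the tagging and budget steps. This is not our actual template; the resource names, API versions and threshold wiring are assumptions for illustration:

```bicep
// Hypothetical sketch of a 05-sub-config-style deployment:
// subscription-level tags and a cost budget with alert thresholds.
targetScope = 'subscription'

param tags object          // tags object from the landing zone JSON file
param spendingLimit int    // expected monthly usage from the order
param costOwner string     // email address to notify
param budgetStartDate string = '2024-01-01'

// Apply the ordered tags at subscription level
resource subTags 'Microsoft.Resources/tags@2021-04-01' = {
  name: 'default'
  properties: {
    tags: tags
  }
}

// Budget that alerts (but does not cap) at 70%, 95% and 100%
resource budget 'Microsoft.Consumption/budgets@2021-10-01' = {
  name: 'lz-monthly-budget'
  properties: {
    category: 'Cost'
    amount: spendingLimit
    timeGrain: 'Monthly'
    timePeriod: {
      startDate: budgetStartDate
    }
    notifications: {
      warning70: {
        enabled: true
        operator: 'GreaterThan'
        threshold: 70
        contactEmails: [ costOwner ]
      }
      warning95: {
        enabled: true
        operator: 'GreaterThan'
        threshold: 95
        contactEmails: [ costOwner ]
      }
      exceeded100: {
        enabled: true
        operator: 'GreaterThan'
        threshold: 100
        contactEmails: [ costOwner ]
      }
    }
  }
}
```

The management group move and access configuration would be handled in the same deployment, at management group or tenant scope.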
Default landing zone access consists of basic read permissions and just-in-time access to the landing zone using PIM.

lz-network pipeline task

The second pipeline triggered by our Power Apps is the network pipeline (lz-network). This is where we configure the connectivity part of the landing zone. The pipeline contains only one task, which has all our networking setup in one main.bicep template. This template calls out to other Bicep templates (modules):

Sample Bicep for 01-net-networking

Resource group — for Network Watchers, created first.
Network Watchers — created for a selection of regions. These are used by the Network Security Group flow logs.
Resource group — for the other network resources. The Network Watcher resources could also reside in this resource group, but due to the added code complexity when handling multiple regions, we decided to keep Network Watchers separate. Long term we will merge all network resources into the same resource group.
Application security groups — created to simplify network security rules.
Network security group — each virtual network will in our case have a default NSG assigned to all subnets. Flow logs are enabled by policy and forwarded to our common Log Analytics workspace for insight and troubleshooting. A selection of default rules (deny all inbound/outbound) is applied, and basic platform rules for DNS etc. are added to all NSGs as part of their base deployment.
Virtual network — a virtual network is created based on the input in the landingZoneName.json file.
Virtual WAN connection — we use Azure Virtual WAN as our network backbone, and each virtual network is connected to this vWAN (if the bool is set to true in the landingZoneName.json file).

How we organize our code

Now, let's take a look at how we organize and set up our deployment process.

Pipeline repos

Each pipeline has its own Azure DevOps repository. Within each repository we create a folder for each task that will be triggered.
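A hypothetical sketch of this layout might look as follows. Only the 05-sub-config, .pipeline, root.yaml, pipeline-create.ps1 and eaz-task.ps1 names come from this post; the remaining folder names are invented to match the task descriptions:

```
lz-provisioning/                 # one Azure DevOps repo per pipeline
├── .pipeline/
│   ├── pipeline-create.ps1      # creates the Azure DevOps pipeline
│   └── root.yaml                # the pipeline definition itself
├── 01-sub-provisioning/         # one folder per deployment task
├── 02-entra-groups/
├── 03-pim-config/
├── 04-resource-providers/
└── 05-sub-config/
    ├── eaz-task.ps1             # task script referenced by scriptPath
    └── main.bicep               # Bicep template used by the task
```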
Each repository also includes a folder called .pipeline that contains the YAML file for the pipeline.

root.yaml sample

Looking at the first folder, .pipeline, we have two files in use:

pipeline-create.ps1 — a PowerShell script that creates the pipeline for us.
root.yaml — the pipeline itself. We start off with some input parameters, then clone the repositories that contain the scripts, settings and templates used by the deployment tasks. Lastly, we have the tasks that will be triggered within the pipeline.

As this example shows, we correlate the naming of folders with their respective tasks. Following this standardized, consistent approach:

Makes it easier to document how the pipelines work
Creates predictability in the configuration across all existing and new pipelines
Results in easier and faster creation of new pipelines
Makes debugging existing pipelines easier

If we look at one of the deployment tasks (05-sub-config) in root.yaml, we can easily map it to our folder structure. As shown in the task on the right side, the scriptPath and template being used point to the files under the 05-sub-config folder. This is just to show the relation between tasks and files in our setup.

eaz-task.ps1

The eaz-task.ps1 file is referenced in scriptPath for each of our tasks. It is a PowerShell script designed to offload some complexity from the Bicep template. We start by defining the mandatory and optional parameters needed for the task and the Bicep templates. Mandatory parameters must be provided for the script to run and come as input through the YAML deployment task.

Next, we wrap all our code inside a try-catch-finally for error handling purposes.
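A skeleton of that structure might look like the sketch below. Apart from the module and function names mentioned in this post (eunomia-common.psm1, Get-eunomiaAsciiArt, New-eunomiaTemplateDeployment), every name, parameter and path is an invented assumption, not our actual script:

```powershell
# Hypothetical skeleton of an eaz-task.ps1-style wrapper script.
[CmdletBinding()]
param (
    [Parameter(Mandatory = $true)]
    [string] $LandingZoneName,

    [Parameter(Mandatory = $true)]
    [string] $TemplateFile,

    [Parameter()]
    [string] $Location = 'norwayeast'
)

try {
    # Import the shared function library used by all tasks
    Import-Module -Name "$PSScriptRoot/../eunomia-common.psm1" -Force

    # Visual feedback in the pipeline log
    Get-eunomiaAsciiArt

    # Build the deployment parameter object consumed by the deployment function
    $deploymentParameters = @{
        Name         = "05-sub-config-$LandingZoneName"
        TemplateFile = $TemplateFile
        Location     = $Location
        Parameters   = @{ landingZoneName = $LandingZoneName }
    }

    # Execute the Bicep deployment via the shared module function
    New-eunomiaTemplateDeployment @deploymentParameters
}
catch {
    # Surface the failure in the pipeline output and fail the task
    Write-Error "Deployment failed: $_"
    throw
}
finally {
    # Runs whether or not an error occurred: final messages and cleanup
    Write-Output "Task finished for $LandingZoneName"
}
```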
Error handling can be made significantly more sophisticated than what we are doing in this example, but a very simple try/catch has done the job well (enough) for us so far.

eaz-task.ps1 try{}

The first thing we do inside the try block is import our script module file, eunomia-common.psm1, which contains a set of PowerShell functions that can be used by the deployment. One of these functions is Get-eunomiaAsciiArt, a silly function we use to display ASCII art and give visual feedback in the pipeline. We highlight it because (well, it's fun) it is a good example of reusable PowerShell functions in our deployment framework.

Next, we build the variables that will be forwarded to the Bicep deployment by importing a combination of configuration files. The script then generates a deployment parameter object. This object contains all the information our deployment function needs, as well as the template parameters used inside the Bicep template when the deployment runs.

New-eunomiaTemplateDeployment

The script executes the deployment by calling the function New-eunomiaTemplateDeployment from the already imported module eunomia-common.psm1. The function expects the deploymentParameters object as input.

Lastly, we have the catch and finally blocks. The catch block handles exceptions that occur during execution and writes them to the output. The finally block runs regardless of whether an error occurred; we use it for final messages and cleanup.

The Epilogue

Wrapping up this part of our Azure landing zone vending series, we have shared a glimpse into our approach to provisioning Azure landing zones. Hopefully this peek behind the curtain has given you some inspiration or ideas you can use to create your own vending machine. This example, while focused on a single tenant, is just the tip of the iceberg.
We are already managing our landing zone provisioning in multiple tenants: more precisely, 8 tenants and counting, with a lower future estimate of 14–15 and a higher estimate in the 30s!

Special thanks to Matthew Greenham & Roger Carson for guidance and for participating in this post.

Further reading

Keep an eye out for Part 4 ;)
Part 1:
Part 2:

Azure Landing Zone Vending — Part 3 was originally published in SpareBank 1 Utvikling on Medium, where people are continuing the conversation by highlighting and responding to this story.

By Erhan Mikael Sanlioglu

How will I know if I’m good at my job in a world with AI?

"How will I know if I'm doing a good job?" is a question I've asked in almost every job interview I've had. I later realized it's probably not the most common question, based on the slightly stunned reactions I got when I asked. Despite that, it's a question I ask myself now and then as I struggle to figure out how I'm doing professionally.

I've been working for 8 years as both a UX designer and a front-end developer, and I still have no idea if I'm where I'm supposed to be skill-wise for my experience. Not because I'm not handling my day-to-day, but because I have no idea where the bar is, or if there is one. In university, everything we do is graded, or at least we get some kind of feedback, making it fairly easy to know if we're keeping up with the expected level. Whether the level from university is the same as the one expected by companies is a whole other topic, but the fact that the requirements for what is considered a junior and a senior developer vary from company to company shows that we're struggling to define what's expected of us as developers and designers.

I mention both developers and designers because I have experienced the same insecurity in both roles, in different ways. As a UX designer, I worried about whether I had enough insight into the business or service to make good design decisions, along with insecurity about my ability to be creative under pressure. As a developer, I worry about keeping up to date with everything and coding in a way that follows good practices and won't confuse me and/or my co-workers in the future. Ironically, none of these worries are mentioned in any of the articles I found about what makes a good developer or designer.

Curiosity. Open-mindedness. Problem solver, good team player, willingness to adapt: these are the descriptions that keep being repeated in those articles. All of them are personality traits more than actual skills and knowledge within our field of work, and not to mention incredibly hard to measure!
If this is truly the metric of how good a developer or designer we are, then the only way to know how we're doing is through talking to our peers, and getting and trusting their feedback, and in some cases our users'. In many ways that means we're only as good as the quality of the communication and relationships we have with the people around us.

But if we're only as good as the feedback we get, what will happen to our confidence and our perception of our value and skill now that AI is set to take over many of our feedback arenas? Where a co-worker previously offered advice or a thumbs-up during a coding session or review, an AI is now set to do that job for us. Where design critiques have been used to discuss different approaches to the UI, AI can now generate several options for us. We're not quite there with the quality of the tools yet, but one day we will be. While that will be amazing and increase productivity in so many ways, I also can't help but wonder about the impact it might have on our self-image and motivation long term.

Earlier in 2023 I took a leadership class where we, among other things, spoke about motivation. I'm not going to dive into the details of that class in this article, but simply put, we can divide motivation into two types: internal and external. External motivation is when the drive to complete a task comes from an external factor, such as fear of getting into trouble with the boss or just wanting to get paid. While internally motivated people also want to get paid, their main source of motivation comes from themselves. They are motivated to complete a task, and do it well, simply because it's a rewarding challenge and something they want to do. Internal motivation is the result of several factors in our day-to-day, one of them being how clearly we can see the value of our contributions. And what is the value of our contributions when they might as well be the output of an AI?
Another factor is the freedom to make our own choices, and how often will we be able to defend our design or code choices against an "all-knowing" AI?

We're past the point of whether or not we should use AI tools; now the question is which tools. And while we discuss all the different options, do we take the time to discuss which human interactions we're losing to those tools, and how that can change our expectations of what a good developer or designer is? Maybe we should.

Because if how good I am at my job is decided by the feedback I get, and my motivation by my contributions and my freedom of choice, then it's easy to be discouraged when the feedback is based on a comparison with "the world" rather than the people around us, or people with the same skill set and experience. It's also easy to be demotivated if the freedom to make code and design choices is limited to the conclusion of a machine.

So how will I know if I'm doing a good job in a world with AI, when I barely knew in a world without it?

How will I know if I'm good at my job in a world with AI? was originally published in SpareBank 1 Utvikling on Medium, where people are continuing the conversation by highlighting and responding to this story.

By HeleneKassandra

Azure Landing Zone Vending — Part 2

Introduction

In part 2 of this 3-part series, we provide a comprehensive guide to how we harnessed the power of the Power Platform to automate the creation of Azure landing zones. Our primary goal is to streamline and automate the whole process and eliminate the need for manual configuration.

As mentioned in part 1, Microsoft has done a very good job of documenting everything that needs to be considered when setting up a vending machine. This is now included in the Cloud Adoption Framework. In this blog post we focus primarily on the following areas:

Business Logic

The MVP solution we first implemented was a Form that populated a SharePoint Online list. We knew we eventually wanted a better solution, but had to prioritize this work earlier than expected because a simple order form caused so many issues. One of the biggest was that we had no input validation, which meant that some other flows failed (email notifications, and budgets containing decimal points, for example!). We also experienced that Forms in a multi-tenant environment has a number of limitations, specifically around security. We therefore decided to go all in and pursue the subscription vending concept.

We knew then that we needed input validation and support for users from multiple tenants. In addition, because of the distributed support model we currently have in the alliance, there is currently no shared ITSM tool we could use as a portal. For these reasons we soon came to the conclusion that a dedicated Power App was the way to go, together with Power Automate for supporting functions.

Approval process

We have thought about this, but considering our complex structure, including the different banks and their own processes, we decided not to include much in the way of an approval process. There are two ways in which requests are limited, though.
There is an Entra ID group per bank (the bank decides who gets access) that grants the permissions needed to use the Power App and order a landing zone, and we have a basic check that blocks requests we deem unreasonable (for example, a public landing zone containing customer data without a completed risk analysis). Obviously this is easy to get around, but it is designed to increase awareness of what is being requested and the potential implications of that.

Make a Subscription Request

Sounds easy, right? Just a portal for ordering landing zones... well, not so much actually; there is lots to consider! The vending concept is more than just an order form, because we need to harvest information that can then be used directly as a source for provisioning a landing zone. In a nutshell, what we do with this information is convert it to a JSON file that can then be picked up by the pipelines that provision the landing zone. This means we must ensure that the data we harvest is correct, both in value and in form.

Building the Power App

Our solution encompasses various components, all fully native Microsoft technology, ensuring a cohesive integration. The front end of our application is developed in Microsoft Power Apps, a low-code tool that combines a user-friendly interface with robust logic and workflow capabilities. Power Automate, another low-code tool, handles the back-end logic of our application. The synergy between Power Apps and Power Automate is the cornerstone of our approach.

Our application structure may differ from yours, as our guide might include configurations specific to our needs. Notably, our multi-tenant solution for Azure influences certain aspects of our implementation. However, these configurations may not be necessary for your application.
We have made trade-offs to maintain our multi-tenant solution, such as sacrificing standardized emails from drop-downs and MS Teams integration.

Step 1: Data Harvesting

The data we gather from users includes organizational, technical, financial and compliance information. At runtime, the Power App runs two Power Automate flows. First, a flow called "getCompanies" fetches the banks or companies the active user has access to; this data is used to display the available choices in a drop-down view. A second flow, "getInvoiceSections", runs to display the invoice sections available to the user.

Organizational information
Technical information (part 1)
Technical information (part 2)
Financial information
Compliance information

Validation

Validation is integral to ensuring the reliability of user input. We have explored various validation methods, including standardized choices, regex validation for email addresses, and other error-handling techniques. This is to make sure that when a user submits the form, we can run the provisioning pipeline automatically, without human intervention.

After filling out the form, users navigate to the review screen, where they can submit their order. A successful submission takes the user to a confirmation screen, and the form is reset to prepare for the next order.

Review Screen

Step 2: On form submission

Submitting a form triggers updates in the associated SharePoint list, setting the "main" Power Automate flow in motion. This flow orchestrates various tasks, from API calls to updating files in Azure DevOps and running pipelines. We dive deeper into the Power Automate flows in the next section.

onFormSubmit

The "onFormSubmit" flow is the central flow in this automation process, serving as the main orchestrator. Its primary responsibility is to collect information from the SharePoint list, create and manage variables, and trigger other Power Automate flows by passing these variables as arguments.
Additionally, this flow plays a crucial role in constructing the JSON object for each landing zone, relying on output data from the other connected flows. The key variables generated within this flow are the landing zone name, the landing zone JSON object, and the repository ID.

getBillingInfo

The "getBillingInfo" flow acquires the billing profile and invoice section information essential for successful landing zone provisioning. It accepts two input arguments, Company Name and Invoice Section Name, both supplied through user input in the Power App. Its output comprises the Invoice Section ID, Billing Profile ID and Billing Profile Name. To retrieve this information, the flow calls the Microsoft API and retrieves data on all invoice sections associated with the given company name. You can refer to the Microsoft API documentation for further details: Microsoft API for Invoice Sections.

updateJSON

The "updateJSON" flow is responsible for updating the JSON file for each landing zone in Azure DevOps. It requires three input arguments: the landing zone JSON object, the repository ID and the landing zone name. It uses a two-step process: first, a GET request retrieves information about the target repository using the provided repository ID; then a POST request appends the new landing zone JSON object to the designated folder within the repository.

JSON object sent to Azure DevOps Repository
The Complete Landing Zone JSON object

runPipelines

The "runPipelines" flow is the final stage in the process, responsible for initiating the two critical pipelines that provision a landing zone. These pipelines need to know which landing zone they should set up.
So, the only argument we need to pass is the name of the landing zone. The operation starts with a POST request that triggers the first pipeline, with the target subscription set to the landing zone name. The flow then pauses and awaits the completion of the pipeline run. To monitor and control this waiting period, a "do until" loop is implemented: within it, the flow periodically makes GET requests to Azure DevOps to fetch information about the ongoing pipeline run. The loop persists until the pipeline run has successfully completed.

POST Request to trigger the Pipeline
Monitor the running Pipeline
Stages of Landing Zone Creation

Once the first pipeline run has finished, the flow repeats this process for the second pipeline, ensuring the sequential execution of both. This gives a step-by-step provisioning of the landing zone, with precise control over the progress and completion of each stage.

Power Automate flows
Application Architecture

Throughout the entire landing zone creation process, both the requester and the platform team are notified with updates. As illustrated in the image below, the requester, the landing zone owner and team Azure are notified when the order is submitted through the Power App. When the landing zone is ready for use, the requester and owner receive a final email.

Landing Zone Creation Flow

Requirements

Licenses are necessary for Azure DevOps, Power Automate, Power Apps, SharePoint and Azure Active Directory. Keep in mind that not all users may have premium Power Platform access, which could impact the functionality available to them.

In Summary

By harnessing the strengths of Microsoft's low-code tools, specifically Power Apps and Power Automate, we've seamlessly integrated front-end development with robust back-end logic.
This guide offers a clear roadmap for automating Azure landing zone creation through the Power Platform. While we provide implementation specifics tailored to our requirements, it's crucial to recognize that customization may be necessary in other application scenarios.

Thanks to Matthew Greenham for editing.

Further Reading

In the third and final post in this series we will delve deeper into our landing zone pipelines, and how we provision across all tenants in an effective way.

Part 1:

Azure Landing Zone Vending — Part 2 was originally published in SpareBank 1 Utvikling on Medium, where people are continuing the conversation by highlighting and responding to this story.

By Salam Haider Hassan

Azure Landing Zone Vending — Part 1

Introduction

This blog post is the first in a 3-part series where we explain the concept of Azure landing zone vending, and how we have implemented it in SpareBank 1.

Part 1: The Holy Grail?

If you have been working with Azure as long as I have (Classic portal, anyone!?), you will remember that for a long time, Microsoft's recommended architecture was an all-encompassing Azure subscription per environment (dev, test, prod and so on). The logical workload boundary and unit of scale was at the resource group level. This was doable, but it came with a number of inherent problems that traditionally had to be fixed by the platform team, access control being maybe the most obvious. However, in 2020 Microsoft launched the first full Cloud Adoption Framework guidelines, and this, together with the Azure Landing Zones reference architecture (originally called Enterprise Scale Landing Zones) that reached its current version in 2021, completely changed all this. The best practice became to use the Azure subscription itself as the workload boundary and unit of scale, and in the context of this model an Azure subscription was renamed a landing zone.

This brings numerous advantages and simplifies things, certainly in small environments... but for larger organizations it potentially increases the danger of subscription sprawl and an inability to keep control and oversight. Like most things with the cloud, the only way to control this over time is to do everything in code, and to automate and standardize as much as possible. These things are great, of course, but if you're really going to push adoption, lower the barriers to entry and open up Azure for everyone, then you need to take things up a level.

The Vending Machine

The vending machine concept is the next level (and holy grail?)
for Azure platform teams, because it means the creation of landing zones becomes fully automated, just like a vending machine, and as such adheres to the inherent characteristics of one: self-service, automated, quick, convenient and always available. The first known reference to a vending machine is nearly 2000 years old, so this is not a new concept, but it is new in the world of Azure landing zones. Microsoft have only quite recently (March 2023) included it in the Cloud Adoption Framework under platform automation and DevOps.

Why do this?

As cool as this concept is, it's not straightforward and needs a lot of engineering to create, and not least to keep updated over time. However, there are some key and very sizable benefits to this approach:

Improved speed, and time to market

Twenty minutes from ordering a landing zone to availability in the portal. Fully built in code, fully automated, and ready to use. That has always been our goal. It's an ambitious target, but we are edging ever closer. We believe this will drive innovation and time to market. We had a meeting with a large multinational company a while ago, and they had somehow created such a complex manual structure that it took many weeks before a landing zone was provisioned!! Not exactly agile.

Streamlined process

With such a quick and simple process, it's easy to sell this in as a first step for any team that needs to start innovating and creating in Azure. A single place to order landing zones, which are then provisioned automatically, means reduced friction and fewer on-boarding problems. We also have a specific requirement to deliver to different banks in their own tenants, and these banks can have their own processes and routines. As such, a complete solution, from front end to landing zone delivery, simplifies the experience for everyone.

Full automation is efficient

Once automation is in place, this process becomes extremely efficient.
Everyone benefits from this; The cloud platform team, the developers as well as security and compliance teams. This then frees up time that can be used on more value added tasks.Quality and controlAutomation is a no-brainer when it comes to improved quality. People make mistakes, whilst an automated process is correct everytime (assuming it is engineered correctly!) By using an automated process governance and compliance needs are easier to meet too. No settings or steps are forgotten in the provisioning process and all configuration is pre-defined and approved in advance.The NegativesThere is, of course, a question of whether this unfettered democratization of landing Zone creation is a good idea in an enterprise setting. You can hear the CFO now: “What!? Anyone can just get a landing zone and start creating resources and spending money!?”Well of course there should be controls in place (FinOps and people processes) to handle this problem. And it’s very possible to build in guard rails or approval processes if required. But the point is that you don’t want the provisioning of a landing zone to slow down innovation… the cloud team shouldn’t be the weakest link in the chain.There is also the question of cost and complexity in creating such a system. If you’re only creating a few landing zones per year, the the effort to establish a vending solution probably won’t be worth the investment.In SummaryWe have developed our own vending solution based on the needs we have in the SpareBank1 Alliance. Our solution is more complex than a standard vending solution would typically need to be, not least because the SpareBank 1 Alliance is multi-tenant, and we have a distributed ownership and operations model. 
We need to deliver Landing Zones across the whole environment (It’s easiest to think of us as a Cloud service provider, providing shared services and economies of scale), which means that all elements of the vending solution need to take this into account.The complexity is huge, and we a have used a good amount of time and energy on this. However, like most organisations at the moment, there is a ramping up of a migration and modernising to the cloud, and we expect Landing Zones to be provisioned regularly and often. With this in mind, and together with the demanding technical landscape, we feel that this was not a solution that was nice to have, but one that is absolutely necessary for SpareBank 1.Further ReadingIn the next two blog posts you will see how we have achieved the vending machine using Microsoft native tooling. The first post will go into the no code / low code front end, which data we need to collect, how we do that and which tools we have used to achieve this. The final post will lift the lid on our Landing Zone pipelines, and how we provision accross all tenants in an effective way.Watch this space…Azure Landing Zone Vending — Part 1 was originally published in SpareBank 1 Utvikling on Medium, where people are continuing the conversation by highlighting and responding to this story....

By Matthew Greenham

300 applications upgraded to Java 17 in one commit. The day before vacation. The power of monorepo!

It's been six years since we decided to try a monorepo approach for our microservices. Today we have over 300 applications in this monorepo. The applications are generated from templates, and we refer to them as our "golden path" applications. All applications use Spring Boot and React and live in the same Git repository.

On a Friday afternoon, the day before the Easter holiday, we upgraded all monorepo applications from Java 11 to Java 17. This was done in one commit, and at rest. The next day I was off to the Canary Islands, and the rest of the team to the mountains. In this article I will share why this major change to more than 300 applications did not turn into a project in itself.

15000 tests to the rescue

But why not wait until after the holiday? The short answer is: why wait when you can do it today? All our applications have unit and integration tests; across the monorepo as a whole we have more than 15000 tests. When all of these run green, we are pretty confident that a change meets the required quality level. It is also important to note that we do not automatically deploy the applications to production. Once we start doing that, we might not make such a change the day before a vacation.

Sharing code is sharing knowledge. Local improvements in one team become global improvements for the whole company if this is done right. This is also one of the main benefits of using a monorepo.

One team to boost productivity

But let us travel to the Canary Islands. The island is lovely, especially because of the climate; the temperature at Easter is perfect for us Norwegians after a long, dark and cold winter. One thing you don't want to do is ruin a long-awaited trip because of stress at work. Upgrading 300 applications from Java 11 to Java 17 can cause such stress. To optimize developer efficiency at SpareBank 1, we have established a Developer Experience (DevEx) team. I am a part of this team, and we took on this upgrade task.
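Because every golden path application inherits the same shared build setup, an upgrade like this is in essence a one-place change. As a minimal sketch (the coordinates and file layout here are hypothetical, not our actual build files), a shared parent POM in the monorepo could pin the Java version for all 300 applications:

```xml
<!-- parent/pom.xml: a shared parent inherited by every application module.
     Bumping this one property moves the whole monorepo from Java 11 to 17. -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>no.sb1.example</groupId>
  <artifactId>golden-path-parent</artifactId>
  <version>1.0.0</version>
  <packaging>pom</packaging>
  <properties>
    <!-- was 11; changed to 17 in the upgrade commit -->
    <maven.compiler.release>17</maven.compiler.release>
  </properties>
</project>
```

With a setup like this, the upgrade commit is mostly the version bump itself, plus fixes for whatever the test suite flags afterwards.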
The challenges with such an upgrade are much the same for all of our 300 applications, so the DevEx team can build up expert knowledge about it. An alternative approach would be to let every team do the upgrade themselves. That would certainly work too, but the time spent on this task would come at the expense of building new features (opportunity cost). And since teams have different goals, the upgrade might not be prioritized everywhere, so it would have taken longer to finish.

Handling different technologies has a cost, and this cost is often invisible. Having a standardized way of making applications makes it easy to switch teams, share code and knowledge, and build tooling. Tooling includes local build scripts and a shared build pipeline. Not having to handle different Java versions is just one benefit of marching in step.

Dependencies can cause frustrations

Our applications share code at compile time. This shared code is what we refer to as our libraries. The library code is not versioned: every application is "on head", using the latest library code. This is an important principle for us, and a common technique when working with monorepos. It means that every change in the library code must be compatible with every application in the monorepo. It is not always obvious why this is a good thing. For a developer who wants their feature in production as fast as possible, it may seem daunting to have to change code in other teams' applications. But one thing we have learned over the last 15 years is that this is best for SpareBank 1 as a whole.

When we used multirepos with versioned shared dependencies, we ended up with what is often referred to as "version hell". Application 1 depends on Library A. Library A depends on Library B. If you needed to make a change in Library B and get it into production fast, you had to build Library B. Then you would bump the version in Library A and, as you probably understand, build that library too. It did not stop there: every application that used Library A had to be bumped and built. But wait, some of the applications are on an old version of Library A, and nobody remembers how to refactor them to make them work with the latest Library A. And if that is not enough, you have to make sure your third-party dependencies are in sync. What happens then? The world doesn't stop spinning. The trip to the Canary Islands is tomorrow. And here you are, in the middle of "dependency hell".

Versioning in a multirepo, with a wait period for each pull request

In a monorepo, a single pull request shows all the changes needed for your feature, including any changes to shared library code. This makes reviews easier, because you avoid also having to review the library change in a separate repository.

Local discoveries are converted into global improvements

All code in the monorepo is easily accessible to all developers, with quick access from their IDE. They can copy it, be inspired by it, or refactor it into library code. This is knowledge sharing. One good example: when one team switched from Webpack to Vite, it did not take many weeks before several other teams did the same. Another example is me coding a new feature: the probability is high that somebody else has already coded something quite similar.

Deleting code increases your velocity

Developers are problem solvers, and coding is one tool used to solve problems. Maintaining code has a cost, and often this cost is higher than that of writing the code. To save money we need to be able to delete code. In a monorepo where all code is on head, your IDE can show you all dependencies, and should you miss one, the build will break for the applications affected by the change. Deleting code also matters because reducing mass improves the speed of the development teams. When we worked multi-repo we rarely deleted code; we just deprecated it. Today we actually delete it. Over time this is likely to have a significant impact on the mass of code we need to take care of.

Peace of mind and happiness

So, how can we go on vacation with no stress after doing a big refactoring the same day the vacation starts? My answer is the confidence you get when all applications have been built and all tests have passed. Our pipeline also deploys all applications to our Kubernetes test environment, which gives us confidence that the images have been built correctly and that the configuration seems OK. I say "seems OK" because, as we don't automatically deploy to production (yet), we cannot be 100% sure. But sure enough that we can travel with our minds in the right place.

Want to try monorepo?

Monorepos are not for everyone. You need a mature development organization that is able to build the necessary tools and that understands the value of a monorepo. It is not obvious to everyone that it is more efficient for the company as a whole if a platform team makes changes to other teams' applications. Most people understand the value of having all applications always use the latest shared code, but not everyone sees why it often does not work to let teams take care of this themselves. Teams may have other priorities than updating to the latest version of the shared code, and the cognitive load teams are exposed to increases when they must understand every change in infrastructure and shared code.

If you still want to try the monorepo way, I wish you good luck! Feel free to contact me if you need someone to talk to before or during your monorepo journey.

References

Velocity defeats itself. Get acceleration instead
med Git og Maven — hvordan lære gamle hunder nye triks (in Norwegian)
is a Monorepo, Really?

300 applications upgraded to Java 17 in one commit. The day before vacation. The power of monorepo!
was originally published in SpareBank 1 Utvikling on Medium, where people are continuing the conversation by highlighting and responding to this story....

By Stian Conradsen

Do values add value?

In the autumn of 2018 we got our own developer department in SpareBank 1 Utvikling. When we got the opportunity to create our own department, it was important to us to be open about how salaries are set. In SpareBank 1 Utvikling there are regulated salary bands. Within these bands there is also room for personal increments. But what should you do to earn these increments? We wanted to make this open and visible to everyone.

Good atmosphere at the office.

For many years we have run a model where developers have personnel responsibility for developers. We therefore already knew how we carry out salary assessments in practice and what we emphasize. It was all about making it visible and easy to understand. We drew up a proposal listing the values, qualities and activities we appreciate. We then held working sessions with the whole department to adjust the content into something we could all agree on, and published it on our intranet. Now it was finally clear to everyone what influenced the personal salary increments.

Corona

Four months after we had put our values in place, Corona arrived. That made our culture building harder, both because we had to learn to work from home, and because we actually were working from home. At the same time we kept hiring, so we grew. Our values were important and were used actively in salary assessments. We also brought them up when it was appropriate to refer to them, but we did not do a good enough job of making them visible in everyday remote work.

Better Together

During Corona, SpareBank 1 Utvikling defined the values that apply to everyone who works here. They were more general than the developer values we had drawn up, but were about the same things. They were, and still are:

We wish each other well
We seize new opportunities
We cultivate and share competence
We deliver best when everyone feels valuable

and go under the collective name Bedre Sammen (Better Together).

Our Bedre Sammen poster.

Do the values still hold up?

When we finally emerged from the pandemic last autumn, we felt we needed to refresh our developer culture and values work. Many new people had started with us during the period, and we knew we had not done enough to make our values visible while we were working from home. We brought in five of the developers who had started during the Corona period, and let them work out changes and suggestions for improvements to the values. The result of this work was a substantial simplification. We saw that we could use the material we had produced to answer two questions for each value or quality:

Why? (*)
How do I do it? (**)

Here are our values and qualities:

Be open and understanding
Work together
Share your knowledge and your time with others
Be curious
Make things better all the time
Think holistically
Show initiative

We regularly bring up one or more of these values at our department meetings. We talk about why they are important to us, and not least how we, through concrete activities, can experience the power of the values ourselves. Our values help create the developer culture we want.

(*) The book Start With Why, on why it is important to start with why.
(**) The book Switch, on why it is important to be concrete about how.

Do values add value? was originally published in SpareBank 1 Utvikling on Medium, where people are continuing the conversation by highlighting and responding to this story....

By Vidar Moe

If technology is to be used by everyone, it must be developed by everyone

In SpareBank 1 Utvikling we take our responsibility seriously: our mission is to create the best solutions for all of our one million customers. One million people with different backgrounds and competencies, and therefore very different needs.

A tricky task, some might think. But I believe SpareBank 1 is well on its way, and that one of the keys lies in diversity. A development environment with broad diversity rewards us with innovation and with solutions that hit the mark better and are more relevant. On the occasion of International Women's Day, I particularly want to highlight the need for more female developers.

All relevant perspectives

Today there is a large majority of male developers in Norway. Such a markedly homogeneous group is poorly suited to developing the best technological solutions. That is why we in SpareBank 1 Utvikling say: if technology is to be used by everyone, it must be developed by everyone. The systems and solutions we develop should help make society and our everyday lives more efficient, profitable, sustainable, innovative, simpler, smarter and fairer for everyone. To achieve this, we need all relevant perspectives.

SpareBank 1 Utvikling is a gender-equal company. We naturally have equal pay, we have many female employees, and the share of female leaders in the company reflects the overall gender distribution. But we genuinely want to attract more female developers as well.

A strategic, long-term effort

So we have not yet reached our ambitions, and we are not going to beat our chests quite yet. We simply have to take active steps to increase the recruitment of female developers, and we work on this strategically and with a long-term view. The goal is to attract a group of people who together have the best possible conditions for creating good solutions for everyone. To reach that goal we have, among other things, a diversity forum in SpareBank 1 Utvikling, and we help organize Girl Tech Fest, a technology festival for girls in the 5th grade. We believe these are good steps on the way to developing even better technology. We are also always looking for suggestions on how to build an even more diverse development environment.

A call to everyone

So, on International Women's Day itself, I would like to make an appeal: let us speak positively about technology subjects in conversations with young girls, young people heading into their studies, and others who have the perspectives we need, precisely so that we can create the very best solutions, for absolutely everyone. Happy International Women's Day!

If technology is to be used by everyone, it must be developed by everyone was originally published in SpareBank 1 Utvikling on Medium, where people are continuing the conversation by highlighting and responding to this story....

By Espen Kjølberg

Multi-tenant and hybrid DNS with Azure Private DNS

This article covers how the Azure platform team handles registration and resolution of Azure Private Endpoints in a multi-tenant and hybrid DNS setup. If you find yourself in a situation where you need to handle multi-tenant Domain Name System (DNS) together with an on-premises environment, look no further. In this article I describe how we did multi-tenant and hybrid DNS at SpareBank 1, one of Norway's largest financial institutions.

This article is one of several we are writing about our brand new Azure platform at SpareBank 1, which we call Eunomia. In simple terms, we are creating a multi-tenant platform to fit the needs of the alliance. We did a presentation at Ignite 2022, watch it here: Spotlight on Norway | CLC08 — YouTube

Short background introduction

SpareBank 1 is an alliance of 13 banks and over 40 product companies. As individual legal entities, they choose for themselves whether to collaborate in key areas such as IT operations and system development. A large number of these banks and companies share an on-premises Active Directory environment. On-premises AD uses AD Connect to synchronise users and groups to their own Azure AD tenant.

The challenge

I'm not going into why there are 13 tenants with workloads running in each of them, but it means we have a requirement for cross-tenant and hybrid DNS resolution. The challenge is to support DNS across the whole architecture. DNS resolution needs to work in each tenant, from on-premises to the Azure workloads (Key Vault, Storage, Web Apps etc.) running in each tenant, as well as to internal applications on-premises. This wouldn't be a challenge if we could leverage public DNS for everything, but we need to keep everything on a private network. Where applicable, developers must use Azure Private Link on the Azure PaaS services that support it. This is a big challenge!

Requirements:

Resolve private endpoint FQDNs in any tenant, from any tenant and from on-premises
Automate registration of private endpoint FQDNs into Azure Private DNS Zones

Take a look at this figure to understand the challenge a bit more.

Single tenant DNS

As you may understand from the figure above, DNS in this setting is a bit challenging. But let's first look at how we would do DNS in a single tenant. Azure has a PaaS service called Azure Private DNS Zones, which is perfect for our use case. We can create the DNS zones we need and add records that resolve to the IPs of our workloads. Using Azure Policy, we can automatically register private endpoint Fully Qualified Domain Names (FQDNs): developers create their private endpoints, and after a couple of minutes the FQDNs are automatically registered in the associated private DNS zone. The figure below shows a simple architecture for DNS in a single tenant.

Single tenant automatic registration of private endpoint FQDNs

Custom DNS on the vnets points to the central DNS servers hosted in the HUB vnet. The DNS servers (in the HUB vnet) forward all DNS requests to Azure's own DNS service in their vnet, and the Azure recursive resolver takes each DNS request and tries to resolve it. Since the Azure Private DNS Zones are linked to the HUB vnet, the resolver can look up records in those zones. The magic sauce here is the Azure recursive resolver, which will look in all available sources for the record.

The automatic registration of a private endpoint FQDN is accomplished with Azure Policy. The policy targets all resources of type Microsoft.Network/privateEndpoints and deploys a resource of type Microsoft.Network/privateEndpoints/privateDnsZoneGroups on the private endpoint. Microsoft has several resources available for creating a deployment like this. See sources here: Private Link and DNS integration at scale — Cloud Adoption Framework | Microsoft Learn

In the next section this architecture is expanded to work across multiple tenants together with an on-premises environment.

Multi-tenant and hybrid DNS

In this section I will explain in detail how we did multi-tenant and hybrid DNS at SpareBank 1.

HUB and spoke tenants

You have probably heard of the hub and spoke topology in Azure networking. We expand on that by introducing the concept of a hub tenant and spoke tenants. In the maze of all our tenants there is only one HUB-tenant; all other tenants are spoke-tenants. The HUB-tenant centralizes services that can be consumed by the spoke tenants, such as DNS.

Azure Private DNS Zones

We use Azure Private DNS Zones to host records for all of our private endpoints, and we deploy all the zones we need for the PaaS services we use. In the figure below you can see we have a subscription called core-con; this is where we host all connectivity services, such as Azure Firewall, Azure vwan, DNS, and VPN to on-premises and third-party tenants. These workloads are only necessary in the HUB-tenant. Vnets in spoke tenants are peered to the HUB vnet. We host the Azure Private DNS Zones in the resource group hub-core-con-pdns-nea-rg. The acronyms stand for: hub — core — connectivity — private dns — norway east — resource group. The private DNS zones are vnet-linked to our virtual network hub-core-con-net-nea-vnet in resource group hub-core-con-net-nea-rg.

Private Link and DNS registration in a multi-tenant environment

In this section I'll go through how we manage the lifecycle of DNS records for private endpoints. The lifecycle must ensure that records are automatically created in the matching private DNS zone for the service being created.
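Concretely, the per-endpoint resource that this automation has to create can be sketched in Bicep roughly as follows (the endpoint name here is illustrative, not our actual configuration; the zone group shape matches the policy template shown later in this article):

```bicep
// Illustrative: attach an existing private endpoint to a central private DNS zone.
// The privateDnsZoneGroups child resource is what makes Azure manage the
// endpoint's A-records in that zone automatically.
param privateDnsZoneId string   // resource ID of e.g. a privatelink.* zone

resource pe 'Microsoft.Network/privateEndpoints@2022-05-01' existing = {
  name: 'example-storage-pe'    // hypothetical private endpoint name
}

resource zoneGroup 'Microsoft.Network/privateEndpoints/privateDnsZoneGroups@2022-05-01' = {
  parent: pe
  name: 'deployedByPolicy'
  properties: {
    privateDnsZoneConfigs: [
      {
        name: 'privateDnsZone'
        properties: {
          privateDnsZoneId: privateDnsZoneId
        }
      }
    ]
  }
}
```

In a single tenant, Azure Policy deploys exactly this kind of resource for every matching private endpoint; the multi-tenant twist described next is about where that deployment is allowed to write.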
Since our Azure Private DNS Zones live in the HUB-tenant, we need a way to write the private endpoint zone configurations from the spoke tenants into our centralised private DNS zones. Writing a private endpoint zone configuration to a private DNS zone is fairly straightforward in a single tenant setup; we did that in the single tenant section above by leveraging Azure Policy to do the work for us. Take a look at the figure below to get an idea of what we want to accomplish, and keep in mind how we leveraged Azure Policy earlier to write the DNS zone configuration of a private endpoint to a private DNS zone. In the single tenant design, the policy assignment deploys the zone configuration in the same tenant. In this multi-tenant design we need each spoke tenant to do the same as a single tenant, but instead of deploying to private DNS zones in its own tenant, it must deploy to our centralised private DNS zones in the HUB-tenant.

Reverse Azure Lighthouse concept

You have probably heard about Azure Lighthouse. It allows an identity in a managing tenant to have Azure role-based access control (RBAC) permissions in a delegated tenant. So what if we use this and let all the spoke tenants become managing tenants for our HUB-tenant, but with limited delegated permissions? For an identity to write zone configuration to a private DNS zone, it needs the RBAC role Private DNS Zone Contributor. We can create a managed identity in each spoke tenant and, using our reverse Lighthouse concept, assign that identity Private DNS Zone Contributor on the resource group in the HUB-tenant where our private DNS zones live.
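As a rough sketch of such a delegation (see Microsoft's Azure Lighthouse samples for the exact template shape — the names and IDs below are placeholders, not our actual configuration), the registration definition deployed in the HUB-tenant might look like this in Bicep:

```bicep
// Rough sketch: Lighthouse registration definition in the HUB-tenant that lets
// a managed identity in a spoke tenant write to our private DNS zones.
targetScope = 'subscription'

param spokeTenantId string             // spoke tenant acting as managing tenant
param spokeIdentityPrincipalId string  // managed identity living in the spoke tenant

resource dnsDelegation 'Microsoft.ManagedServices/registrationDefinitions@2022-10-01' = {
  name: guid('pdns-delegation', spokeTenantId)
  properties: {
    registrationDefinitionName: 'Spoke private DNS registration'
    managedByTenantId: spokeTenantId
    authorizations: [
      {
        principalId: spokeIdentityPrincipalId
        principalIdDisplayName: 'pdns-zone-writer'
        // GUID of the built-in "Private DNS Zone Contributor" role
        roleDefinitionId: 'b12aa53e-6015-4669-85d0-8515ebb3ae7f'
      }
    ]
  }
}
```

A matching Microsoft.ManagedServices/registrationAssignments resource is then deployed at the resource group holding the zones, which scopes the delegation down to just that group.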
The figure below shows the reverse Lighthouse concept.

Reverse Lighthouse concept

The last, but most important, part is how we can now leverage Azure Policy in each spoke tenant to automatically register all Azure private endpoint FQDNs in the HUB private DNS zones.

Azure Policy — deploy if not exists — cross tenant

We deploy our "Register private dns" Azure Policy definition to each spoke tenant and create assignments for each PaaS resource/group id/region.

PolicyRule:

"policyRule": {
  "if": {
    "allOf": [
      {
        "field": "type",
        "equals": "Microsoft.Network/privateEndpoints"
      },
      {
        "count": {
          "field": "Microsoft.Network/privateEndpoints/privateLinkServiceConnections[*]",
          "where": {
            "allOf": [
              {
                "field": "Microsoft.Network/privateEndpoints/privateLinkServiceConnections[*].privateLinkServiceId",
                "contains": "[parameters('privateLinkServiceId')]"
              },
              {
                "field": "Microsoft.Network/privateEndpoints/privateLinkServiceConnections[*].groupIds[*]",
                "equals": "[parameters('privateEndpointGroupId')]"
              }
            ]
          }
        },
        "greaterOrEquals": 1
      }
    ]
  },

The policy deploys if not exists (DINE) a resource of type Microsoft.Network/privateEndpoints/privateDnsZoneGroups.
"resources": [
  {
    "name": "[concat(parameters('privateEndpointName'), '/deployedByPolicy')]",
    "type": "Microsoft.Network/privateEndpoints/privateDnsZoneGroups",
    "apiVersion": "2022-05-01",
    "location": "[parameters('location')]",
    "properties": {
      "privateDnsZoneConfigs": [
        {
          "name": "privateDnsZone",
          "properties": {
            "privateDnsZoneId": "[parameters('privateDnsZoneId')]"
          }
        }
      ]
    }
  }
]

Because the managed identity in each spoke tenant has the Private DNS Zone Contributor RBAC permission in the HUB-tenant, we only need to reference the resource ID of the Azure Private DNS Zone in the policy assignment.

Screenshot of an Azure Policy assignment configuration

The figure below shows an overview of how DNS is configured on-premises, in spoke vnets (cross-tenant) and on the HUB DNS server. When setting up conditional forwarders from on-premises to the DNS servers in Azure, I recommend starting with just the few zones you are currently using. Don't configure the whole list of public DNS zones that Microsoft lists here: Azure Private Endpoint DNS configuration | Microsoft Learn

Closing Notes

With this configuration the benefits of the cloud are clear. Set up any PaaS service with Private Link in any of our multiple tenants, and we have full automation (including lifecycle management) for that private endpoint's DNS records. Developers do not need to think about it when creating their systems, and the overhead for the Azure platform team is very low. This works brilliantly for us!

During the design and deployment of this, the Azure DNS Private Resolver was still in preview. We are looking into moving away from VMs to that PaaS solution, which will contribute greatly to a more resilient setup.

We've had this in production for a couple of months now, and we are experiencing a couple of challenges:

Azure Static Web Apps has a partition ID in its private DNS zone name, and it is not documented which partition IDs these can be. This makes it difficult to pre-provision the private DNS zones and to create policy assignments that target the correct private DNS zone. See issues #101133 and #99388.

Azure Machine Learning workspaces create several records utilizing two private DNS zones. Our Azure Policy only handles one of the zones, leaving us to handle the second manually. With some additional work on the policy I'm sure it's possible to make it work. We have published a GitHub issue on it here: #99388

Multi-tenant and hybrid DNS with Azure Private DNS was originally published in SpareBank 1 Utvikling on Medium, where people are continuing the conversation by highlighting and responding to this story....

By Joakim Ellestad

Coaching leadership with five small questions

I SpareBank 1 Utvikling bruker vi gjerne strukturert problemløsning når en utfordring ikke er rett frem å løse. A3 er en slik problemløsningsmetode som hjelper oss til å få felles innsikt i problemet før vi jobber med løsning, og det har gitt gunstige resultater hos oss. På ledernivå har vi imidlertid erfart en utfordring med A3-arbeid. I en hektisk hverdag kan det være krevende for lederen å følge opp forbedringsinitiativ man selv har prioritert oppstart av. Varierende grad av dialog og forankring underveis blir da et hinder for fart og kvalitet i problemløsningen, der A3-teamet kan ende opp med å kun sporadisk rapportere status til lederen. Vi bestemte oss for å teste om dette mønsteret kunne endres for skape bedre flyt i A3-arbeidet. Ambisjonen var å stimulere til økt lederinvolvering mens forbedringsarbeid pågår og unngå for stor avstand ved at man går hver til sitt etter oppstarten. Valget falt på et eksperiment der A3-problemløsning ble kombinert med samtaleverktøyet Coaching Kata.Hva er Coaching Kata?En kata er en sekvens av steg som repeteres mange ganger, til mønsteret er automatisert og kan utføres som en enhet, uten å måtte tenke over hvert enkelt steg. Kata er kjent fra bl.a. kampsport og musikkøvelser.“the karate kids” by Orly Orlyson is licensed under CC BY 2.0.Mike Rother beskriver i boken Toyota Kata to slike mønstre som hører sammen: Improvement Kata og Coaching Kata.Improvement Kata er en forbedringsmetodikk med fire steg: 1) Forstå målet, 2) få oversikt over nåsituasjonen, 3) sett et kortsiktig, tidfestet mål (target condition), og 4) utfør eksperimenter for å fjerne hindringer og bevege deg i retning av target condition. Når target condition er nådd, kan man reevaluere nåsituasjonen og sette et nytt target condition. Det er viktig å innse at det finnes en grense for kunnskapen vi har i dag, vi kan bare se et lite stykke fram. Eksperimentene gjør at vi lærer, og gradvis ser og forstår mer av veien vi må gå for å komme frem til målet. 
Using this pattern trains the scientific method of problem solving.

The Coaching Kata is the leader's counterpart to the Improvement Kata. It consists of a small set of questions the leader uses to help the person practicing the Improvement Kata, reinforcing the pattern of scientific thinking. The questions are typically printed on a small card, with the main questions on one side and the reflection questions on the other.

Source: Toyota Kata Practice Guide

The leader starts by asking questions 1 and 2, then turns the card over and goes through the four reflection questions, before turning the card back to go through the rest. This pattern will feel unnatural at first, both for the leader and for the person being coached. But by sticking to the pattern (the kata) enough times, the flow will eventually become natural for both parties. Once the pattern is automated, you can start adapting it to the situation from one session to the next, and get even more value out of these short conversations.

“Those who have seen The Karate Kid have seen kata in practice; those who have watched a jazz band play have seen the results.”
- Jeffrey Liker, How the Toyota Way and Toyota Kata Fit Together

Coaching Kata on A3 work

In an A3 problem-solving effort there are several planning steps before you arrive at the description of a desired future state (target condition): you must understand the problem well, map the current situation, narrow the problem down to the part that will be solved first, and map root causes, before you are ready to look at proposed solutions.

Source: Toyota Kata Practice Guide

During the planning phase there are some fixed interim goals that answer Coaching Kata question no. 1 (“What is the interim goal you are working toward now?”):

Agreement on the wording of the “observed problem”, i.e. what is the problem, who is it a problem for, what are its consequences, and how big is it?

Agreement on how the problem should be narrowed down and on the improvement target, i.e. which part of the problem are we going to solve, and how much improvement do we want to achieve?

Agreement on the hypothesis for the future state, i.e. what is the first target condition we will experiment toward?

What counts as a “next step” or experiment varies with the interim goal being worked toward. Examples we saw during the work:

A mapping job, e.g. measuring the size of the problem, process mapping, root cause analysis.

Administrative tasks, e.g. finding team members, scheduling workshops, preparations.

Testing a new solution, including planning, execution and measuring results.

All such steps can be viewed as experiments with something to learn, either about the problem being solved or about the problem-solving methodology.

Setup and execution

Three participants were coached in the experiment. These were employees who were each going to lead a problem-solving effort together with an A3 team. Common to the teams was that the improvement work focused on how they could make their development processes more efficient. Along the way, one of the participants had to deprioritize the improvement work, and therefore also withdrew from the experiment. The coach in the Coaching Kata conversations was the participants' leader and the owner of the problems to be solved. The authors of this article acted as 2nd coach, observing the conversations and giving the coach feedback along the way.

The meetings were held once a week, with 20 minutes per participant. They were scheduled back to back, so the coach had set aside one hour per week in their calendar for the three participants. The meetings were held on Teams because the participants were in different locations. Had we run physical meetings, we would have wanted to gather around a physical board; on Teams the participants instead presented their work on a digital board or in PowerPoint.

Source: Toyota Kata Practice Guide

What did we learn?

During the five-month experiment, three retrospectives were held.
Below we present the clearest learning points, divided into positive experiences and opportunities for improvement.

Overall, the participants experienced Coaching Kata as an efficient meeting point with high value. The conversations took between 7 and 20 minutes, and created useful dialogue between leader/coach and employee. Through continuous involvement, the leader got closer to the problem the team was working on. The Coaching Kata structure also contributed to flow in the conversation and to largely talking about the right things: what have we learned since last time, and what will we do going forward to learn more? The arena was not experienced as a classic status meeting. Instead, the frequent meetings enabled dialogue and feedback that gave the employees a “nudge for progress” that they experienced as positive. At the same time, the structure of the conversation let the leader practice and reinforce a pattern of coaching leadership. Visualizing the work was considered very useful for keeping the conversation focused.

«As an A3 owner, conversations like these give far more involvement than usual, with minimal use of time»
- Leader and coach in the experiment

One point the participants noticed early on was how to show up properly prepared for the conversations. The quality and length of the first iterations varied quite a bit. With a clearer focus on why visualizing the work mattered, combined with a stronger commitment to showing up prepared, the quality of the meetings increased while the time spent went down. We now believe a 15-minute calendar invitation should be enough.

Another challenge at the start of the experiment was that sparring with the 2nd coach on A3-technical questions got in the way of completing the Coaching Kata cycle. To avert this problem, the meeting agenda was updated with an explicit order for the content, with Coaching Kata first on the agenda, and A3-technical sparring possible if time remained.
With this move, it became clear to everyone involved that the Coaching Kata questions had priority.

Excerpt from the retrospective board for the experiment

A third obstacle during the first phase of the experiment was that the 2nd coach broke out of the observer role to take an active part in the conversation. Once we became aware of this challenge, the 2nd coach focused more on active listening, became more conscious of their own eagerness to participate, and left the conversation to the other participants. Placing the A3-technical sparring after the Coaching Kata cycle also made it easier to respect the division of roles.

The weekly rhythm of the conversations worked well for the participants. The need did vary along the way, though, as there had not always been enough progress to feel worth discussing. At other times, having the option of several conversations in one week felt like it could be useful. Although cadence was discussed during the trial, no adjustments were made to the setup.

Observations from the 2nd coach

Beyond the experiences described above, we as 2nd coaches observed that the approach with the five questions enabled a form of situational leadership, where each employee got sparring based on where they were in the problem solving. Our impression was that the coach used the questions well, and brought out elements that might not have surfaced in a more traditional status dialogue. Among other things, we observed the coach deflecting several attempts to open the conversations with a list of completed activities, instead gently encouraging the participant to start the meeting with a description of the interim goal they were focusing on, followed by detailed reflection on the last step they had been through. Another advantage of the conversation setup was the possibility of quick feedback from the 2nd coach right after the Coaching Kata session, while it was fresh in mind.

“Being a good coach is essential to being a good manager and leader.
Coaching is no longer a speciality; you cannot be a good manager without being a good coach.”
- Trillion Dollar Coach: The Leadership Playbook of Silicon Valley's Bill Campbell

An efficient arena for coaching leadership

It can take time to shift the mentality from status reporting to a learning loop. The experience from our small experiment was that the conversation tool was a good fit when a more coaching-oriented approach to leadership is needed. Two classic pitfalls were avoided: “set & forget”, marked by working alone, and sporadic, reactive status reporting. Both the leader and the employees in the experiment felt that Coaching Kata served the purpose of dialogue and buy-in along the way well, while the conversations took little time and had a positive effect on the improvement work itself.

Regardless of problem-solving method, perhaps many recognize the situation where dialogue with a busy leader becomes the bottleneck for progress on the problem at hand? Or leaders who find that, with the covid pandemic and the rise of the hybrid office, it has become harder to find natural arenas for interacting with the people they lead? If you recognize such a challenge, we believe Coaching Kata can be an interesting alternative to try out.

Written by:
Ragni Ryvold Arnesen
Kristoffer Berg

References

Toyota Kata: Managing People for Improvement, Adaptiveness and Superior Results, Mike Rother.
The Toyota Kata Practice Guide: Practicing Scientific Thinking Skills for Superior Results in 20 Minutes a Day, Mike Rother.
Understanding A3 Thinking: A Critical Component of Toyota's PDCA Management System, Durward K. Sobek II and Art Smalley.
Managing to Learn: Using the A3 Management Process to Solve Problems, Gain Agreement, Mentor and Lead, John Shook and Jim Womack.

Coachende ledelse med fem små spørsmål was originally published in SpareBank 1 Utvikling on Medium.

By Kristoffer Berg

Speed up your Multi Module Maven Builds with turbo-maven-plugin

Fast feedback makes us happy. So if you are only looking for how to speed up your multi module build as fast as possible, go straight to turbo-maven-plugin. If you want to know more about why, and also how turbo-maven-plugin works, please keep on reading.

The reason we are happy when we get fast feedback is that it triggers the production of dopamine in our bodies. Dopamine is a happiness drug we can give ourselves for free. This is a smart thing to do as often as possible: it makes us happy. In addition to making us happy, fast feedback makes us deliver value faster. This not only feels good, it is also good for our team and the place we work.

When working with Kotlin and Java, we often need feedback from building our apps. Where I work, we code and run our tests in IntelliJ. As soon as we want to move a change to production, we normally build the app locally with Maven before pushing the code, to check that the tests run as they should and that everything is working fine.

A multi module Maven repository with 250 applications

We have our 250+ apps in a monorepo. The monorepo is one multi module Maven repository, where everything is on head all the time. When we build our app, we build both the app itself and all the modules that it depends on. This is why it is important for us to build smart. A typical app builds and runs all tests for itself and its dependencies in 2–3 minutes. This is a long time to wait, so we started looking for a way to get faster feedback.

A multi module Maven build with a change in one of the modules.

We quite often have a code change in a module the app depends on. It is possible to ask Maven to build only the modules that we want, using the --projects <list of projects to build> argument and building from the root pom. This is faster than building all required dependencies with --also-make. Using a Maven command with --projects requires both mental capacity and finger acrobatics on the command line, so we seldom do this.
We rather build with variants of cd-ing in and out of modules and mvn clean install, hoping that we have built everything that needs to be built. Or we build everything, to be sure.

Building only what needs to be built

A multi module Maven build with a change in one of the modules. We only need to build this module and the modules depending on it.

We only want to build what needs to be built, without having to hand code a special Maven command for every change we make. Both Bazel and Gradle know how to do this. There are several strategies here, and all we have seen are based on analysing which files have changed, calculating which modules the changes reside in, and then creating a Maven command that builds these modules. We have created a Maven plugin that helps us with just that. It is called turbo-maven-plugin.

How does turbo-maven-plugin work?

turbo-maven-plugin is based on the same strategy: it analyses which modules have changes in their source code, and then builds only these modules and the modules depending on them. For every module, the plugin looks for a file containing one row per source code file in the module. A row contains the name of the source code file and a checksum of the contents of the file. If it doesn't find such a file for a module, it creates it and puts it in the module's directory in the local m2 repo. It does this for all modules that are required for building the app, and also for the app itself. If we build again without changing anything, nothing will be built, since all the checksums are the same. If we make a change, the plugin will first get the complete list of modules that need to be built from the Maven Reactor. This is the app itself and all its dependencies. For each module, it compares the checksum of each source code file in the module with the checksum in the file in the m2 repo.
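The checksum-manifest idea described above can be sketched roughly like this. Note that this is a simplified illustration, not the plugin's actual code; the class and method names are invented, and the real plugin also handles storing the manifest in the m2 repo:

```java
import java.nio.file.*;
import java.security.MessageDigest;
import java.util.*;
import java.util.stream.Collectors;

// Sketch: a manifest maps each source file's relative path to a checksum
// of its contents. A module is unchanged if its freshly computed manifest
// equals the manifest stored from the previous build.
public class ChecksumManifest {

    // Hex-encoded SHA-256 of a file's contents.
    static String checksum(Path file) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(Files.readAllBytes(file));
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    // Build the manifest for all regular files under a module directory.
    static SortedMap<String, String> manifestFor(Path moduleDir) throws Exception {
        SortedMap<String, String> manifest = new TreeMap<>();
        List<Path> files;
        try (var stream = Files.walk(moduleDir)) {
            files = stream.filter(Files::isRegularFile).collect(Collectors.toList());
        }
        for (Path f : files) {
            manifest.put(moduleDir.relativize(f).toString(), checksum(f));
        }
        return manifest;
    }

    // A module needs rebuilding if its manifest differs from the stored one.
    static boolean isModuleChanged(SortedMap<String, String> current,
                                   SortedMap<String, String> stored) {
        return !current.equals(stored);
    }
}
```

Because the manifest keys are relative paths and the values are content checksums, renaming, editing, adding or deleting any source file all make the manifests differ, which is exactly the "has this module changed?" signal the plugin needs.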
If the checksums are the same, the plugin removes the module from the list. For the modules that have changes, we make sure we also add the modules that are dependent on them. In pseudo code, it looks like this:

// Find changed modules:
modulesToBuild = modulesFromMaven.filter(isModuleChanged())

// Find the modules dependent on the changed modules:
modulesToBuild.forEach(module -> downStreamProjects.add(module.getDownstreamProjects()))

// Return the distinct set of modules to build:
return modulesToBuild.addAll(downStreamProjects).removeDuplicates()

How do we use the turbo-maven-plugin?

The plugin is defined in our root pom, and is disabled by default, so that Maven behaves normally for everyone when using regular Maven commands:

<plugin>
  <groupId>no.sparebank1</groupId>
  <artifactId>turbo-maven-plugin</artifactId>
  <version>${turbo-maven-plugin.version}</version>
  <extensions>true</extensions>
  <configuration>
    <enabled>false</enabled>
    <ignoreChangesInFiles>swagger.json</ignoreChangesInFiles>
    <alwaysBuildModules>distribution</alwaysBuildModules>
  </configuration>
</plugin>

We have a tool, really just a structured collection of scripts, called bob. When we want to build an app, we run bob mvn build from the app root. This command actually does this:

mvn -T4 -f <path-to-the-root-pom> --projects <path-to-the-app-pom> --also-make -Dturbo.enabled=true clean install

But that is something our developers don't have to think about. With this, we have cut the average app build time in half, from 2–3 minutes to 1–2 minutes. We have also reduced the cognitive load on our developers. They don't have to think about what modules need to be built anymore.
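The pseudo code boils down to computing the set of changed modules plus all their transitive downstream dependents. A minimal, self-contained sketch of that calculation follows; the class name and the graph representation are invented for illustration, and the real plugin gets this information from the Maven Reactor instead:

```java
import java.util.*;

// Given a map from module -> modules that depend directly on it,
// collect the changed modules plus all their transitive dependents.
// This is a plain breadth-first traversal of the "downstream" graph.
public class ModulesToBuild {

    static Set<String> modulesToBuild(Set<String> changed,
                                      Map<String, List<String>> downstream) {
        Set<String> result = new LinkedHashSet<>(changed);
        Deque<String> queue = new ArrayDeque<>(changed);
        while (!queue.isEmpty()) {
            String module = queue.poll();
            for (String dependent : downstream.getOrDefault(module, List.of())) {
                if (result.add(dependent)) { // add() is false if already present
                    queue.add(dependent);
                }
            }
        }
        return result;
    }
}
```

For example, if app depends on common, and common depends on util, then a change in util alone yields the set {util, common, app}, while a change in app alone yields just {app}.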
They just run bob mvn build, and Maven and turbo-maven-plugin take care of the rest.

If you want to try the plugin, it is on Maven Central, and you can find both the source code and the pom configuration on the turbo-maven-plugin's home page.

Speed up your Multi Module Maven Builds with turbo-maven-plugin was originally published in SpareBank 1 Utvikling on Medium.

By Vidar Moe

SpareBank 1 Utvikling on Instagram: @sparebank1utvikling