Iterations toward automation

GitLab is an open-core single application serving development and operations teams in software development. I served as a senior product designer on the Secure and Protect UX team, where I focused on designing experiences that enable contributors and teams to commit their most secure work and to defend what they have in production. I led design on the auto-remediation feature, a web security feature with the long-term objective of fixing security vulnerabilities in production automatically, without human involvement. The users are: 1) developers, who are responsible for committing secure code; and 2) security professionals, who are accountable for an organization's application security.

Why is this valuable to customers? Thinking big: auto-remediation removes the guesswork of identifying and fixing known vulnerabilities by automating the workflow. The key innovation: fixes are applied to production code while the customer sleeps. This gives time back to customers' teams and ensures a more efficient application security workflow. It’s not just the time saved fixing vulnerabilities, it’s also the time saved investigating the detected vulnerabilities. When it comes to web security, no application will ever be 100% secure. That’s why our team’s core focus is: integrating automation into every step of the user’s workflow, facilitating improved decision-making, and helping users understand their risk.

Getting started

My role and objective: 1) identify customer value, 2) ensure consistent user experience across multiple scanners/languages, and 3) define and prioritize iterative steps toward automating the user’s workflow.

I led a cross-functional discovery with a backend and a front-end engineer. My first step: audit the existing user experience baseline. The findings included: 1) a burdensome, time-consuming manual remediation process for the user; 2) suggested solutions existed but were not communicated to the user in the security dashboard; and 3) they were also missing from the merge request when new vulnerabilities were committed.

Then I started identifying what was on the horizon: supported languages, scanner vulnerability findings, and related solution/fixes database capabilities. Today, customers can use the feature for dependencies detected in a project, and in the future for application containers (such as Docker). This foundational technical understanding outlined the dependencies we’d need to consider as we evolved, such as configuration requirements, and highlighted areas of divergence and opportunities for consistency in the experience. With an understanding of our long-term customer value goals, current UX, and engineering capabilities, I dove into ideation, focused on the following questions:

  • What are the user permissions?
  • Who will be able to do what?
  • How is the feature activated?
  • When are suggested solutions turned into merge requests?
  • How will a user be notified this occurred?
  • What user will be responsible for the created merge request?
  • What criteria, such as vulnerability severity, would trigger the auto MR?
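One way to reason about the trigger-criteria question is a severity threshold. The sketch below is purely illustrative: the function, names, and severity scale are my assumptions for discussion, not GitLab's actual implementation.

```python
# Hypothetical sketch of severity-based trigger criteria for auto-created
# merge requests. Names and thresholds are illustrative, not GitLab's code.
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def should_auto_create_mr(severity, has_suggested_fix, opted_in,
                          min_severity="high"):
    """Create an MR only when the project opted in, a suggested fix
    exists, and the finding meets the configured severity threshold."""
    if not (opted_in and has_suggested_fix):
        return False
    return SEVERITY_RANK[severity] >= SEVERITY_RANK[min_severity]
```

A sketch like this makes the design questions concrete: the `min_severity` default, for example, is exactly the kind of criterion the team had to decide on.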

Based on the long-term goal to automate fixes, I broke down the objective into focused iterations:

  • Step 1: automatic creation of merge request (opted-in)
  • Step 2: suggest solutions available (opted-out)
  • Step 3: automatic merging of auto-created merge request (opted-in)

Focusing on step one, the minimal viable change goals were:

  • Works out of the box: opted-in by default
  • System communicates to user that merge request solutions exist
  • UI is explicit about which vulnerabilities have suggested solutions (whether opted in or out)
  • Capabilities and supported languages are clarified for the user
  • User needs to be aware the merge request was created by the system
  • Findability and a log of created merge requests

Ideation kickoff

My ideation moved fast to get early feedback and shared learning. The first iteration review focused on the settings, configuration, and creation of the auto-created merge request. This surfaced the following findings, learnings, decision needs, and questions:

  • Requiring an assignee or author for the merge request blocks the feature from working out of the box. Challenge: there was no existing alternative user or system to leverage, so for expediency, selecting an assignee could move the feature to production faster. Questions to follow: are customers comfortable with having a team member be the assignee? Would this deter them from adopting the feature?
  • Information architecture: we placed the opt-in/out setting for the feature in the configuration section rather than the settings area. The hypothesis was based on: 1) past studies showed users would navigate to the configuration section for secure settings, and 2) configuring the scanners (dependency and container scanning) was a prerequisite for the feature to be active. Questions to follow: do users understand the configuration requirements? Where would users go to learn more about the feature? Do users navigate to the section to address related settings?
  • Displaying a separate section for the auto-created merge requests. The hypothesized upsides of this direction: increased discoverability, easier management (a focused area), less reliance on notifying users about new merge requests, and working out of the box with no assignee. The downsides: additional front-end UI build (longer implementation), and it would sit outside the normal merge request workflow; better suited to the longer-term goal than the initial iteration (an insight for later).
  • The structure of the auto-created merge request was heading in a good direction, with data from the findings-solutions database displayed in the merge request description. However, the creator of the merge request was not yet clear, nor was there a record of created merge requests.

Learn, evolve, and repeat

Following my learnings from the kickoff ideation, the second round focused on the open user questions, promoting user awareness of suggested solutions, and the related workflows of problem-solution vulnerabilities. The findings, learnings, and questions:

  • A dedicated section filtering by “Auto-fix” may be a good longer-term solution, but it could be minimized to a label included on the merge request.
  • Author of the created merge request: instead of forcing the user to select one, the author would be the user who opted in. Downsides: may deter users from opting in, the feature does not work out of the box, and it is a temporary solution. Upsides: 1) simplified setup since it designated the user, 2) this user would then be the author, and 3) a record log exists via the author profile.
  • User awareness: display on the security dashboard when auto-created merge requests exist. The proposed solution, a banner, would keep notification noise down as a first iteration. Questions: does the user understand the visual communication for suggested solutions? How do users perceive the banner, do they understand the text, and what action, if any, do they take?
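To make the label idea concrete: a tiny sketch of label-based findability in a standard merge request list. The “Auto-fix” label comes from the discovery above; the data shape is my own illustration, not GitLab's internals.

```python
def find_auto_fix_mrs(merge_requests):
    """Filter a standard MR list down to system-suggested fixes via the
    "Auto-fix" label, avoiding a dedicated UI section in the first
    iteration. Each MR is modeled as a dict with a "labels" list."""
    return [mr for mr in merge_requests if "Auto-fix" in mr.get("labels", [])]
```

The design upside is that auto-created merge requests stay in the workflow users already know, while remaining one filter away.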

Weighing the tradeoffs, we committed to our decisions and concluded with a discovery outcome report outlining a path forward: implementation to production, user research, and a follow-up discovery. These steps optimized for delivery by prioritizing the backend work while, in tandem, prototyping our workflow and getting it in front of customers for actionable feedback. For user testing, we wanted to answer the following:

  • What is the customer’s perception about auto-remediation?
  • What is the customer’s expectation with the feature?
  • Where does the user go to turn on/off the feature?
  • Where does the user go to learn more about the features?
  • Does the user understand the settings section, specifically that the user enabling the feature is the author of the merge request?
  • Where does the user expect to see auto-created merge requests?
  • Where does the user go to find the auto-created merge request?
  • Is the notification banner seen on dashboard UI helpful to the user?
  • How does the user feel about auto-creation of MRs and then auto-merging of those merge requests?
Notifying the user of an auto-created merge request: ideal for learnability and optimized for shipping. Next steps: improve communication in the UI
Workflow remains the same as other merge requests, but with system-created labels for search
Bot profile serves as author and audit log of system actions
Ideation II design and update review
Ideation III discovery conclusion

Observe, iterate, and deliver customer value

I organized a research study and prototyped the workflow. Collaborating with our user research team, I prioritized a test script, recruited customers, and set up interviews. The key positive insights: 1) users understood the communication about fixes, 2) learnability was validated when landing on the merge request list (per the labeling), 3) the setup process was clear in terms of configuration requirements, and 4) 4 of 5 participants navigated to the Security & Compliance section when tasked with finding the option to enable the auto-fix feature.

The insight that required action: customers preferred that auto-created merge requests not be attributed to a particular project user. Example: 4 of 5 participants said they would rather, and expected, the author to be the system or a "bot". 2 participants said they might add a different project member prior to enabling the feature. We already knew the major downside: the feature would not work out of the box (and would require rework). Worse, if users added another user to a project, it would create the additional problem of an added seat; since the feature is for Ultimate tier members, that would add cost to the account. We conducted a design and engineering discovery to look at the following:

  • How could we create, leverage, or add a ghost, bot, or system user?
  • How would it be created?
  • How would this work for SaaS vs on-premise?
  • What are the differences between project, group, and instance users?
  • How, what, when, and where is the related activity shown?
  • Identify permissions: how do customers control the entity?
  • How do we ensure the extra member doesn't add a seat?
  • How do users find and discover the bot?
  • What are other use cases to leverage the bot?
  • What are the security concerns to mitigate?

Partnering with engineering on a solution validation discovery, the result was to introduce a GitLab-Security-Bot user in the initial implementation. This solution was ideal given it: 1) made the feature out-of-the-box ready, 2) was low-cost, 3) worked for on-premise and SaaS customers, 4) displayed and kept records of system actions, and 5) was approved by the security team.
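As an illustration of the behavior the bot unlocks, here is a minimal sketch of system-authored merge requests. The data shapes, function names, and constant are my own illustration of the concept, not GitLab's internals.

```python
from dataclasses import dataclass, field

# Illustrative constant; mirrors the bot user introduced in the discovery.
BOT_USERNAME = "GitLab-Security-Bot"

@dataclass
class MergeRequest:
    title: str
    author: str
    labels: list = field(default_factory=list)

def create_auto_fix_mr(vulnerability_title):
    """System-authored MR: no human assignee is required, so the feature
    works out of the box, and the bot author doubles as the audit trail."""
    return MergeRequest(
        title="Resolve vulnerability: " + vulnerability_title,
        author=BOT_USERNAME,
        labels=["Auto-fix"],
    )
```

Because the author is a system user rather than a project member, the design sidesteps both the seat-cost problem and the "who is responsible for this MR?" confusion observed in testing.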

Introducing the GitLab-Security-Bot overview
Based on customer feedback: discovery kickoff for GitLab-Security-Bot
Merge request experience: improving suggested solutions
Next steps: improving the merge request experience, with the system suggesting and providing solutions for newly committed vulnerabilities

Continuous iterations

As with all iterative software, the story is never done; we keep iterating! Today, the team is releasing the above workflows to automatically create merge requests. The next areas of focus for this feature are:

  • User awareness of solutions when committed in the merge request (issue and related epic: a team effort to iterate on the merge request): this is important because it can start promoting fixes before they are even merged into the master branch, empowering the users responsible for security: developers. The first small iterations will show when solutions are available; later iterations will help automate this workflow by committing or creating merge requests with the fixes.
  • Design and engineering discovery around auto-merging the automatically created merge request. This discovery will look at how auto-merging could be achieved and its related configuration, incorporate feedback from the existing UX, and adapt the workflow for divergent supporting technologies. Example: for container scanning, multiple fixes are ideal in one merge request (several commits), while single fixes are ideal for dependency scanning. The UX question: how and when does the workflow auto-merge these fixes, and how is that communicated to the customer?
  • Customer feedback! How are customers using (or not using) the feature now that it’s in the wild? What do they like about it? What do they not? What are our observations of how it’s being used? Feature implementation, discovery, and research epic
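The scanner divergence described above could be sketched roughly like this. The scanner names follow the ones in this case study; the grouping logic itself is my assumption about how fixes might be batched, not the team's actual implementation.

```python
def group_fixes_into_mrs(fixes):
    """Batch fixes into merge requests per scanner type: container
    scanning fixes go into one MR (several commits), while each
    dependency scanning fix gets its own MR. Fixes are modeled as
    dicts with a "scanner" key."""
    mrs = []
    container_batch = []
    for fix in fixes:
        if fix["scanner"] == "container_scanning":
            container_batch.append(fix)
        else:  # dependency_scanning: one MR per fix
            mrs.append([fix])
    if container_batch:
        mrs.append(container_batch)
    return mrs
```

This is exactly where the UX question bites: the user-facing communication has to explain why one scanner yields a single bundled MR while another yields several small ones.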

The source of truth lives in production. It started with an idea, then trial-and-error discovery, then prototype, then testing, then engineering, and now with users. I’m inspired by evolving the feature with the team and the continuous iterations toward automation.

Thanks for visiting. Questions or thoughts? Drop me a note.

See home page or view next case study: security dashboard.