Nate Ballantyne
5 min read · Feb 25, 2021


3 Amigos

You’ve probably heard of a 3 Amigos session; you may even have been involved in one. The aim of the session is usually to answer the following questions…

  1. How do we know if we’ve built the right product?
  2. How do we know if we’ve built the product right?
  3. How will we support it in production?
  4. How will we know if there is a problem in production?
  5. How will we test it?

The format of the 3 Amigos session isn’t restricted to 3 people; it just means bringing together the programming, test and product perspectives. Typically it will involve a product manager/business analyst, a developer and a tester. One of the biggest things that can help us answer the questions above is Behaviour Driven Development, aka BDD…

BDD

BDD is about having conversations and capturing the examples from those conversations in Gherkin syntax, with a focus on behaviour. Let’s start with what a user story looks like:

As a [persona]
I want to [perform an action]
So that [desired result]

So in the example of a login user story:

As a user
I want to be able to login using my username and password
So that I can access my account

You may then translate this into requirements by having open conversations. A good place to start is the happy path:

GIVEN I am a user with an account
WHEN I enter the correct credentials
THEN I am logged in

Developing scenarios in this manner is useful because they flow quite naturally from one to the next. For example, the next logical step could be:

GIVEN I am a user with an account
WHEN I enter invalid credentials
THEN I see an error message

Hopefully, your UI/UX team will have considered this case and already created this error state. If not, you have the opportunity to agree on what happens. You may also want to cover what happens when the error message is interacted with. Let’s assume it’s a popup with 2 CTAs:

GIVEN I see the error popup
WHEN I tap the reset password link
THEN I am taken to the reset password page

So, you have considered what happens when you tap the reset password link. What happens when you tap the other CTA?

GIVEN I see the error popup
WHEN I tap the dismiss button
THEN the error message is dismissed
AND I see the login page

As you can see, we have added an additional AND statement to indicate we are checking for something else. The important thing to consider is that AND statements can be misused to validate too many things. If you have too many AND statements, it can be hard to identify which part of the scenario failed, and it goes against the cardinal rule of BDD: one scenario, one behaviour. Likewise, it doesn’t make sense to create another scenario just to check that we’re still on the login screen when the error is dismissed. That said, if tapping the CTA navigated us to a different page/screen, we would need to create a separate scenario.
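To illustrate, here is a sketch of the kind of over-loaded scenario to avoid (the extra field checks are invented purely for illustration); each of the trailing AND statements would be better split out or dropped:

GIVEN I see the error popup
WHEN I tap the dismiss button
THEN the error message is dismissed
AND I see the login page
AND the username field is cleared
AND the password field is cleared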

Personally, I like to keep the AND statements to a maximum of 2 in order to make sure we aren’t validating too many things in a single scenario. Also, your scenario setup may require an additional step, e.g.:

GIVEN I am on the login screen
AND I have no network connection
WHEN I enter my credentials
THEN I see a network error

If you find you have more than 2 AND statements, you may want to consider breaking up that scenario, particularly if the AND statements follow the THEN statement. Based on the scenarios we have created during the session, we can identify the candidates for test automation.
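As a rough sketch of how those candidates might look once the agreed steps are gathered into a feature file and marked up for automation (the feature name and the @automate tag are purely illustrative; note that in a real feature file the keywords are conventionally written Given/When/Then):

Feature: Login

  @automate
  Scenario: Successful login
    Given I am a user with an account
    When I enter the correct credentials
    Then I am logged in

  @automate
  Scenario: Invalid credentials
    Given I am a user with an account
    When I enter invalid credentials
    Then I see an error message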

During these conversations, we can also ask how we will support our login feature in production: for example, load and performance testing, or whether we want to run an A/B test to help us improve the success of the feature. This is particularly useful if we have considered a few variants but don’t have enough data to commit to one. Also, how will we know there is a problem in production? There are a few things we could consider:

  • Error reporting on the client
  • Monitoring and logging on the backend
  • Tracking how many people have logged in
  • Alerting when errors reach a certain threshold
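Even these operational expectations can be captured as examples during the session. As a sketch (the threshold and the destination below are assumptions to be agreed with the team, not real values):

GIVEN login errors exceed 5% of login attempts over a 10 minute window
WHEN the monitoring check runs
THEN an alert is sent to the team’s on-call channel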

By taking these into consideration during the 3 Amigos session, we’re already thinking about how to shift our testing right. If we maintain this for all our features, it’s easier to deliver high quality work.

You may also find that this collaborative process uncovers additional information about the feature, giving you the opportunity to slice the story into smaller increments. For example:

What scenarios are in scope for acceptance criteria?

  • You can work with a subset of the input data (see the Scenario Outline sketch after these lists).
  • You can defer conditional steps to other stories.
  • You can defer data validation.
  • You can defer error handling.

Which could then result in stories such as:

  • A story per input screen.
  • A story per enabled element of an input screen.
  • A simple (not pretty) UI.
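As a sketch of working with a subset of the input data, a Scenario Outline lets the team agree a handful of representative examples first and add more rows later (the values below are illustrative):

Scenario Outline: Login attempts
  Given I am a user with an account
  When I enter a <username> username and a <password> password
  Then I see <result>

  Examples:
    | username | password | result           |
    | valid    | valid    | my account       |
    | valid    | invalid  | an error message |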

As the team matures and develops a greater understanding of the product, the need to slice stories this late in the refinement process will decrease. In the early stages of user story creation, an acronym that helps identify what makes a good user story is INVEST.

Independent — Self sufficient, not dependent on something else (except for a technical dependency)

Negotiable — Collaboration is encouraged to identify the what and how

Valuable — Adds some value either to the user or to the team. That said, a story on its own may not provide value until shipped with several other stories.

Estimable — The story has enough detail to estimate how big it is, relative to other stories in the backlog

Small — This scales depending on the team size and sprint length, e.g. for a team of 4 and a sprint length of 2 weeks, a story should be small enough to complete comfortably within the sprint.

Testable — There is a means to validate that what you have done, works

When?

If the session is run too early, it’s possible that the team will forget important details, and new information could be uncovered that affects that piece of work. A good rule of thumb is to run the session 1–2 weeks before picking up the work.

Outcomes?

So, how will doing this help you?

  • There is a common understanding of what needs to be done
  • Ambiguities around behaviour are identified early on
  • Feedback loops shorten drastically
  • Blockers and dependencies on other teams/software are identified early
  • Edge cases are identified early on
  • Test cases are written before development which enables more time for exploratory testing
  • Bugs found during development/testing are reduced
  • Clients/stakeholders have more confidence in what is being delivered

Finally…

Don’t rely on assumptions or biases when defining the acceptance tests, as you then run the risk of building the wrong thing. The value comes from the shared understanding and the conversations you have when creating the Gherkin examples.

What are your thoughts on 3 Amigos and behaviour driven development?
