Monday, February 10, 2020

Finding Agreement in Conflict

Tuesday, Feb 4th at 2:10 PM

 

On the left side of this picture is the layers/levels model presented for categorizing the kinds of conflict.  To summarize them even further:

  • Issue = “what is it and how do we feel about it”
  • Organizational = “what should/are we doing about it”
  • Relationship = “how do we feel about what we’re doing”
  • Mission = “how do we feel about each other / our group”

The point of doing this is to be able to find common ground at a layer below the current conflict, such that both/all parties can then work on the conflict together from a space in which they are secure in their togetherness.

 

On the right are the real-time session notes naming the major thoughts shared.  Quoted items are book titles, with the rest probably being useful only to those who were there.

 

FYI, the activism story referenced can be followed here.

Saving the World

Wednesday, Feb 5th at 11 AM



Sunday, February 9, 2020

2/4 @ 1pm Mob Experiment






 

Sorry it took so long to get these notes posted.  I didn't take notes during the session, so I will first tell the story of my experience mobbing (which is how the session started out).  Then, I will add notes of what I remember from the rest of the conversation.  For those who did take notes or remember other things we discussed, I would be grateful if you added them in the comments!

 

About 6 months ago, the technical practices agile coach where I work, Quinn, came to me: "Christine, I want to do an experiment and need a team to work with."

 

I said "Sure, I'll have to get the team on board but, I'm sure we'll do it."

 

"Great! I want to do an experiment with mobbing."

 

My response was something like "Oh…" My lack of enthusiasm was because we had done a few hours of training on mobbing in the past.  When we tried to mob on our own, we failed miserably.  But we agreed to do the experiment with Quinn despite our previous failure.

 

For those who don't know what mobbing is, to quote Woody Zuill, it is having "all the brilliant people working on the same thing, at the same time, in the same space, and on the same computer."  While mobbing we use the driver/navigator practice.  The navigator tells the driver what to do; they are the one who decides what needs to be done and what direction the mob should be going.  The driver is the person at the keyboard; they are essentially a smart input device, implementing what the navigator says.  While mobbing, you regularly rotate through the roles of driver, navigator, and mob member.

 

The experiment was for my team to commit to mobbing for 2 hours a day for a sprint (2 weeks).  Quinn committed to facilitating the mobbing sessions for those 2 hours a day. We decided that the morning was the best time for the mobbing sessions.

 

At the time of the experiment, we had 3 Scrum teams working from the same backlog.  My Scrum team, which participated in the mobbing experiment, consisted of 3 developers, 1 SDET, and 1 manual QA.  Our manual QA did not feel comfortable joining the mobbing rotation but would sit with us and provide valuable insight from the testing perspective.

 

When our 2 weeks were up, we had nearly doubled the amount of work we finished compared to each of the previous 3 sprints.  I have been told that it is unusual for teams to increase the work completed in their first sprint of mobbing.  There are a few things that I believe contributed to the increase.

 

First, three sprints prior, we had started work on code we had just inherited from another team in the company.  So it took a little while to get familiar enough with the code to be able to make updates without first having to decipher what the code in that area was doing.

 

Second, mobbing helped the team focus on the work that needed to be done.  Since we were all working on the same thing at the same time, we all knew where we stood with the sprint work.  This made the work go more smoothly and feel less disjointed.  Also, since our manual QA person was sitting with us, she got to see what we had already created automated tests around and knew where to focus her testing.

 

Third, because of mobbing, progress was still made even when someone had to head off to a meeting.  When working more individually, there were times when work on the next story could not be started because the person with the needed knowledge was booked in meetings or on PTO.  When working together, the rest of the mob can often find a solution even without the person who knows that code best.

 

Since the experiment went so well, we decided to continue mobbing for 2 hours a day.  After 1 or 2 more sprints, we started occasionally mobbing in the afternoon as well.

 

About that time, one of the other Scrum teams decided to try the experiment as well.  Their team had 2 developers and a manual QA.  Their QA was comfortable being the driver but did not want to be the navigator.  After 2 days, Quinn observed that they were doing pairing more than mobbing, so we decided to merge the two teams.  Since we had already committed to 2 Scrum teams' worth of sprint work, there was some concern from our Scrum Masters, Product Owner, and manager about getting the work done, but they agreed.  Happily, it worked out pretty well.  Technically, we had 3 stories carry over, but I only count 1 of them because the other 2 were completed shortly after planning was done for the next sprint.  We liked the results of the combined team so well that we have continued as a single team and are still mobbing today.


We are now pretty consistent in mobbing both mornings and afternoons, so there is rarely any work that anyone does on their own.

 

 

Some things we talked about during the discussion:

  • The drawing in the picture is a sample of a layout of a mob team. 
    • For mobbing, the computer needs to be connected to a TV (or even two).  It's hard enough for two people to crowd around a monitor.  Imagine having 3 or more people!
    • The driver sits in front of the keyboard (on the right in my drawing).
    • The mob sits in the middle.
    • The navigator stands away from the driver.
      • The reason to have the driver and navigator separated is so that the navigator speaks loud enough so that the whole mob hears what is being said.  It also helps the mob stay engaged.
    • My team has the navigator stand in front of a whiteboard where we write our intent and possible steps that we need to accomplish.
      • Sometimes, as we get the work done, we realize we don't need to do all the steps we thought we did or that there are other things that need to be done.  So, don't spend a lot of time talking about what needs to be done…get started doing!
      • Llewellyn Falco (a technical coach/consultant) recommended not using a whiteboard but creating a text document instead.  Then you can save the documents, review them, and identify patterns in the work the team does.  From those patterns, you can find ways to do your repetitive work better/faster/automated.
  • Rotation
    • My team started off doing 5-minute rotations but is now doing 7-minute rotations.  After 4 rotations we take a break.  Our break is 7 minutes as well, just to keep the times consistent.
      • Llewellyn's comments
        • New teams do 4 minute rotations.  This helps everyone stay engaged (a person gets back to being navigator faster).
        • 7 minutes is the absolute max amount of time you want a rotation to last.
        • If the team is getting stuck, shorten the rotation time.  Since a different person becomes a navigator more often and everyone thinks differently you are more likely to get unstuck.
  • Getting more junior DEVs comfortable putting their ideas "out there"
    • Mobbing helps ensure that everyone's input gets applied
    • Mobbing lets the junior DEVs see that the senior DEVs can make mistakes, go down the wrong path, have to restart, etc…
  • Facilitating mobbing
    • When starting out mobbing, it is highly recommended to have someone who is not participating in the mob facilitate it.
    • Even when you've been mobbing for a while, it is extremely difficult to facilitate mobbing from within the mob.  When you are in the mob, you are trying to accomplish the task the mob is working on, so you don't always keep the mob following the principles of mobbing.
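The rotation mechanics described above can be sketched as a tiny schedule generator.  This is only an illustration: the team names are made up, and teams vary in rotation order (here the navigator is simply the next person in line to drive).

```python
def rotation_schedule(team, rotations_per_break=4, total_rotations=8):
    """Return (rotation_number, driver, navigator, break_after) tuples.

    One common order: the navigator is the next person in line to
    drive, so everyone knows their turn is coming.
    """
    schedule = []
    for i in range(total_rotations):
        driver = team[i % len(team)]
        navigator = team[(i + 1) % len(team)]
        # After every `rotations_per_break` rotations, take a break.
        break_after = (i + 1) % rotations_per_break == 0
        schedule.append((i + 1, driver, navigator, break_after))
    return schedule

# Hypothetical five-person mob using the 7-minute rotations described above.
for n, driver, navigator, break_after in rotation_schedule(
        ["Ann", "Ben", "Cory", "Dee", "Eli"]):
    line = f"Rotation {n} (7 min): driver={driver}, navigator={navigator}"
    print(line + ("  -> 7-minute break" if break_after else ""))
```

A timer app or kitchen timer does the same job in practice; the point is only that the rotation is mechanical and predictable, so nobody has to think about whose turn it is.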


Friday, February 7, 2020

SOFTWARE TESTING FOR DUMMIES: Exposed and Ridiculed - requirements- vs subject-oriented testing

Developing each test to verify a specific requirement (or a small number of requirements) is the standard accepted (and often argued-for) approach to automated functional testing.  Were we to apply this approach in any other industry, we would likely drive our company into bankruptcy as we are laughed out of our careers.  And yet we insist on following these practices in the very industry which should be light-years ahead of any other.  During our session, we viewed a simplistic example of applying this widely accepted approach to the automobile industry for purposes of crash testing.

 

 

Craig A. Stockton | QA Automation Architect - Continuous Testing
M 253.241.1598
| cstockto@TEKsystems.com
1601 5th Avenue, Suite #800, Seattle, WA 98101

TEKsystems. Experience the power of real partnership.

 

 



This electronic mail (including any attachments) may contain information that is privileged, confidential, and/or otherwise protected from disclosure to anyone other than its intended recipient(s). Any dissemination or use of this electronic mail or its contents (including any attachments) by persons other than the intended recipient(s) is strictly prohibited. If you have received this message in error, please notify us immediately by reply e-mail so that we may correct our internal records. Please then delete the original message (including any attachments) in its entirety. Thank you

Beloved Impediments of Continuous Delivery

If we haven’t yet achieved Continuous Delivery (defined here as automated build, then deployment of that build through environments to a ‘Staged’ environment ready for delivery to Production), we should identify any and all impediments.  At times, those impediments may be the very things we hold to most firmly.

There are aspects of software development we hold to firmly out of habit, comfort, or affinity.  We may hold to some because they were advocated by a past mentor, are part of an established ceremony, or are even a well-documented principle or practice (implemented improperly or out of context).  Regardless, we should always be willing to reconsider them against well-established and supported Principles and Practices (as they were first defined by their authors).

Our session was intended to identify such “beloved impediments”.  We ended up ranging over impediments of any kind, but still came away with a good list, along with Principles and Practices which, when implemented properly, address them and move a team closer to successful Continuous Delivery.

 


 

  1. Lack of User Test Coverage:  The most beneficial practice for addressing this issue is Test-driven Development.  Most often the impediment to a team implementing TDD successfully is two-fold: pressure to maintain velocity while the team develops proficiency in TDD, and development engineers’ initial belief that taking more time to write more test code than system code is wasted time/effort.  There is enough material out there to address those concerns, such as “The Clean Coder” by Bob Martin (“Uncle Bob”).
  2. Manual Deployments:  The stellar book on continuous delivery, “Continuous Delivery” by Jez Humble and David Farley, makes it very clear that it is necessary to make the effort to automate absolutely everything (while accepting this may take time, and may never be fully achieved).  There are wise principles and practices on which Continuous Delivery is based.  Begin by following those principles and practices firmly.  Automate deployment steps in simple console scripts if that’s all that is available; don’t wait for someone to first set up a Deployment Server.
  3. Manual Configurations:  Again, “Continuous Delivery” includes the principles and practices for addressing manual configuration of systems and environments.
  4. Lack of Automated Functional Test Coverage:  Including only unit tests during development introduces a high risk that after deployment to the first environment (often “Dev”), the automated functional tests will fail.  This can be addressed by practicing Comprehensive Test-driven Development, implementing both unit tests and functional tests during development and ensuring all are up to date before merging changes which will be automatically deployed on build success.
  5. Poorly Pre-tested Builds:  Comprehensive Test-driven Development will address this, as both unit and integration tests will be executed on a developer’s local environment to ensure they all pass before submitting a merge request.  This does require automated functional testing be implemented in a way which is executable on local environments.
  6. Merge Conflicts:  This issue is best addressed by truly following Continuous Integration (as opposed to just using a Continuous Integration tool).  True Continuous Integration is achieved by merging into the main (often the ‘develop’) branch often.  Ideally, when following TDD, this is done after every change following which all tests pass.  The fewer the code changes involved in each merge, the less chance of a merge conflict occurring – and when they do occur, the easier it is to find and address each.  Merging only after a given user story is developed will continue to result in frequent merge conflicts.
  7. Environmental Disparities:  Following Continuous Delivery helps with the ‘Configuration’ part of this issue but fails to address the ‘Data’ part of the issue.  We didn’t address this during our session, but the best approach I have found for addressing this issue is implementation of Dynamic Test Data Providers in automated functional tests – this approach results in tests which will pass regardless of the state of the data in the environments.
  8. Poorly Written User Stories:  This is another impediment we didn’t address during our session.  However, the session “Evaluating User Story Readiness” may prove helpful in this regard, ensuring that every user story is highly cohesive and test ready.
  9. Development and Testing done in Isolation:  Having system development completed and merged separately from automated testing is addressed by practicing Comprehensive Test-driven Development along with Paired- (or Mob-) programming.
  10. Merge Request Review/Approval:  Having to wait for code changes to be reviewed and approved by individuals who are already busy developing features may seem an unavoidable impediment to Continuous Delivery.  However, by practicing Paired- or Mob-programming, code review occurs along with code development.  This is an often-overlooked acceleration resulting from the practice.
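As a minimal sketch of item 2 ("automate it in a simple script first"), the steps below build an ordered deploy sequence that can be reviewed, version-controlled, and dry-run before it ever touches a server.  The host names, artifact path, and commands are hypothetical placeholders, not a real pipeline.

```python
import subprocess

# Hypothetical deployment steps for a single staging host.
DEPLOY_STEPS = [
    ["scp", "build/app.tar.gz", "staging-host:/opt/app/"],
    ["ssh", "staging-host", "tar -xzf /opt/app/app.tar.gz -C /opt/app"],
    ["ssh", "staging-host", "systemctl restart app"],
]

def deploy(steps, dry_run=True):
    """Run each deployment step in order, stopping on the first failure.

    With dry_run=True the script only reports what it would run,
    which makes the deployment reviewable before automation is trusted.
    """
    executed = []
    for cmd in steps:
        if not dry_run:
            subprocess.run(cmd, check=True)  # raises on a non-zero exit code
        executed.append(" ".join(cmd))
    return executed

if __name__ == "__main__":
    for line in deploy(DEPLOY_STEPS):
        print("WOULD RUN:", line)
```

The value is not the script itself but the discipline: once every manual step lives in an ordered, executable list, replacing it with a proper deployment server later is a mechanical translation rather than archaeology.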

 

 


Evaluating User Story Readiness

An often-heard conversation during retrospectives:

Facilitator:  So, what could have been better?
Team Member:  The user stories were too big.
Product Owner:  I’m sorry.  I certainly want to improve that.  How could I improve that?
Team Member (after some thought):  I’m not sure, but they need to be smaller to get them done in a sprint.

The conversation may continue for some time. 

The challenge is that, even if the team has a reasonable Definition of Ready for each user story, they don’t seem to have an objective way to expose ways in which a user story could be sub-divided and still provide business value.

The following are five items which can be added to any Definition of User Story Readiness to help ensure each user story is highly cohesive, and testing complexity is considered.


 

  1. Specific User Story Phrase:  One indicator of a user story with low cohesion is that it lacks a proper user-story phrase, or the phrase is ambiguous (e.g. “Create a ‘Log In’ page”, or “As somebody, I want a ‘Log In’ page, so that I can use the site”).  When a user story is highly cohesive it will have a very specific user-story phrase (e.g. “As a customer, I want a ‘Log In’ page, so I am assured only I have access to my account”, or “As a Marketing Executive, I want user authorization, so that we can present appropriate content to each user”).

  2. Testable Acceptance Criteria:  Too often acceptance criteria are written along the lines of “It Works” (though in other words).  It is critical for each acceptance criterion to be testable: something for which you can clearly express a specific scenario, a specific action, and a specific expected result.  A helpful way to express this is BDD (Gherkin) language:  Given [a state, or data scenario], When [an action occurs or is taken], Then [a specific result occurs] (e.g. Given valid credentials for an existing user, When on the ‘Log In’ page the credentials are entered and the ‘Log In’ button is clicked, Then the ‘Landing’ page is displayed).

  3. Cohesive Acceptance Criteria:  The “Then” of each acceptance criterion describes the “Test Subject”.  For example:  In “Then the ‘Landing’ page is displayed”, the ‘Landing’ page is the test subject.  In “Then the user’s name is displayed in the ‘Landing’ page header”, the ‘Landing’ page is the test subject.  In “Then an auth token is generated by the auth server”, the auth token is the test subject.  Acceptance criteria are considered highly cohesive when all have the same test subject.  A way to improve cohesion is to extract the acceptance criteria for any other test subject into a different user story.

  4. Functional Test Plan:  The majority of functional tests can be extrapolated from each acceptance criterion.  The assumption can be made that there are multiple ‘Given’ states in which the ‘When’ action should produce the ‘Then’ result.  By moving the ‘Given’ statement to the end of the acceptance criterion (written in BDD language), every scenario which should produce the same result can be listed.

  5. Exploratory Test Plan:  Assuming all functional tests are automated, manual testers (QAs, BAs, etc.) can now practice Exploratory Testing (not “ad hoc” testing, but a formal approach like Session-Based Exploratory Testing), finding defects that are much more difficult (if not impossible) for computers to feasibly discover.
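Item 4 can be sketched in code: one When/Then, parameterized over several Given states.  The `log_in` function below is a hypothetical stand-in for the system under test; real functional tests would drive the actual application.

```python
# Stand-in system under test: returns the page displayed after a login attempt.
VALID_USERS = {"pat": "s3cret"}  # hypothetical user store

def log_in(username, password):
    if VALID_USERS.get(username) == password:
        return "Landing"
    return "Log In"  # remain on the login page

# "Then the 'Landing' page is displayed, When credentials are submitted,
# Given <state>" -- enumerate the Given states and their expected page:
scenarios = [
    ("existing user, correct credentials", ("pat", "s3cret"), "Landing"),
    ("existing user, wrong password",      ("pat", "nope"),   "Log In"),
    ("unknown user",                       ("alex", "s3cret"), "Log In"),
    ("blank credentials",                  ("", ""),           "Log In"),
]

for label, args, expected in scenarios:
    assert log_in(*args) == expected, label
print("all scenarios pass")
```

Listing the Given states this way makes the functional test plan fall directly out of the acceptance criterion, rather than being invented separately after development.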

 

 
