Works on My Machine

by Bekki Freeman, on 4/4/18

Note: Test engineers Reena Kuni and Jeannette Smith also contributed to this blog.

“Works on my machine.” Yes, I’ve said it. Yes, I’ve groaned at myself as soon as it came out. I don’t intend it to be defensive; I just literally and innocently mean it as shorthand for, “I can’t reproduce this yet.” Now that we’ve finished another release cycle for Eggplant Functional 18.0, we thought we’d revisit and share some best practices for bug fixing between QA and Dev.

Make Bug Reports as Self-Explanatory as Possible

Whether you’re Dev or QA, include the following information in your bug reports:

  • Steps to reproduce: Go further back than you might think — even 10 steps back — all the way to what you were doing just beforehand.
  • State of the app: Provide information about variable watcher status, which panels were open, any non-default preferences set, etc.
  • Screenshots or videos: A picture is worth a thousand words.  
  • Sample script or suite: Add what was used when the issue occurred and the SUT IP address to use for reproducing.
  • Any log files, crash reports, or stack traces.
  • Build number, OS of the Eggplant Functional host machine, type of connection to SUT (where relevant). Does the bug occur on other OSs or just this one?
  • All places where this issue could be encountered. Specific to Eggplant Functional, a feature could be used on several panels, accessed from the command line, and accessed from different types of tests and scripts.
  • Whether it's a new issue or one that's been around in previous releases.
  • Whether it occurs in similar features, such as different connection types.
  • Comments on the bug that are unambiguous and easy to read. Readers, including future you, will appreciate how clear your comment/description is.

When Dev Cannot Reproduce

Bust through the wall between QA and Dev. You'll solve things faster if you just go talk to them. Developers should work with QA on troubleshooting: watch a QA team member reproduce the issue on his or her machine, try to narrow down which section of code the issue might be in, and comb through that code directly. Look for race conditions or deadlocks in the area of code where the issue is found, and analyze which threads access the method or object. Add logging messages in likely areas to capture objects, values, etc., and include them in a production build to get more tracking information on the QA machine.

Dear Dev: Candid Notes From QA

One of the best things Dev can do for QA is provide clear examples in feature and bug descriptions. In addition, code or SenseTalk sample syntax and examples are so important to help QA understand how something should work.
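For instance, a bug or feature note might include a short snippet like this. The commands are standard SenseTalk, but the SUT address and image names here are made up purely for illustration:

```
-- Hypothetical sample for a bug note: verify the panel opens after the fix
Connect "10.1.1.50"            -- SUT address from the bug report (illustrative)
Click "SettingsButton"         -- image name is hypothetical
WaitFor 8, "PreferencesPanel"  -- fail if the panel doesn't appear within 8 seconds
log "Preferences panel appeared as expected"
```

Even a sketch this small tells QA which panel is affected, which images to capture, and what "working" looks like.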

We always hope to get the release candidate early so we have enough time to ensure a solid, bug-free release. Slipping a fix or two in the release candidate at the last minute repeatedly proves to be costly in terms of QA time and risk to quality. In the last two releases, we’ve actually been bitten by the desire to slip in what seemed like low-risk, last-minute changes.

Typically, the morning after the release is more relaxed for QA. But we still need to verify that the released build of Eggplant Functional downloads successfully from the website and operates as expected. We’re also holding our breath in anticipation, hoping that customers don’t find any bugs that we didn’t catch.

“Why didn’t you find that bug?” is the most dreaded question for QA. The only things we can do are determine in what feature the bug exists, evaluate our tests to determine why we missed it, and take steps to ensure it’s covered in the future. While we’re disappointed, we have to accept it and learn from our mistakes. Root cause analysis can help determine why the bug was there and why we didn’t catch it, as well as help us find patterns and risky areas that should be tested more thoroughly, added to our regression suite, and given extra development attention going forward.

Failed testing is the most uncomfortable status for QA. We’re a very collaborative team, but QA still hesitates to put a fix into failed testing. When that happens, we all need to remember that it’s not personal and that we’re working together to improve the product. While failed testing can be frustrating for a Dev, too, it just means that QA has been exceptionally thorough in preventing an oversight on my end from reaching the customer.

QA’s Bug Fix Wishlist

Here are some key things developers can do to increase the success of a bug fix:

  • Do some basic testing before committing — happy path and a few sad path choices or edge cases. Ask QA for ideas if you aren’t sure what to test.
  • Test both creating and editing of the feature/bug in question, e.g., scripts, suites, images, connections, etc.
  • If the change has risks for other platforms, test other platforms as well, before committing.
  • Read the bug description completely before implementing, and don’t leave out any part of the fix.
  • Work on high-impact features/bugs early to help identify regression changes more quickly.
  • Start working on behavioral changes to an existing feature early, so it can be tested thoroughly and documented on time.
  • Communicate any known issues with a bug fix to QA before they begin testing.
  • Describe any ancillary risks of a bug fix upfront so that QA can identify other areas to test for regressions.
  • Provide SenseTalk examples, especially on new features, that QA can use to learn the feature and more quickly develop tests.
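
To make the “happy path plus a few sad paths” idea concrete, here’s a sketch in SenseTalk. The addresses are hypothetical; the point is to exercise both the expected flow and a failure mode before committing:

```
-- Happy path: a valid connection should succeed
Connect "10.1.1.50"            -- reachable SUT address (illustrative)
log "Connected to SUT"

-- Sad path: a bad address should raise a catchable exception
try
    Connect "256.0.0.1"        -- deliberately invalid address
catch theException
    log "Connection failed as expected: " & theException
end try
```

A pass on both paths isn’t a full regression run, but it catches the obvious breakage before the fix ever reaches QA.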

On the Eggplant Functional team, we try to be very respectful of each other’s time, knowing that we always have an ambitious deadline for releases. The due diligence we describe here highlights ways we’ve been able to improve our efficiency and quality. We maintain an open and collaborative environment and prefer to avoid the mentality of tossing fixes and bugs over the wall. We can joke about rejected bugs and failed testing fixes because we work so closely together toward the same goal.

What best practices have you found that work in your organization? What do your devs do that drives you bonkers or makes you smile — or vice versa? Let us know in the comments, or by participating in our forums on our brand-new Community page.