QA Postings - Page 7
|Earliest: November 26, 2017||Latest: September 21, 2021||Total: 163|
Testing Tool Blunders
She defined a "Blunder" as:
- Mistake caused by ignorance, carelessness or not thinking things through.
- People blunder when they don't see or understand.
The same thing could be applied to automation tools.
Automation Application Blunders
Over the past five years, I have evaluated various automation tools, and a pattern of feature blunders keeps recurring: the tools focus more on making automation easy to create than on supporting day-to-day automation work.
There's a lot more to the automation process. Here are a couple of blunders that I see over and over:
Not Debug-Friendly
Did you know that QA Engineers spend a lot of time debugging automation issues?
We spend a ton of time figuring out why an automation test case fails. Then we have to decide if it's an issue that needs to be reported to Engineering or if the test case needs to be fixed.
Automation Test Cases systems should make it easy to debug and update.
I have encountered some automation tools where you would have to "start from scratch" and completely rewrite the automation step to fix an issue.
Reusing Automation Functionality
Many test cases have similar functionality. Tests may share the same path to reach certain functionality before branching off to do something else. Wouldn't it be great to reuse that common code?
Wouldn't it be great if, when a code change was made (say, an XPath change), there was a single source file to update?
I have found many test applications where it's not possible to share code, so I have to make the same XPath change in multiple locations, which is very time-consuming.
Test automation systems should have the ability to reuse code. Even better is to allow XPath values to be defined in a global library so they can be used elsewhere.
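As a sketch of that global-library idea, here is a minimal Python example; the locator names and XPath values are made up for illustration:

```python
# locators.py - a single source of truth for XPath values.
# If an XPath changes, it is updated here and nowhere else.
# (These names and paths are hypothetical examples.)
LOCATORS = {
    "login_button": "//button[@id='login']",
    "search_field": "//input[@name='q']",
}

def xpath(name: str) -> str:
    """Look up a shared XPath by name, failing loudly on unknown names."""
    try:
        return LOCATORS[name]
    except KeyError:
        raise KeyError(f"No locator named {name!r}; add it to LOCATORS")
```

A test would then call something like `driver.find_element("xpath", xpath("login_button"))`, so an XPath change touches only one file.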
Ya, There Are More Blunders...
Those two are the ones that I see over and over again. I have found that automation tools are not always written with all of the common QA automation tasks in mind.
Sure, it's nice to make it easy to create an automation test case. But there's so much more to the whole process.
QA Memes by the Democratic Candidates
Tomorrow is Super Tuesday, and the Boston-area airwaves have been busy with all sorts of campaign ads. So in honor of the candidates running for President of the United States, here are some QA memes:
Senator Bernie Sanders - "I am once again asking Developers to test their code."
Mike Bloomberg - "This Patch is ridiculous."
Joe Biden - "Test Cases!"
Senator Elizabeth Warren - "There's something about this release I don't like."
Jira - Show Recently Commented Issues
Sometimes you may want to track comments you made in Jira Issues. This might be needed for a variety of reasons:
- Track issues that you worked on to record testing hours.
- Hold Engineering accountable for responding to issues you filed.
- Track team members' comments on projects.
Currently, there's no "easy" JQL query to search for specific information in the comment field. The only way to search Jira comments is to use the Dashboard Activity Filter search tool.
Tip: You can set up multiple gadgets on a Dashboard to perform different searches.
Options with the Activity Filter in Jira
Setting up the Tracking Comments on Specific Projects
Instructions on setting up the dashboard to view recent commented issues.
- Go to the Dashboard, and create a new personal Dashboard
- Click "Add a New Gadget"
- Search for "Activity Stream" - You'll need to click "Load all gadgets" to see it.
- Click "Add gadget" to the right, then click Close.
- Click "Add a filter" and define the project.
- Click the '+' and use the selector to choose "Username".
- Enter a login, such as yours.
- Then click the '+' again and use the selector to select "Activity".
- Enter "Comment" in the text box.
- Set the limit to 99 items.
- Click the Save Button.
Assuming everything is right, you should now have a Dashboard view of your activity stream.
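If the dashboard gadget isn't flexible enough, the same data can be pulled from Jira's REST API (GET /rest/api/2/issue/{key}/comment) and filtered in a short script. This is a sketch of the filtering step only, assuming the standard comment JSON shape; the HTTP fetch itself is omitted:

```python
def comments_by_author(payload: dict, author: str) -> list:
    """Return the bodies of comments whose author name matches `author`."""
    return [
        c["body"]
        for c in payload.get("comments", [])
        if c.get("author", {}).get("name") == author
    ]

# A trimmed-down example of the payload shape Jira returns:
sample = {
    "comments": [
        {"author": {"name": "qa.user"}, "body": "Retested on build 42 - fixed."},
        {"author": {"name": "dev.user"}, "body": "Patch merged."},
    ]
}
```

Run against each issue in a project, this gives the same "my recent comments" view without the gadget.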
QA Time Constraint
There are only so many hours in a test cycle. At any given company, there are always more tests than there is time to complete them. Obviously, every product manager wants their tests run at every release.
Yes, QA testing time is finite.
Five Things I Have Learned about QA Time Constraints
Here are some of the things I have learned over the years that have made managing short test cycles easier. Not all of these strategies apply to every situation (many come from different companies with different test cycles), but they can at least help QA move testing forward.
- Proper Planning Prevents Poor Performance - QA project leads should understand the major changes being made in the upcoming release to decide which areas the team should focus on. For example, if many changes were done to support mobile, then less testing probably needs to be done on print actions.
- Acceptance Testing - Have a checklist of key functionality that should be tested in every release; these items should always be working.
- Automation Rocks - We all know the value of automated testing: it can certainly help QA test more in a short amount of time. The downside is that QA needs to monitor the results for false positives and false negatives. Manual testers may miss some "obvious" bugs because they are rushing through a test; automation can run a test 1,000 times the same way and won't miss something because the tester was rushing.
- Find Bugs as Early as Possible - Use critical-path testing to find bugs early in the test cycle. Work with the Product and Marketing teams to identify all the critical paths through the various products and services. The earlier you find blocker/critical bugs, the better it is for everyone.
- Checklist - Having a physical checklist helps make sure that the product is well tested. It helps when you have to balance testing with offshore teams and need to confirm that everything has been touched. I have found a checklist is a good way to track that testing is being done.
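The checklist idea in the last item can be as simple as a few lines of Python; the checklist entries here are placeholders:

```python
# A release-cycle checklist tracker (items are illustrative placeholders).
CHECKLIST = ["login", "search", "checkout", "print actions"]

def remaining(done: set) -> list:
    """Return checklist items that have not been covered this cycle."""
    return [item for item in CHECKLIST if item not in done]
```

Running `remaining({"login", "search"})` shows at a glance which areas still need to be touched before the release.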
Sometimes bugs are missed by QA because they are unique; they can be hard to find during normal regression testing.
Here are a couple of examples of bugs making it to Production due to complexity of testing:
There was a bug that only occurred for French users of a website that I was testing. The bug occurred because the translated text was too long and as a result, a dialog action button was not visible.
QA missed this in testing because the testing focus was on English. QA wasn't notified that this particular dialog box was being translated into French, which wasn't an issue at the time since text content wasn't part of QA testing.
JBoss Installation Instructions
At another company I worked at, I was responsible for software QA while also doing some installs and training for customers. The company's developers put together documentation on how customers could install the product. The problem was that the document only covered a particular set of customers, and several customers started complaining about the inaccuracy of the documentation.
QA missed this because the document worked for many customers. There were no major changes in the application that would have required testing the installation process against the customer instructions. What happened was that the sales team started selling the product to a different set of customers, which required QA to recheck the documentation.
I then walked through the installation with a couple of customers, updating the document along the way to make sure it was clear for their particular environments and that the terminology matched the audience doing the install.
Five Things I Learned to Handle Future Situations
While the above are very specific examples, there are many more similar bug patterns that I have seen over my many years of QA testing. Here are some things that I have learned:
- It doesn't hurt, every once in a while, to take a step back and manually go through the sales flow of the application. Are things working as they should? How does the product look to new customers?
- Work with developers to get QA tools that help with testing. The French problem was solved by adding a special URL query for QA to force the page to load with a particular translation. This tool makes it easy to test the key languages when major changes happen in the application, and it makes it easy for automation to test button visibility against various languages.
- Review the code changes. It doesn't hurt to check out the code review to see what has changed. Many times I have found that a code change was made without thinking of other consequences. For example, what happens if customers use non-traditional UTF-8 characters?
- Learn from the bugs that escape QA. One of my weekly tasks is to review the causes of customer-reported issues and see how they were missed.
- Learn new QA tools. There's always something new to learn in QA: a new Chrome extension, a jQuery tip, a database query, or a security lockdown.
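The translation-override tool described above can be sketched as a small URL helper. The `qa_lang` parameter name is an assumption for illustration, not the actual tool:

```python
from urllib.parse import urlencode, urlparse, parse_qsl, urlunparse

def with_forced_language(url: str, lang: str) -> str:
    """Add a QA-only query parameter that forces the page translation.

    (The 'qa_lang' parameter name is hypothetical; the real tool would
    use whatever parameter the developers wired up.)
    """
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query["qa_lang"] = lang
    return urlunparse(parts._replace(query=urlencode(query)))
```

Automation can then loop over the key languages and check button visibility on each forced-translation page.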
How Bugs Escape
There are many reasons why bugs are missed by QA and eventually make it to Production. Here are some of the reasons that I have encountered over the years in various QA roles.
Sometimes a bug only occurs in a complex situation. These are hard for QA to detect or identify.
For example: If you have 99 items in your cart and your language is French, the shopping cart page crashes.
Realistically, these aren't going to be found by QA unaided. With a little help from developers, QA can formulate test strategies based on code changes, such as learning more about a third-party library or the application's support for Unicode.
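One way to make complex combinations like the cart example testable is to enumerate them explicitly; a sketch with illustrative boundary values:

```python
from itertools import product

# Boundary cart sizes and supported languages worth pairing up.
# (Values are illustrative; pick them from real code-change risk areas.)
cart_sizes = [0, 1, 98, 99, 100]   # includes the suspicious 99 boundary
languages = ["en", "fr", "de"]

def test_matrix() -> list:
    """Every (cart size, language) pair to run through the cart page."""
    return list(product(cart_sizes, languages))
```

Feeding this matrix to an automation run guarantees the 99-items-in-French case is exercised rather than left to chance.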
QA testing time is finite. There's always too little time to test every possible situation.
This is why test strategies are so important. During downtime, the QA team should get together and audit regression testing, focusing on which risk areas QA should concentrate on.
This is why developers' testing steps are important. QA should get a "heads up" on what to test.
Stale Test Case Repository
Bugs may escape QA because the manual test case repository might be stale. How often are you looking at manual regression tickets? Are they updated with the latest design changes? Are QA testers actually following through the steps?
At previous companies I have worked at, we reviewed the manual regression steps with key stakeholders on a regular basis. How often? Usually when there's a big design or code change. It's a good way to get input on how QA is testing the product.
Don't take manual testing for granted. Make sure to review manual test cases on a regular basis, especially if a lot of bugs are making it into production.
A good QA manager once told me, "If an automation test case passes for several releases, it should be audited. The test may not be challenging the code enough to be useful as part of regression."
In the month of February, I'll explore more into how Bugs escape QA testing. I'll talk about various strategies that I have found to work well to combat these bugs.
Jira Board Shortcuts for QA
Atlassian Jira is being used by more and more engineering teams. Its flexibility and standard tools make it easy for teams to track tickets and sprint progress.
During Sprint planning, it's very helpful to know some shortcuts to help make navigation easier.
Four Shortcuts to Know
There are four Sprint Board shortcuts to know. Simply press the key while you're on the sprint board.
|Backlog:||1|
|Active sprints/Kanban board:||2|
|Reports:||3|
|Dock/undock the filters panel:||[|
The last shortcut, "Dock/undock the filters panel," hides the left gray bar, which is not used while you're looking at a particular sprint board view. By undocking the filters panel, you get that valuable real estate back.
New QA Memes
Here are some original QA memes that I came up with. These are just common occurrences that happen in QA. In particular, QA responding to conversations in Slack.
All of these images are Slack "ready" and will show up embedded in chat conversations.
These will also work as Jira comments too.
Check out the Library
These and all QA images are in the QA Graphic Library.
Human Testing better than Automation
Recently Ministry of Testing asked:
Based on many years of testing, here are four things that human testers are good at. Automation is a great tool, but for these items it's no match.
Four Ways Human Testing is Better than Automation
Exploratory Testing - Looking for ways to break functionality is best done by human testers. People don't always use the conventional path when using a website. Exploratory testing by humans can find unique bugs.
Root-Cause Debugging - Humans can use all sorts of methods to discover why a bug might happen: the Chrome Console, log files, and visual logic all help to better understand the root cause of a bug.
New Product/Feature Testing - It's better to perform manual testing when a product or feature is new. The feature could go through numerous changes, so it's best to start with human testing before investing in automation time.
Third-Party Tool Integration - Using third-party tools that require logins. There's a chance that third-party companies could make changes that break the automation flow (such as changing IDs or layouts). Human testing can work around any complexity that third-party websites have.
Apple Numbers (QA Fail)
Apple makes things easy, which is why I like using it as my computer platform. Just about everything is easier to do on a Macintosh.
However, they seriously failed with making graphs in Numbers.
On Christmas Day, I was trying to create a chart of how early my daughter would wake up Christmas morning.
I was struggling to generate a chart from a bunch of data. Basically, it was a spreadsheet with years in one row and times in a second row.
Here's a sample shot of the data:
When I clicked on the "Insert Chart", this is what I got:
Looks like I'll have to do a lot of chart manipulation to make this work. (I tried putting the data in column view and got the same results.)
I put the exact same data into a Google Sheets spreadsheet, selected the fields, clicked "Insert Chart," and got a perfectly matched chart:
What I Learned
Apparently, if I need to create a quick chart, the way to go is Google Sheets (or Excel). Apple Numbers isn't all that user-friendly when it comes to creating charts.
Apple should make it simple to create charts; they do have some unique layouts and styles that aren't available in other applications. I shouldn't have to be a chart master to make it work, especially when the data is simple.