Quality Assurance Image Library
This is my carefully curated collection of Slack images, designed to perfectly capture those unique QA moments. Whether it's celebrating a successful test run, expressing the frustration of debugging, or simply adding humor to your team's chat, these images are here to help you communicate with personality and style.
QA Memes (October)
Here's another collection of QA memes. Some of these require having done QA for a while, but most of them you should understand.
Meme
Best Release Ever
Hello, QA Wants to go Home!
Why can't developers actually test their code?
QA up at Night
A common question that CEOs get asked is:
What Keeps you Up at Night?
The response usually centers around new projects and uncontrolled risks to the company bottom line.
So, what about QA? What issues are keeping QA up at night? I can't speak for all of QA, but here are the top six things that keep QA Managers and Leaders up at night.
Six Things that keep QA up at Night
Ticket Scope Creep - Product makes last-minute changes without adding them to the spec document. This can cause some unfortunate consequences later. For example, the product team didn't realize that an extra line in a description field pushes the next button below the visible frame.
Tight Deadlines - Sometimes a feature has to go to Production with very minimal time for proper testing. In these rare cases, QA has very little time to fully test the feature. Example: Product wants to ship a new feature in time for a customer event, and unfortunate delays in development mean a shorter test cycle.
Dev Environment not Matching Production - Some testing can't be accomplished because the testing environment doesn't match production. Most of the time it's load balancing and caching that cause issues. Example: A new customer login path doesn't take into account having different servers. QA passes the feature, but the release is rolled back because users aren't able to log in.
Developers that don't test their code - Some developers don't test their code before handing it off to QA. They feel that when it passes code review, it's good enough. Unfortunately, code review mostly checks logic, not how the change impacts the rest of the code. Example: A developer submits code for QA and the build fails because the developer forgot to properly close a SQL insert statement.
Dealing with the Cash Register - Making the sale process smooth is critical to any business. QA needs to make sure that customers can buy and that the sale occurs correctly for the customer and for the company. It's important that customers are properly charged for the goods and services that they order. Example: At one of the companies that I worked at, a team of QA engineers was responsible for making sure that purchases were successful. They were trained to understand various tax rules and security regulations. Not all companies have the luxury of a skilled team like that, so QA has to do their best to make sure that the sales process works correctly after every release.
Automation Failures due to UI changes - When a developer makes a UI change and doesn't tell QA, it can cause some issues after the first automation run. The overnight tests may fail, and QA will spend much of the morning fixing all the failed runs. This can cause other bugs to go undetected for a while. Example: A developer changes some of the IDs in the main navigation to comply with the new CSS standards. The change is considered minor and no ticket is created. On the first day of testing, 90% of the automation fails because the old IDs can't be found.
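One way to soften this kind of breakage (not from the original post, just a sketch): have the automation prefer stable data-test attributes and fall back to the legacy CSS IDs. The URL, attribute names, and IDs below are hypothetical; it assumes Selenium 4 and a local Chrome install.

```python
# Sketch: prefer stable data-test attributes over raw CSS IDs so a CSS
# refactor doesn't break every locator at once. All names are illustrative.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder URL

def find_nav_item(test_id, legacy_id):
    """Look up a nav element by its data-test attribute first, falling back
    to the legacy CSS id so one refactor doesn't fail the whole suite."""
    try:
        return driver.find_element(By.CSS_SELECTOR, f'[data-test="{test_id}"]')
    except NoSuchElementException:
        return driver.find_element(By.ID, legacy_id)

home_link = find_nav_item("main-nav-home", "navHome")  # hypothetical ids
home_link.click()
driver.quit()
```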
Agree/Disagree?
What do you think? Are there other cases that cause you to lose sleep?
Share your story in the comment section.
Winnemucca, Nevada
Recently I was doing some location testing using Google. The purpose of the test was to find a place in the United States where I could expand the targeting radius beyond 50 miles.
When you use Google targeting, you are allowed to expand that radius only when your location has a low population density.
If your range has a high population count, Google will force you to use a smaller target.
I tried many remote areas of the United States, and the only place where I could expand the target range to 100 miles was Winnemucca, Nevada.
I am sure that there are places in Alaska that would qualify - but I wanted to test for a place in the lower 48 states.
Hopefully this helps someone else doing location-targeting testing using Google Locations.
Code Freeze Meme
It's been a while since I added QA graphics to the QA library. Here are some more images to add to my collection.
This week's theme is "Code Freeze"
Quality Logo
Since today is Labor Day, I decided to go easy on today's post and highlight some QA logos that might be fun for presentations or a Notion header.
Typewriter
Counter Dial
I wish I knew more about...
Today's blog post is all about the Ministry of Testing blog challenge for September.
Write a blog on the topic "I wish I knew more about..." before September 17th
My answer is simple:
I wish I knew more about Quality Chrome tools that can help me be a better tester.
I feel like there are a bunch of Chrome plugins that are out there that I should know about.
Sure, I use some of the popular Chrome tools such as ColorZilla, JunkFill, Bug Magnet, Fake Filler, and Page Ruler Redux.
Now What?
The next time I have downtime, I'll spend a few minutes searching the Chrome Web Store for extensions that might be useful.
Some search queries that I can think of:
- Random Data insert
- HTML5 Checker
- Cookie tester
- JQuery test
- Screenshot - always looking for the latest screenshot tools.
Fuzz Testing
Fuzz testing (fuzzing) is a quality assurance technique used to discover coding errors and security loopholes in software, operating systems, or networks. It involves inputting massive amounts of random data, called fuzz, to the test subject in an attempt to make it crash.
This is usually a technique done with automation to see how fields respond to random text and interactions.
Manual testers may want to use Bug Magnet, a popular Chrome extension, to add random data to fields. That is more exploratory testing than fuzz testing; fuzz testing is focused on how the software reacts to a huge set of random data being entered.
Fuzzing is a critical part of testing, as it checks for potential vulnerabilities in the application and its logic.
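As a rough illustration of the idea (not a production fuzzer), here is a minimal Python loop that throws random printable strings at a hypothetical parse_order() function and records anything that crashes it. Real fuzzers such as AFL, libFuzzer, or Atheris are coverage-guided and far more effective.

```python
# Minimal fuzzing sketch: feed random strings to the code under test and
# treat any unhandled exception as a finding. parse_order() is a stand-in.
import random
import string

def parse_order(raw):
    # Hypothetical function under test.
    return raw.strip().split(",")

def random_fuzz(iterations=10_000, max_len=200):
    crashes = []
    for _ in range(iterations):
        fuzz = "".join(random.choices(string.printable,
                                      k=random.randint(0, max_len)))
        try:
            parse_order(fuzz)
        except Exception as exc:  # any crash is worth a bug report
            crashes.append((fuzz, repr(exc)))
    return crashes

if __name__ == "__main__":
    findings = random_fuzz()
    print(f"{len(findings)} crashing inputs found")
```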
You can learn more about fuzzing at the Open Web Application Security Project (OWASP) Foundation website.
Early Evaluation
In QA, it's good to not have any bad surprises. This is especially true around code freeze. Nobody wants to encounter blockers or critical issues that may impact the ability to release on time.
This is why many companies implement an Early Evaluation of releases. It's a chance to see where the release branch is, and if there are any issues.
Basic Definition of Early Evaluation
Early Evaluation is when QA builds the latest version of the release branch on a dev box a few hours before code freeze. If the build is successful, QA runs a suite of Acceptance Tests to see if the release branch is stable. If there are any blocking issues, QA notifies the developers.
Main Purposes
There are three main components of the Early Evaluation:
Is the Release Branch really Stable - This is the time to find out about build issues, not at midnight or during off-hours when code freeze happens and key developers may not be near their computers to help out.
Are there any Database Migrator issues? - Some developers may not check for migrator conflicts when merging their code in. Checking for migrator errors can help determine whether certain functionality is working as expected.
Acceptance Testing - The QA team should have a suite of manual acceptance tests (specific tests that QA has defined as critical for sign-off). QA should be checking for any blocking issues. This is also a good time to run automated acceptance tests. If there are any automation failures, QA should check to make sure they're not related to the release's intended changes. Fixing these now will help with overnight automation runs.
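As a sketch of what an automated acceptance check might look like during the Early Evaluation window (the base URL, endpoints, and pytest marker below are assumptions, not part of the original process), something like this can be pointed at the dev box a few hours before code freeze:

```python
# Sketch of automated acceptance checks for the Early Evaluation run.
# Requires pytest and requests; register the "acceptance" marker in pytest.ini
# to silence unknown-marker warnings. URLs and endpoints are hypothetical.
import pytest
import requests

BASE_URL = "https://release-candidate.example.com"  # hypothetical dev box

@pytest.mark.acceptance
def test_home_page_is_up():
    resp = requests.get(f"{BASE_URL}/", timeout=10)
    assert resp.status_code == 200

@pytest.mark.acceptance
def test_login_rejects_bad_credentials():
    resp = requests.post(f"{BASE_URL}/api/login",
                         json={"user": "qa", "password": "wrong"},
                         timeout=10)
    assert resp.status_code in (401, 403)

# Run only the acceptance subset before code freeze:
#   pytest -m acceptance
```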
Worth the Time and Effort
Code Freeze day can be crazy, but it's important to take a break and test the release as part of the early evaluation. In the past, a lot of good bugs have been found this way before code freeze, saving a lot of time during code freeze itself.
Google Lighthouse
Google's Lighthouse is a useful tool to test the performance of any website. It reports load times and suggests ways to improve the site.
It's built into every Chrome browser: open Developer Tools (Command-Option-I on macOS) and click Lighthouse in the top menu bar.
Official Product Description
Lighthouse is an open-source, automated tool for improving the quality of web pages. You can run it against any web page, public or requiring authentication. It has audits for performance, accessibility, progressive web apps, SEO and more.
You can run Lighthouse in Chrome DevTools, from the command line, or as a Node module. You give Lighthouse a URL to audit, it runs a series of audits against the page, and then it generates a report on how well the page did. From there, use the failing audits as indicators on how to improve the page. Each audit has a reference doc explaining why the audit is important, as well as how to fix it.
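For QA scripting, the command-line option is handy. Below is a minimal sketch that shells out to the Lighthouse CLI and prints the category scores; it assumes Node and Lighthouse are installed (npm install -g lighthouse) and uses a placeholder URL.

```python
# Sketch: drive the Lighthouse CLI from a QA script and read back the scores.
# Assumes the "lighthouse" command is on PATH; the URL is a placeholder.
import json
import subprocess

def run_lighthouse(url, out_path="lighthouse-report.json"):
    subprocess.run(
        ["lighthouse", url,
         "--output=json",
         f"--output-path={out_path}",
         "--chrome-flags=--headless"],
        check=True,
    )
    with open(out_path) as fh:
        report = json.load(fh)
    # Lighthouse reports category scores on a 0-1 scale.
    for name, category in report["categories"].items():
        print(f"{name}: {category['score']}")

run_lighthouse("https://example.com")
```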
Useful for QA?
This is a useful tool for checking for broken images that may go unnoticed during testing, especially since the test is run from another server outside of the VPN.
In addition, the "properly size" images can highlight any images that may cause slowness in load times - particularly important for mobile users.
It's also useful for QA to test login gating. Since some portions of the site may require a login, you can use this tool to see how much of the site is exposed without logging in.
It's easy to run, and the near-instant results make it a quick tool to assist QA with general page testing.
The Lighthouse team is constantly adding new audit metrics based on industry best practices.
Ask QA: Should You Always Report Bugs?
Someone recently asked me:
Should you keep raising bugs that you know won't get fixed? Isn't it a waste of people's time during ticket triage to look at very minor issues?
Answer: First of all, QA should not be looking for reasons to NOT create bug reports. QA should file tickets for every bug that they encounter. Then they should assign the right priority to that issue. It's up to Product/Dev to decide on how to triage the minor issue.
Tickets should be created for every issue. Here are some reasons why:
Sometimes they get fixed: In some instances, I have seen developers fix multiple issues in a common code base just because they have the file open and can make the modifications.
It's a test of a good QA Team - Engineers and Product will see that QA is active and finding things. It shows that QA is looking for issues and reporting them. If customers are reporting issues that QA isn't finding, that could call into question how good the QA team is.
Bug Patterns - I have found that finding little bugs can lead to finding the bigger bugs. When developers are sloppy in some areas, it's a sign that they may be sloppy in others. So when I find a minor issue, it makes me stop and think: is there another issue here that may not be so obvious?
Maybe the Feature isn't Ready, yet - If QA finds a lot of minor issues, it might be something that makes the product team sit back and take notice. Should a new feature/product really go to market with so many minor issues? If they aren't addressed now, when will they be?
Don't be Discouraged
Minor tickets can be frustrating to find and report - even if QA knows that no one will read the full bug. However, there may come a time when the bug will get fixed, so it's important to report the issue.
Some Pointers
- Don't spend too much time writing up the minor tickets. Just do the bare minimum on reporting. If the Dev team needs clarification, you can always go back and add detail to the report.
- Generate a report of the minor bugs. If you do daily or weekly QA reports, make sure to highlight the minor bugs found. This will help highlight the issues that QA has found. There's strength in numbers.
- Combine Issues - When you have some downtime, revisit some of the old minor issues and see if it makes sense to combine several minor issues into a single ticket - which may make the new ticket a higher priority.
About
Welcome to QA!
The purpose of these blog posts is to provide comprehensive insights into Software Quality Assurance testing, addressing everything you ever wanted to know but were afraid to ask.
These posts will cover topics such as the fundamentals of Software Quality Assurance testing, creating test plans, designing test cases, and developing automated tests. Additionally, they will explore best practices for testing and offer tips and tricks to make the process more efficient and effective.
Check out all the Blog Posts.
Blog Schedule
| Day | Topic |
| --- | --- |
| Thursday | BBEdit |
| Friday | Macintosh |
| Saturday | Internet Tools |
| Sunday | Open Topic |
| Monday | Media Monday |
| Tuesday | QA |
| Wednesday | Affinity |
Other Posts
- Test Cases Specs
- The Best 2019 QA Posts
- Fixed some Classic Posts
- Location Guard
- Design By Contract
- QA Memes (October)
- QA Fail: U-Haul Car Trailer
- November Release Memes
- PostgreSQL Quick Cheat Sheet
- Hide That Bookmark Bar
- Making Evidence Base Decisions
- Applescript for Chrome
- Stealth Mode Deployment
- Too Many Cooks in the Kitchen
- XPath Validation