Blog Listing

Quality Assurance Image Library

This is my carefully curated collection of Slack images, designed to perfectly capture those unique QA moments. Whether it's celebrating a successful test run, expressing the frustration of debugging, or simply adding humor to your team's chat, these images are here to help you communicate with personality and style.

December 17, 2024

Not All Bugs Are Worth Fixing: Why Some Are Left Behind

As a Quality Assurance (QA) professional, you may have encountered the disheartening situation where bugs you identified and reported were ultimately left unfixed. It can feel like your hard work is undervalued or that you're failing to uphold the quality standards expected of you. But here's the truth: not all bugs are worth fixing.

This concept is not a failure of QA but rather a strategic decision that balances the cost of fixing a bug against the value it brings to the product or the user experience.

The Cost-Benefit Analysis of Bugs

When deciding whether to fix a bug, Product/Dev teams consider factors like:

  • Severity: How significantly does the bug impact the user experience or functionality?
  • Frequency: How often is the bug encountered by users?
  • Cost to Fix: How much time and resources are required to resolve it?
  • Risk of Fix: Could fixing the bug introduce new issues?
  • Deadlines: Is there sufficient time to address the bug without delaying the release?

For example, a minor cosmetic issue on a low-traffic page might not be prioritized if the team is racing against a critical deadline. Fixing such a bug could consume resources better spent addressing more impactful issues.

Reporting Bugs Still Matters

Even if a bug is unlikely to be fixed, QA's role in identifying and reporting it is still critical. Reporting ensures:

  1. Documentation: Bugs are logged and available for review later, potentially in a less time-sensitive release cycle.
  2. Trend Analysis: Patterns of similar bugs could highlight a deeper issue in the codebase.
  3. Awareness: Developers and stakeholders have visibility into the product's imperfections, enabling informed decision-making.

It's important to understand that the decision to leave a bug unfixed is often made by weighing business priorities and resource constraints. This doesn't diminish the value of your work as a QA.

Shifting the Perspective

QA teams can sometimes become frustrated when their bug reports are not acted upon. During my time leading QA discussions, I've found it's helpful to remind teams that:

  • Your job is to find the bugs. Fixing them is a separate process driven by different priorities.
  • You are protecting the customer. By finding bugs before customers do, you ensure a better experience even if every issue isn't resolved.
  • Bugs are like insurance policies. Even when bugs aren't fixed, their documentation can serve as a reference in future discussions about code quality and prioritization.

Why It's OK for Some Bugs to Stay

It's perfectly OK that not every bug gets fixed. Software development operates within constraints, and addressing every issue simply isn't feasible. But when QA does its job thoroughly, the team has all the information needed to make the best decisions for the product and the users.

At the end of the day, QA's mission is clear: find the bugs before customers do. Even the smallest bugs are worth reporting because their discovery reflects the thoroughness and dedication of your team.

So the next time a bug you reported is left behind, don't be discouraged. Celebrate the fact that you found it first. That alone is a win for quality.

December 10, 2024

Playwright URL Scraping

While experimenting with Playwright this week, I put together a script that grabs all the URLs from a website and writes them to a file. Here's the approach I finally came up with.
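
A minimal sketch along these lines, assuming Playwright's Node.js library, a placeholder homepage URL, and an output file named urls.txt:

// scrape-urls.js - collect every anchor href on the homepage and write the list to a file.
// The target URL and output path are placeholders; adjust them for your own site.
const { chromium } = require('playwright');
const fs = require('fs');

(async () => {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com');

  // Pull the resolved href from every anchor tag on the page.
  const urls = await page.$$eval('a[href]', (anchors) => anchors.map((a) => a.href));

  // De-duplicate and write one URL per line.
  fs.writeFileSync('urls.txt', [...new Set(urls)].join('\n'));

  await browser.close();
})();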

This approach is particularly useful when you need to ensure that all the anchor tags on the homepage are functioning as expected. By verifying the anchor tags separately, you can isolate any issues related to broken or misconfigured links, making it easier to pinpoint and address problems.

Additionally, I'll create another test specifically to validate that the URLs associated with these anchor tags are correct. This two-pronged strategy ensures that both the structure and the destinations of your links are accurate.

Pro Tip: The reason for separating these tasks, instead of validating the URLs while scraping the homepage, is to enhance the efficiency of your test execution. By dividing the workload into smaller, targeted tests, you can leverage parallel execution to speed up the overall testing process. This approach not only reduces the total runtime of your test suite but also provides clearer insights into potential issues, allowing you to debug faster and more effectively.
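
For the follow-up link check, a sketch using Playwright Test's request fixture might look like the following (urls.txt is the file produced by the scrape above, and treating anything below a 400 status as "working" is just an assumption):

// link-check.spec.js - verify that each scraped URL responds without an error status.
const { test, expect } = require('@playwright/test');
const fs = require('fs');

const urls = fs.readFileSync('urls.txt', 'utf8').split('\n').filter(Boolean);

for (const url of urls) {
  test(`responds OK: ${url}`, async ({ request }) => {
    // A plain GET is enough to confirm the destination exists and is reachable.
    const response = await request.get(url);
    expect(response.status()).toBeLessThan(400);
  });
}

Because each URL gets its own test, Playwright can spread the checks across parallel workers, which is exactly the speedup described above.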

December 3, 2024

Five Answers QA Gives to Dev When Testing Changes


As QA testers, we play a pivotal role in ensuring that the software our developers create meets the highest standards of quality and functionality. During the testing phase, developers often approach us eagerly (or nervously) to ask, "Did it pass?" Over the years, I've found that our responses typically fall into five distinct categories. Each answer tells its own story about the state of the code and what's next.

Here's a light-hearted but insightful look at the five answers QA gives to developers when we test changes:


1. "No, there are issues."

This is the classic QA response, and often the most dreaded. When we give this answer, it means we've encountered bugs or inconsistencies that need to be addressed before the code is ready.

Why we say it:
Our role is to be the gatekeeper of quality. We don't stop at saying "No"; we provide detailed feedback, logs, and replication steps so developers can tackle the issues efficiently.

What Devs Should Know:
A "No" from QA isn't a personal critique?it's an opportunity to refine and improve the product.


2. "Yes, but you'll have to wait until bugs are fixed."

This answer signals that the primary functionality works as expected, but there are secondary issues or edge cases that still need attention. While the fix might not be critical to the current release, it's worth addressing.

Why we say it:
We want to ensure you're aware of minor bugs that could grow into larger problems later. Testing is about more than checking boxes; it's about foresight.

What Devs Should Know:
This "Yes, but" approach keeps progress moving while acknowledging the need for future iteration.


3. "Yes, but not what you expected."

This one stings a bit for everyone involved. When we say this, the code works, but the outcome deviates from the intended functionality or doesn't align with user stories or design specs.

Why we say it:
Sometimes, code technically "works" but misses the mark in terms of business requirements or user experience. QA looks beyond the "happy path" to ensure the product aligns with the vision.

What Devs Should Know:
Think of this as a second chance to revisit the user story or refine the implementation. Collaboration between QA, Dev, and Product at this stage is key to success.


4. "Yes, and here's more!"

This is the QA equivalent of a standing ovation. Not only does the code pass testing, but we've also discovered unexpected benefits, optimizations, or overlooked strengths.

Why we say it:
We want to celebrate wins with you! Maybe the new feature performs better than anticipated, or we found ways to leverage it beyond the original scope.

What Devs Should Know:
Cherish these moments. They're proof that all your hard work is paying off, and QA noticed.


5. "Yes, I thought you'd never ask."

This answer is delivered with a mix of relief and satisfaction. It means the code is perfect, or as close as it gets. All scenarios passed, edge cases handled, and performance met expectations.

Why we say it:
This is our way of telling you: "Well done!" A flawless release is rare, but when it happens, it's a moment of pride for the whole team.

What Devs Should Know:
Take a bow, share the success, and let's prepare for the next challenge!


Final Thoughts

Each of these answers reflects a different facet of the QA/Dev relationship. While it's fun to categorize them, the reality is that our responses are a stepping stone for collaboration and improvement. At the end of the day, QA and Dev share the same goal: delivering exceptional software to users.

So, the next time you ask us, "Did it pass?", brace yourself. Whatever our answer, you'll know it's backed by rigorous testing, an eye for quality, and a shared commitment to excellence.

November 26, 2024

Crafting the Perfect User Experience: The Disney World Approach to Holistic Testing


Imagine holistic testing as planning the ultimate Disney World experience!

Just as Disney meticulously designs every aspect of a magical day - from the moment you enter the park to the last firework - holistic testing looks at the entire user journey from start to finish.


Let's break it down with a Disney World analogy:

A holistic testing approach is like being a Disney Imagineer who doesn't just check if one ride works, but ensures the entire park experience is seamless. It's not just about making sure Space Mountain's track is safe, but also checking:

  • Can guests easily find the ride?
  • Is the queue management smooth?
  • Are the signs clear?
  • Do the special effects work?
  • Is the ride accessible for guests with different abilities?
  • How does the ride fit into the overall park experience?

In software, this means testing isn't just about checking if a button clicks, but understanding the entire user journey:

  • Can users easily find what they need?
  • Does the website work on all devices?
  • Is the experience smooth from login to checkout?
  • Are all features working together harmoniously?
  • Can users with different abilities use the product?
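
To make that concrete, a single journey-level check in Playwright might look like this sketch (the URL, selectors, and flow are illustrative placeholders, not a real application):

// journey.spec.js - one end-to-end "ride": browse, add to cart, and reach checkout.
const { test, expect } = require('@playwright/test');

test('guest can get from the homepage to checkout', async ({ page }) => {
  await page.goto('https://shop.example.com');

  // Navigate the way a real guest would, by visible links and buttons.
  await page.getByRole('link', { name: 'Best Sellers' }).click();
  await page.getByRole('button', { name: 'Add to Cart' }).first().click();

  // The journey matters more than any single click: cart, then checkout.
  await page.getByRole('link', { name: 'Cart' }).click();
  await page.getByRole('button', { name: 'Checkout' }).click();

  await expect(page).toHaveURL(/checkout/);
});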

Just like Disney creates a magical, interconnected experience where every detail matters, holistic testing ensures every part of a digital product works perfectly together, creating a smooth, enjoyable "ride" for users.

November 19, 2024

How QA Saved the Day: Navigating Risky Pre-Holiday Releases with Confidence


The week before Thanksgiving can be a challenging time in software development. Product leads are often eager to meet their monthly targets, pushing for feature releases even when the timing may not align with QA best practices. This case study recounts a real-world scenario where QA successfully navigated a difficult political landscape to delay the release of a high-risk feature.


The Situation

A product team was eager to push a feature update in the pre-Thanksgiving release. However, a critical issue emerged: the feature's code would not be ready by the established code freeze date. The Tech Lead assured the team they would perform a rigorous code review to ensure quality, but this promise came with a significant caveat: the Tech Lead would be unavailable for much of the Thanksgiving week.

This posed two major risks:

  1. Customer Impact: If a unique issue arose, there would be limited support to troubleshoot and resolve it during the holiday week.
  2. End-of-Month Reporting: Thanksgiving coincides with a busy reporting period for many customers. A failure in reporting functionality would disrupt critical business operations for users who rely on accurate, timely data.

The Decision

As QA, our role was not to simply test and approve code but to assess the broader implications of shipping an incomplete feature. We conducted a release risk assessment and outlined the risks of pushing the feature:

  1. The incomplete code would likely introduce instability, given the reduced availability of team members to support post-release fixes.
  2. The potential for reporting issues posed a reputational risk to the company.
  3. Customers had no immediate expectation for the feature, reducing the urgency to release it.

Despite pressure from the Tech Lead and Product Leads, we escalated the concerns to the VP of Engineering.


The Outcome

The VP of Engineering supported QA's recommendation to delay merging the new feature. This decision ensured the following:

  • The feature shipped in the post-Thanksgiving release, with ample time for thorough testing and code review.
  • The delay had no measurable impact on the overall project schedule.
  • Customers experienced no interruptions or issues during the Thanksgiving week.

The Tech Lead later acknowledged that delaying the release was the right call, given the complexity of the feature and the risks involved.


Key Takeaways

  1. Use Risk Assessments to Advocate for Quality: Clearly articulate the risks of rushing a release, especially when they affect critical customer workflows or coincide with challenging timelines like holiday periods.

  2. Balance Business and Technical Priorities: While Product Leads may push for releases to meet goals, it's essential to weigh those goals against the potential impact on customers and the company's reputation.

  3. Escalate When Necessary: When a decision involves significant risk, involve leadership to ensure all perspectives are considered and that the final decision aligns with the company's values and priorities.

  4. Delays Aren't Always Bad: In this case, the delayed feature had no negative impact on the project schedule or customer satisfaction. Taking the time to get it right paid off in the long run.


This case underscores the importance of QA's role as a gatekeeper, not just for software quality but for the overall success of the product and customer experience. By staying firm and focused on the bigger picture, QA can navigate even the trickiest political landscapes to ensure the best outcomes for all stakeholders.


Does your QA team have a plan for handling high-pressure release situations? Share your strategies or lessons learned in the comments!

November 12, 2024

Mastering Risk-Based Testing: How QA Teams Can Prioritize for Quality and Efficiency


When it comes to Quality Assurance testing, understanding risk is key to delivering effective, efficient, and focused testing. Testing is about more than running checks on the entire product; it's about knowing where to focus your energy to ensure that any changes work as intended and don't disrupt critical areas of functionality.

Why Risk-Based Testing Matters

In QA, risk-based testing helps us prioritize. Instead of "testing everything," we zero in on areas with the highest potential for issues. Prioritizing testing effort based on the likelihood and impact of changes allows QA to focus on high-risk areas, ensuring more targeted, effective coverage.

When change occurs in software, it often ripples through several parts of the application, and each area of impact needs to be tested. Knowing the high-risk areas allows QA to test smarter, not just harder, delivering results that matter most to both users and stakeholders.

Story Time: "Test Everything" vs. Focused Testing

I'll never forget the numerous times I was instructed by my VP of Engineering to "test everything" before a big release. To a QA engineer, "test everything" sounds straightforward, but it rarely provides the best strategy for actual risk mitigation. Broadly testing every feature leaves little time to focus on edge cases or specific areas impacted by changes. With each feature tested only at the surface level, there's a higher chance that unexpected bugs, especially edge cases, slip through.

But when you're empowered to dive into high-risk areas, to dissect each change, you're often able to uncover issues that no amount of shallow regression testing would catch. Focusing on high-risk areas enables QA to surface issues that may seem minor but could disrupt critical functionality if left unchecked.

Identifying Risky Areas in Testing

So, how do we get to a point where we can intelligently identify and prioritize risks? Here are some of the techniques I've found valuable:

1. Analyze the Code Changes

When you get a code change, don't look only at what's new. Look at which existing parts of the system interact with these changes. Are there shared functions or dependencies? Minor alterations to shared code can have far-reaching effects.

Pro Tip: Build a habit of analyzing commit messages, reviewing PRs, and talking to developers about areas they feel are sensitive to change.

2. Understand Business Impact and Critical Paths

Not every feature carries the same weight in terms of business impact. Some features are directly tied to revenue, while others are key to user experience. Work with product managers to identify which features are most critical to the business, and make sure these paths are covered in testing.

Example: For an e-commerce platform, a change affecting the checkout flow demands high-priority testing since issues here could directly impact sales.

3. Ask Questions and Look for Assumptions

Testing assumptions developers or product owners have can reveal risks no one initially considered. Ask questions like:

  • Are there alternative ways users might interact with this feature?
  • What data assumptions are built into the feature?

Real-World Insight: Once during a testing cycle, I identified a critical bug by questioning the developer's assumption about what users would input into a form. It turns out that edge cases in user data broke the feature in production.

4. Use the Impacted Module Approach

When major changes impact a module, list out each dependent component. Take the extra time to test these areas in-depth; they're prime candidates for breakage, especially if they share interfaces with the modified code.

For example: A change in the database schema might not seem like it impacts front-end functionality, but if your module pulls data from affected tables, it could disrupt page display.

5. Prioritize Test Cases Based on Risk Severity

Not all test cases carry equal weight. Evaluate which ones will deliver the best insights for time invested. Prioritize based on the potential damage a bug might cause in production. Work with stakeholders to determine what they'd consider "critical" failures.

Story: I once was told to "test everything," but by focusing on high-impact test cases first, I managed to discover a major issue in time for the developers to fix it, preventing a potentially costly post-release hotfix.
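
One lightweight way to put this into practice is to score each case by likelihood and impact and run the highest scores first. The sketch below is purely illustrative, with made-up test cases and a simple 1-5 scale:

// risk-priority.js - rank test cases by a simple risk score (likelihood x impact).
const testCases = [
  { name: 'Checkout payment flow', likelihood: 4, impact: 5 },
  { name: 'Profile avatar upload', likelihood: 2, impact: 2 },
  { name: 'End-of-month report export', likelihood: 3, impact: 5 },
];

const prioritized = testCases
  .map((tc) => ({ ...tc, risk: tc.likelihood * tc.impact }))
  .sort((a, b) => b.risk - a.risk);

// Run (or at least review) the riskiest cases first.
prioritized.forEach((tc) => console.log(`${tc.risk}\t${tc.name}`));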

How to Approach "Test Everything" Requests

When you're asked to "test everything," it's usually because stakeholders want to ensure nothing is missed. But "test everything" often leads to overstretching the QA team's efforts. Here's how to handle this:

  1. Clarify Priorities: Politely ask what areas are the highest priorities.
  2. Suggest a Risk-Based Approach: Explain the benefits of focusing on high-risk areas and offer to create a prioritized test plan.
  3. Educate on the Cost of Regression Testing: Share data on the time it would take to test everything versus targeted testing to help decision-makers see the value of a focused approach.

Embracing Risk-Based Testing in Your QA Strategy

To wrap it up, effective risk-based testing is about asking the right questions, using your knowledge of the application's structure, and constantly refining your approach to prioritize high-risk areas. By emphasizing risk assessment in your QA strategy, you move beyond just "covering" your application to understanding where it's most vulnerable.

QA isn't about testing everything; it's about testing the right things. When we focus our efforts strategically, we ensure quality where it matters most.

November 5, 2024

Effortlessly Access Jira Tickets with a Custom Bookmarklet

In my previous company, I often needed to access Jira tickets to review or comment on them. The usual process was a bit tedious: I'd have to open Jira, then manually type the ticket number into the search bar to locate it. This multi-step routine took up valuable time, and I knew there had to be a faster, more efficient way.

The Solution: A Bookmarklet for Jira

I created a simple bookmarklet to streamline this process. With this bookmarklet, I can simply enter the ticket number, and it instantly opens the Jira ticket. It's an efficient shortcut that removes the extra steps of navigating through Jira manually.

Since I only work in one Jira project, the bookmarklet is set up for my specific project. However, if I were working across multiple projects, I could modify the code to prompt me to enter the project ID each time or simply omit the project number from the URL.



Bookmarklet Example

To use this bookmarklet, create a new browser bookmark and paste the following code into the URL field. Be sure to update the Jira domain (e.g., company.atlassian.net) and project ID (e.g., PP24) to match your Jira environment.

javascript:(function(){ 
  var ticketNumber = prompt("Enter Jira Ticket Number"); 
  if (ticketNumber) {
    window.location.href = "https://company.atlassian.net/browse/PP24-" + ticketNumber; 
  } 
})();
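
If you do work across multiple projects, a variant that also prompts for the project key might look like this (a sketch; the domain and default key are placeholders):

javascript:(function(){
  var projectKey = prompt("Enter Jira Project Key", "PP24");
  var ticketNumber = prompt("Enter Jira Ticket Number");
  if (projectKey && ticketNumber) {
    window.location.href = "https://company.atlassian.net/browse/" + projectKey + "-" + ticketNumber;
  }
})();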

This little tool saves time and effort, especially for anyone working regularly within Jira. Just click the bookmark, enter the ticket number, and you're immediately taken to the ticket - no need for manual searches.

October 29, 2024

Boo! Common Frightful Phrases in Software QA


As the leaves turn and the nights grow longer in October, we're not just preparing for Halloween; in the tech world, we're also bracing for the spooky season of Software Quality Assurance (QA) testing. Here's a light-hearted look at some of the most terrifying phrases you might hear echoing through the halls of software development, perfect for this time of year.

1. The Build is Broken

Imagine this: You're walking through a dimly lit corridor, and suddenly, from the shadows, a voice whispers, "The build is broken." It's not just a phrase; it's a curse that threatens to derail your entire project. It's like finding out your pumpkin has rotted from the inside out.

2. Regression Testing Found New Bugs

This is akin to opening your trick-or-treat bag only to find that all your sweets have turned into bugs. It's not just the new features that are haunted; the old ones have now decided to join the ghost party.

3. We've Hit a Showstopper

Hearing this is like discovering a haunted house in your software. This isn't just any old specter; it's the kind that stops the entire production line, making everyone question if they've angered some tech deity.

4. It Works on My Machine

This phrase is the tech equivalent of a ghost story that ends with, "But when I got home, it was still there..." It's the mystery that keeps developers up at night, trying to replicate an issue that seems to vanish like a wisp of smoke in daylight.

5. "There's a Memory Leak"

If software had a ghost story, this would be it. It's as if your code has been possessed by a poltergeist that keeps eating away at your resources until your application collapses into a spectral heap.

6. We Didn't Test for That Scenario

Imagine setting out to trick-or-treat only to find your costume has a hidden flaw that everyone notices. This phrase is the realization that your testing didn't cover the 'vampire at the door' scenario.

7. The Database is Corrupted

This is like finding out that all your Halloween candy has turned into a pile of dust. Data corruption is the nightmare where all your hard work vanishes, leaving you with nothing but echoes in an empty digital tomb.

8. The Third-Party Service Just Changed Their API

In our spooky software tale, this is the moment the witch decides to rewrite her spellbook without telling anyone. Suddenly, your integrations are as outdated as a Victorian ghost story.

9. We're Out of Memory

This phrase brings to mind a haunted house party where the guests keep arriving until the house literally can't hold any more. Your software is the house, and memory is the space, creaking under the weight of too many guests.

10. We Need to Rollback to the Previous Version

Ever had to undo your Halloween decorations because the kids were too scared? This is the software equivalent. It's admitting that the new version is more of a fright than a delight, and we need to go back to the "safe" version.

This Halloween, as you carve your pumpkins, remember these eerie phrases. They might just give you a chill, but they also remind us of the critical, albeit slightly terrifying, role QA plays in our digital world. Here's to hoping your projects are more treat than trick this season!

Happy Coding, and don't let the bugs bite!

October 22, 2024

QA Clichés

[Image: some common football clichés]

Over my many years in software testing, I've heard several clichés, or commonly repeated phrases, that testers, developers, and project managers tend to say during meetings:

  • "It works on my machine." - Often said by developers when a bug can't be reproduced in their local environment.

  • "That's not a bug, it's a feature." - A humorous or sometimes serious claim that unintended behavior might actually provide some value or was intended all along.

  • "Have you tried testing using incognito mode?" - Often highlights issues related to session management, caching, or initialization.

  • "Works as designed." - This can be a genuine clarification or a way to push back on a bug report when the software is behaving according to the specifications, even if those specifications might now seem flawed.

  • "It's not reproducible." - When testers or users report an issue that can't be consistently replicated, leading to challenges in debugging.

  • "We need more test cases." - Often said when unexpected issues arise, suggesting that the existing test suite might not be comprehensive enough.

  • "Let's take this offline." - Not unique to QA but commonly used when a bug or issue leads to a discussion that's too detailed or tangential for the current meeting.

  • "Did we test for this scenario?" - A question that arises when an unforeseen issue comes up, questioning the coverage of the test cases.

  • "The user would never do that." - A sometimes risky assumption about how the software will be used, which might overlook edge cases or unexpected user behavior.

  • "How quickly can you test this?" - Suggesting that QA engineers can speed up testing without impacting the quality of the test.

  • "This should be an easy fix." - Often underestimated, what seems simple might involve complex underlying code changes.

  • "We'll fix it in the next sprint/release." - When time runs out, or when a bug is deemed not critical enough for immediate action.

  • "Automate all the things!" - While automation in QA is crucial, this phrase humorously points to the sometimes overly enthusiastic push for automation without considering the ROI.

  • "It passed in staging, why is it failing in production?" - Highlighting environment-specific issues or differences in data sets.

  • "QA found another 'corner case'." - Recognizing that QA teams often find bugs in the most unexpected or rarely used functionalities.


These clichés reflect the ongoing dialogue between intention, design, implementation, and real-world use in software development. They encapsulate the challenges, humor, and sometimes the frustrations inherent in the QA process.

Next week's blog post is about scary sayings heard in QA.

October 15, 2024

Playing Politics at Work

Welcome to the world of Quality Assurance, where finding bugs is just one part of the job. Beyond the technical challenges, there's a whole other layer that can be even more complicated: navigating the political landscape of corporate America. You might think office politics are confined to boardrooms, but the reality is that they can affect everything - your projects, your career, and even the quality of your work.


The Reality: A Tight-Knit Core Group

In many businesses, there's a core group of people who pull the strings behind the scenes. They're often managers or senior leaders who've known each other for years and wield significant influence. These people tend to hang out together, have lunch offsite, and make key decisions that ripple throughout the organization. They may not be the ones in the trenches, but they control what happens on the battlefield.

For newcomers - especially those with fresh ideas and an eagerness to improve processes - this can feel like a brick wall. You might bring a brilliant suggestion for streamlining the release process or optimizing the testing workflow, but getting buy-in from the core group can feel impossible. They may view outsiders, especially those with different perspectives, as a threat to their status quo.

My Experience: The Manager with the Sales Hat

I've been in that position before. My former manager at a very large ecommerce site was so focused on impressing this core group that our one-on-one meetings became a rarity. Instead, he'd be off attending every meeting he could with the key decision-makers, trying to climb the corporate ladder. He wasn't a bad person, but he made it clear where his priorities lay - being seen by the right people, rather than supporting the people under him.

It was frustrating, to say the least. There I was, trying to push for improvements in our testing processes and advocating for better quality, but it often felt like my efforts were invisible. If I wanted to make progress, I needed to understand the lay of the land and find ways to navigate around it.


Finding Your Path: Focus, Over-Deliver, and Skill Up

Here's the reality check: as a QA Engineer, you may not be part of the "in" crowd right away, but you don't have to be sidelined. There are strategies I've learned that you can use to build your reputation and make your mark:

  1. Focus on Your Goals: Don't get distracted by the politics, but don't ignore them either. Focus on delivering high-quality work consistently. Make sure your bug reports are thorough, your test plans are well-documented, and you're always looking for ways to improve the product.

  2. Over-Deliver When It Matters: This is your ticket to building credibility. Sometimes, it means burning the midnight oil or working through a weekend. But if you can pull off a critical testing phase or help recover a project that's about to miss a deadline, people notice. It shows you're dedicated and reliable, and it might even catch the eye of those key decision-makers.

  3. Be Ready with Your 'Sales Hat': When you do have ideas for improvements, be prepared to sell them like a pro. This means doing your research, building a solid business case, and framing your suggestions in a way that appeals to their priorities. It's not just about being right - it's about being persuasive.

  4. Invest in Your Skills: The political climate might not change overnight, but you can keep improving yourself. Learn Python, JavaScript, or whatever new tool is in demand. These skills not only help you automate tedious testing tasks, but they also make you more valuable, both inside and outside of your current organization.

Why It's Worth It

It can be tempting to give up when it feels like you're shouting into a void. But navigating the politics is a skill in itself, and one that can pay off in the long run. You don't have to become best friends with the core group, but if you can show that you're competent, dependable, and skilled, you might just find yourself being invited to that offsite lunch one day.

And if you don't? Well, the skills and reputation you build along the way will follow you wherever you go. You'll be prepared for whatever comes next, whether it's a new opportunity within the company or a fresh start somewhere else.

After all, Quality Assurance isn't just about making sure the software works - it's about finding a way to thrive, even when the landscape around you isn't as polished as the code you test.

About

Welcome to QA!

The purpose of these blog posts is to provide comprehensive insights into Software Quality Assurance testing, addressing everything you ever wanted to know but were afraid to ask.

These posts will cover topics such as the fundamentals of Software Quality Assurance testing, creating test plans, designing test cases, and developing automated tests. Additionally, they will explore best practices for testing and offer tips and tricks to make the process more efficient and effective.

Check out all the Blog Posts.

Listen on Apple Podcasts

Blog Schedule

Saturday: Internet Tools
Sunday: Open Topic
Monday: Media Monday
Tuesday: QA
Wednesday: SnagIt
Thursday: BBEdit
Friday: Macintosh