
Richard Seidl | Software Development & Testing Expert
In a world where software drives everything, testing is no longer optional — it’s your superpower.

- How much testing is enough?
- When should you automate?
- What makes a great integration test?
- And how do you keep up when AI, ML, and cloud-native complexity are redefining the rules?

Each week, leading minds from across the software universe — testers, developers, architects, and product thinkers — share practical insights, field-tested techniques, and bold ideas to help you ship better software, faster. Whether you're scaling your QA strategy, building your first test suite, or leading complex enterprise projects, this is your backstage pass to the tools, tactics, and trends shaping the future of software testing.

🚀 Are you ready to unleash the next level of quality in your software? Hit play and join the movement.

How HUSTEF Became a Leading International Software Testing Conference

"You're coming home, so you will see next year you're coming back like an old friend." - Attila Fekete

In this episode, I talk with Attila Fekete about HUSTEF 2025 in Budapest. He runs the program and the backstage work. We look at how a small local meetup from 2011 turned into 700 people from many countries. The recipe: care for people, high-quality talks, and a fun vibe. We discuss new formats like longer talks, a masterclass track, and a career clinic with coaching and CV tips - and that first-time speakers get mentoring too.

Attila Fekete has been passionate about software engineering since the age of 12, when he got his first computer and started programming — and the rest, as they say, is history. With 27 years of experience in software testing, he has held various roles, including Head of Testing, Technical Test Lead, Quality Manager, Test Automation Engineer, and most recently, Vice President of SDETs. For a time, Attila was an active public speaker, but during the pandemic he decided to focus his energy on organizing HUSTEF, one of Europe’s leading conferences dedicated to software testing and software quality. This year, he was appointed Program Chair of HUSTEF, a role he considers one of the major highlights of his career. Outside of software engineering, Attila is passionate about sports — especially swimming and basketball. He also bikes to the office two to three times a week, even when it’s raining. What’s next? Perhaps a return to public speaking — and new collaborations with other events, whenever time allows.

Highlights:
- HUSTEF grew from a small 2011 meetup to 700 international attendees
- The program values people, quality talks, and a fun atmosphere
- New formats include longer talks, a masterclass track, and a career clinic
- First-time speakers get mentoring and a keynote opportunity
- Next steps include a better app, speaker branding, and larger venues

More Links with Insights: Hungarian Testing Board on YouTube

How Testers Can Influence Project Quality

"Some of the really basic things that people can do straight away, is to just make your work visible on the boards, just like anybody else's work." - Cassandra H. Leung

In this episode, I talk with Cassandra H. Leung about why testers still feel unseen and what we can do about it. We unpack impostor syndrome, the shy voice that says keep quiet, and how it holds many of us back. Cassandra shares a simple frame: show, share, shine. Put testing work on the board, share notes and dashboards, and keep a brag board for wins. We explore the wider role of testers across product talks, pipelines, and coaching the team.

Cassandra H. Leung is a trained Quality Engineer, certified Scrum Master, and UX enthusiast. She specialises in exploratory testing and test strategy, and enjoys supporting teams towards better quality products and processes. Cassandra has spoken at events around the world, writes about topics relevant to the technology industry, and creates content to educate future generations of testing specialists. She's active in the online testing community, and hopes to inspire others to share their stories and learnings.

Highlights:
- Make testing work visible to gain recognition
- Use show, share, shine to communicate testing value
- Tester impact spans product talks, delivery pipelines, and team coaching
- Build trust through regular conversations and pairing on real team pain points
- Treat AI as a useful tool, not a threat

Stoicism in Software Development

"And what the Stoics say is, well, most things are actually outside of your control. Like other people's opinions, your body, your health." - Maryse Meinen

In this episode, I talk with Maryse Meinen about Stoic thinking for product development and life. We ask what happens if you stop judging success by outcomes and start judging by decision quality. Maryse shares tools you can use today: scenario planning, the 10-10-10 rule, and a simple decision journal. Prepare for failure, accept what you cannot control, and act with courage, justice, and temperance. This fits agile work and the mess we face in tech and society.

Maryse Meinen is a product development coach who uses Agile and Stoicism to make teams and organizations more resilient and sustainable. She espouses the philosophies of degrowth and Stoicism, which advocate working more efficiently with fewer resources and valuing what is already there. Her motto is: Achieve more with less!

Highlights:
- Judge decisions by quality, not outcomes.
- Use scenario planning, the 10-10-10 rule, and a decision journal.
- Prepare for failure and accept what you cannot control.
- Practice courage, justice, and temperance in work and life.
- Stoic thinking fits agile work in tech and society.

Testing as an Art Form: Discovering Curiosity and Creativity in Software Quality

"Simplicity is the ultimate sophistication." - Barış Sarıalioğlu

In this episode, I talk with Barış Sarıalioğlu about testing as art and science, through the lens of Leonardo da Vinci. We ask what a tester can learn from curiosity, observation, and experiments. Mona Lisa's smile shows how uncertainty beats 100 pages of metrics. We should aim for understanding, not bug counts. We talk about storytelling, simple reports that people can read, and mixing engineering with empathy. Testers work across disciplines, explore, and make sense of messy projects. Perfection is a trap. Good enough can be great. Balance logic and imagination, and you get impact that reaches beyond tools.

With over 20 years of experience in IT and software engineering, Barış Sarıalioğlu specializes in navigating the complexities of digital transformation, innovation, and leadership across diverse industries. His expertise spans Digital Transformation, Agility, Artificial Intelligence, Software Development, User Experience, Design Thinking, Quality Assurance, and Software Testing, enabling him to deliver holistic, technology-driven solutions to business challenges. He has led global teams and managed cross-functional departments, including HR, Marketing, Sales, Legal, and Finance, aligning organizational goals with innovative strategies. His work spans industries such as telecommunications, banking, defense, aviation, automotive, insurance, e-commerce, and semiconductors, contributing to high-stakes projects across Turkey, the U.S., Russia, Germany, China, and beyond. As a published author and keynote speaker at 100+ conferences in 50+ countries, he is dedicated to advancing software engineering and exploring the transformative power of technology on organizations and individuals alike.

Highlights:
- Curiosity, observation, and experiments make testers better.
- Aim for product understanding, not bug counts.
- Uncertainty can beat piles of metrics.
- Use storytelling and simple reports people read.
- Balance engineering with empathy and imagination.

How Self-Care Habits Raise Quality and Agility

"So we need to start thinking about work in a complete, holistic, integral way. Because we are integral people. We cannot be split out and having different personalities and having different faces. And that takes a lot of energy to be sustained, you know." - Clara Ramos González

In this episode, I talk with Clara Ramos González about how self-care can raise quality and agility. We look at why communication failure still breaks projects and how breath can fix more than tools. Clara blends QA leadership with yoga and brings simple rituals to teams. Three deep breaths to open meetings. One word to set intention. Weekly coffee talks without work. A feedback rule to sleep on it. The message is clear. Bring your whole self. Lead by example. Small steps cut stress and help us build better software and healthier teams.

Clara is a Senior QA Manager specializing in guiding companies through their Digital Transformation. As a certified Manager, she provides Workshops for IT Leaders and Teams to elevate team potential and engagement, and as a passionate advocate for Quality Assurance, she regularly speaks at various IT conferences and is a proud ambassador for some of the strongest brands in the industry. Clara is also a yoga teacher and dedicated meditator, a side she brings into her workshop "Balancing Code with Calm: A Guide to Thrive at Work and Personal Life", which aims to integrate daily work with spiritual growth and promotes a holistic approach to reduce stress and increase team satisfaction.

Highlights:
- Self-care practices raise software quality and team agility
- Communication breakdowns still cause most project failures
- Three deep breaths before meetings improve focus and cooperation
- Weekly coffee chats without work topics strengthen relationships and morale
- Sleep on feedback to prevent knee-jerk reactions

Switching End-to-End Test Automation Frameworks: Lessons from Real-World Projects

"Because if we go with wrong tools or wrong alternatives like libraries, frameworks, whatever we are using in our environment or ecosystem, then we might not be able to cover everything that we are supposed to do." - Mesut Durukal

In this episode, I talk with Mesut Durukal about picking the right end-to-end test automation framework. Mesut shares why tool choice must serve real needs, not trends. It is a mindset shift from hype to needs. In his case, users were on Safari, but the team's tool did not run there. He mapped needs, compared Cypress, Playwright, Selenium, TestCafe, and Nightwatch, and chose Playwright for speed and broad browser support. We talk about reporting, debugging, and docs. We touch on architecture, like keeping login and helpers outside specs, so migration stays clean (a small sketch of this idea follows at the end of these notes). For me, this is tech with agility. Know your goals, grow your system, and review choices often.

Mesut has over 15 years of experience in areas such as industrial automation, IoT, cloud services, and the defense industry, complemented by his expertise in test automation and CI/CD integration. He has held multiple roles in multinational projects, including Quality Owner and Hiring Manager, and is well-versed in CMMI, Scrum, and PMP. As a recognized speaker on international stages and winner of a best presentation award, he is also involved in various program committees.

Highlights:
- Choose automation tools based on real needs, not hype.
- Match browser support to your users, Safari included.
- Evaluate frameworks with clear criteria across Cypress, Playwright, Selenium, TestCafe, and Nightwatch.
- Playwright delivered speed and broad cross-browser coverage for the team.
- Modular tests with shared helpers simplify maintenance and migration.
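
The architecture point about keeping login and helpers outside the spec files can be made concrete with a small sketch. This is not the setup discussed in the episode; it is a minimal, hypothetical Playwright/TypeScript example of that layering, with made-up file names, URLs, selectors, and environment variables.

```typescript
// fixtures.ts - shared helpers live outside the spec files,
// so a framework or login-flow change only touches this layer.
import { test as base, expect, Page } from '@playwright/test';

// Hypothetical fixture: a page that is already logged in.
type Fixtures = { loggedInPage: Page };

export const test = base.extend<Fixtures>({
  loggedInPage: async ({ page }, use) => {
    // Example login flow; URL, selectors, and env vars are placeholders.
    await page.goto('https://example.com/login');
    await page.fill('#email', process.env.TEST_USER ?? 'user@example.com');
    await page.fill('#password', process.env.TEST_PASS ?? 'secret');
    await page.click('button[type="submit"]');
    await page.waitForURL('**/dashboard');
    await use(page);
  },
});

export { expect };
```

```typescript
// dashboard.spec.ts - the spec describes behaviour only and never
// repeats the login steps.
import { test, expect } from './fixtures';

test('logged-in user sees the dashboard', async ({ loggedInPage }) => {
  await expect(loggedInPage.getByRole('heading', { name: 'Dashboard' })).toBeVisible();
});
```

Because every spec imports the fixture instead of inlining the login steps, switching frameworks mostly means rewriting that one helper layer rather than every test.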

Why Winners Collect Failures

"We don't really exist in a world where everything is known before we start our work. And so the value of context is incredibly important." - Chris Armstrong

In this episode, I talk with Chris Armstrong about context in testing. We talk about why "it depends" is an honest answer in complex work. Chris shows how decisive humility helps. Say what you do not know. Find the people and data to learn fast. We talk about fear, optimism, and why winners collect more failures. I ask how testers grow influence. We land on trust, social skills, and asking better questions. Challenge tools and processes with respect. Start small with clear hypotheses and visible outcomes. Remove unnecessary friction. AI comes up as a fresh field for testing. Join early, shape it. Stay curious. Context moves, and so should we.

Chris Armstrong is a tester who is always looking to improve. His testing journey began in 2004 and has crossed several different industries. He identifies as a pragmatic agilist and quality practices geek. He advocates for inclusion, collaboration, and continuous improvement. He loves learning and storytelling, and aims for his blog to be a platform for him to process his thoughts and observations. It is not intended to be instructional, but more an insight into how he sees the world and an outlet. He also podcasts as part of a group of test leadership peers, the Testing Peers.

Highlights:
- "It depends" is an honest answer in complex work
- Admit unknowns and seek data to learn fast
- Grow influence through trust, social skills, and better questions
- Start small with clear hypotheses and visible outcomes
- Join AI testing early and help shape practices

Why Examples Matter: Rethinking Requirements

"If you have a good understanding of the requirements, you can write better test cases, tests for that the developers might be able to make it immediately the right way and they don't need to do so much rework." - Gáspár Nagy

In this episode, I talk with Gáspár Nagy about behavior-driven development. We look at why a simple example can beat a specification. You do not learn soccer from a rulebook. You learn by playing and watching plays. BDD uses the same trick to build understanding early. We discuss example mapping, writing readable scenarios, and turning them into executable specs with Cucumber, SpecFlow, and Reqnroll (a short sketch follows at the end of these notes). Done well, this guides vertical slices, shows progress, and stops the mini waterfall at the end of a sprint.

Gáspár Nagy, the creator of SpecFlow and Reqnroll, brings over 20 years of experience as a coach, trainer, and test automation expert, nowadays through his company Spec Solutions. He is the co-author of the books "Discovery: Explore behaviour using examples" and "Formulation: Document examples with Given/When/Then" and also leads SpecSync, aiding teams in test traceability with Azure DevOps and Jira. He is active in the open-source community through leading the Reqnroll project. Gáspár shares his insights at conferences, emphasizing his commitment to helping teams implement Behavior-Driven Development (BDD).

Highlights:
- Simple examples communicate requirements better than large specifications
- Example mapping structures conversations and uncovers rules early
- Readable scenarios turn into executable specifications with Cucumber or SpecFlow
- BDD guides vertical slices and makes progress visible
- BDD prevents the mini waterfall at the end of a sprint
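
To make the step from readable scenario to executable specification concrete, here is a minimal sketch that is not taken from the episode: a hypothetical discount rule, first as a Gherkin scenario (shown in a comment) and then wired to step definitions with @cucumber/cucumber in TypeScript. SpecFlow and Reqnroll follow the same Given/When/Then binding idea in .NET.

```typescript
// Hypothetical example; the feature wording and the business rule are made up.
//
// discount.feature (the readable scenario, as it might come out of example mapping):
//
//   Feature: Order discount
//     Scenario: Returning customer gets a 10 percent discount
//       Given a customer with 5 previous orders
//       When the customer places an order of 100 EUR
//       Then the order total is 90 EUR

// steps.ts - the same scenario made executable with @cucumber/cucumber:
import { Given, When, Then } from '@cucumber/cucumber';
import assert from 'node:assert';

// Scenario state carried between steps (Cucumber's "world" object).
interface OrderWorld {
  previousOrders?: number;
  total?: number;
}

Given('a customer with {int} previous orders', function (this: OrderWorld, orders: number) {
  this.previousOrders = orders;
});

When('the customer places an order of {int} EUR', function (this: OrderWorld, amount: number) {
  // Made-up domain rule: returning customers get 10 percent off.
  const discountPercent = (this.previousOrders ?? 0) > 0 ? 10 : 0;
  this.total = amount - (amount * discountPercent) / 100;
});

Then('the order total is {int} EUR', function (this: OrderWorld, expected: number) {
  assert.strictEqual(this.total, expected);
});
```

At run time Cucumber matches each Given/When/Then line of the feature file against these step definitions, so the example agreed on in the refinement conversation doubles as the automated check.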

Why Software Testing Struggles With Recognition and How AI Changes the Game

"I truly believe that we have like in five to 10 years we see a huge demand in people who are able to understand system architectures." - Daniel Knott

In this episode, I talk with Daniel Knott about the real pains in testing and what comes next. Why do managers cut quality when money gets tight? We look at AI and low-code tools that spit out apps fast, often without clear architecture. We warn about skipping performance and security testing. We also reflect on how testers can sell value in business terms. Speak revenue, KPIs, and user happiness, not code coverage. Daniel says domain knowledge may beat deep coding as AI writes more code. We explore prompt reviews as a new shift-left habit.

Daniel Knott loves high-quality digital products, whether web or native mobile applications. He has been working in the IT industry for almost 20 years, with hands-on experience in software testing for desktop, web, and mobile applications. He has also worked as a product manager for mobile and web products. At the moment, Daniel works as an IT manager and Head of Engineering, helping software development teams ship great products with high quality. Daniel wrote two books, Hands-On Mobile App Testing and Smartwatch App Testing, and is a frequent blogger and conference speaker. In 2022 he also created his YouTube channel about software testing, which has grown to more than 145k subscribers.

Highlights:
- Budget cuts often hit testing and quality first
- AI and low code speed delivery but risk weak architecture
- Skipping performance and security testing creates major business risk
- Testers should speak business metrics like revenue, KPIs, and user value
- Domain knowledge gains importance as AI writes more code

Stop Saying Everything Is Broken: Speak in Outcomes and Get Taken Seriously

"So I think the first step is always trying to understand who you are talking to, trying to understand what matters to them, what do they really care about. Bad quality is something that hurts the business, but how does it hurt this particular person? What is the impact on this person or on the team that this person works with?" - Kat Obring

In this episode, I talk with Kat Obring about the tester as an influencer. We explore how to stop saying everything is broken and start speaking the language of stakeholders. Bring evidence, not opinions. Say "the Safari sign-up button fails and 20 percent of users are blocked". We share a 15-second check before stand-up, and pairing early so testing is part of development, not a mini waterfall at the end. Pick small battles and run one- or two-week experiments. If it works, keep it. If not, drop it. Influence without authority grows from trust and habits.

With over 20 years in the software industry, Kat Obring now focuses on what matters most: teaching teams and individuals how to measurably improve the quality of their work. Her practical frameworks combine insights from her diverse experience as a DevOps QA engineer, Head of Delivery, and, surprisingly, her early career as a chef. She's learned that evidence always beats guesswork, and a well-designed experiment will reveal more truth than months of planning ever could.

Highlights:
- Speak stakeholder language and link bugs to user impact
- Bring evidence, not opinions, to drive decisions
- Pair early to make testing part of development, not a late phase
- Run small experiments for one or two weeks and keep what works
- Influence without authority grows from trust and consistent habits

More Links with Insights: Free Ebook about the QED Framework