ICSE 2025
Sat 26 April - Sun 4 May 2025 Ottawa, Ontario, Canada
Sat 3 May 2025 12:00 - 12:30 at 213 - Paper Presentation 1 Chair(s): Jinhan Kim

Background: Manual testing is vital for detecting issues missed by automated tests, but specifying accurate verifications is challenging.
Aims: This study aims to explore the use of Large Language Models (LLMs) to produce verifications for manual tests.
Method: We conducted two independent and complementary exploratory studies. The first study used 2 closed-source and 6 open-source LLMs to generate verifications for manual test steps and evaluated their similarity to the original verifications. The second study recruited software testing professionals to assess their perception of, and agreement with, the generated verifications compared to the original ones.
Results: The open-source models Mistral-7B and Phi-3-mini-4k demonstrated effectiveness and consistency comparable to closed-source models such as Gemini-1.5-flash and GPT-3.5-turbo in generating manual test verifications. However, the agreement level among professional testers was slightly above 40%, indicating both promise and room for improvement. While some LLM-generated verifications were considered better than the originals, there were also concerns about AI hallucinations, where verifications deviated significantly from expectations.
Conclusion: We contributed by evaluating the effectiveness of 8 LLMs through similarity and human-acceptance studies, identifying top-performing models such as Mistral-7B and GPT-3.5-turbo. Although the models show potential, the relatively modest 40% agreement level highlights the need for further refinement. Enhancing the accuracy, relevance, and clarity of the generated verifications is crucial to ensure greater reliability in real-world testing scenarios.
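The first study's pipeline, generating a verification for a manual test step and scoring its similarity to the original, can be sketched roughly as below. The abstract does not specify the prompts or the similarity metric used, so the `generate_verification` stub and the use of `difflib`'s subsequence ratio are illustrative assumptions only, not the authors' actual method.

```python
from difflib import SequenceMatcher


def generate_verification(test_step: str) -> str:
    """Placeholder for an LLM call (e.g., to Mistral-7B or GPT-3.5-turbo).

    The real study prompts a model with the manual test step; here we
    return a canned answer purely for illustration.
    """
    return "Verify that the settings screen is displayed."


def similarity(generated: str, original: str) -> float:
    # Normalized matching-subsequence ratio in [0, 1]; a stand-in for
    # whatever similarity metric the study actually uses.
    return SequenceMatcher(None, generated.lower(), original.lower()).ratio()


step = "Tap the gear icon on the home screen."
original = "Verify that the settings screen is displayed."
score = similarity(generate_verification(step), original)
print(f"similarity = {score:.2f}")
```

A real evaluation would run this over every test step in the dataset and aggregate the scores per model; the human-acceptance study in the second part then complements this automated comparison.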

Sat 3 May

Displayed time zone: Eastern Time (US & Canada)

11:00 - 12:30
Paper Presentation 1 (DeepTest) at 213
Chair(s): Jinhan Kim Università della Svizzera italiana (USI)
11:00
30m
Talk
Lachesis: Predicting LLM Inference Accuracy using Structural Properties of Reasoning Paths
DeepTest
Naryeong Kim Korea Advanced Institute of Science and Technology, Sungmin Kang KAIST, Gabin An KAIST, Shin Yoo KAIST
Pre-print
11:30
30m
Talk
Improving the Reliability of Failure Prediction Models through Concept Drift Monitoring
DeepTest
Lorena Poenaru-Olaru TU Delft, Luís Cruz TU Delft, Jan S. Rellermeyer Leibniz University Hannover, Arie van Deursen TU Delft
12:00
30m
Talk
On the Effectiveness of LLMs for Manual Test Verifications
DeepTest
Myron David Peixoto Federal University of Alagoas, Davy Baía Federal University of Alagoas, Nathalia Nascimento Pennsylvania State University, Paulo Alencar University of Waterloo, Baldoino Fonseca Federal University of Alagoas, Márcio Ribeiro Federal University of Alagoas, Brazil