Abstract
With the rapid adoption of information technology in educational institutions, training management platforms have become essential tools for administering student internships and practical training programs. Ensuring the quality and reliability of these platforms through comprehensive testing, however, remains a significant challenge. This study focuses on optimizing automated testing for training management platforms using QuickTest Professional (QTP). An enhanced testing framework that integrates systematic test case design with advanced automation strategies is proposed and empirically validated on a real-world training management platform serving over 2,000 students. The proposed optimization approach reduces testing time by 40% while raising test coverage to 95%, yielding measurable gains in testing efficiency and defect detection and providing practical guidelines for quality assurance of educational software.