Optimisation Method for Training Deep Neural Networks in Classification of Non-functional Requirements

PhD Thesis

Sabir, M. (2022). Optimisation Method for Training Deep Neural Networks in Classification of Non-functional Requirements. PhD Thesis, London South Bank University, School of Engineering. https://doi.org/10.18744/lsbu.931wy
Authors: Sabir, M.
Type: PhD Thesis

Non-functional requirements (NFRs) are regarded as critical to a software system's success. Most NFR detection and classification solutions have relied on supervised machine learning models, which are hindered by the lack of labelled data for training and necessitate a significant amount of time spent on feature engineering.
In this work, we explore emerging deep learning techniques to reduce the burden of feature engineering. The goal of this study is to develop an autonomous system that can classify NFRs into multiple classes based on a labelled corpus. In the first section of the thesis, we standardise the NFR ontology and annotations to produce a corpus based on five attributes: usability, reliability, efficiency, maintainability, and portability. In the second section, the design and implementation of four neural networks, namely the artificial neural network (ANN), convolutional neural network (CNN), long short-term memory (LSTM), and gated recurrent unit (GRU), are examined to classify NFRs.
These models necessitate a large corpus. To overcome this limitation, we propose a new paradigm for data augmentation. This method uses a sort-and-concatenate strategy to combine two phrases from the same class, resulting in a two-fold increase in data size while keeping the domain vocabulary intact. We compared our method to a baseline (no augmentation) and an existing approach, Easy Data Augmentation (EDA), with pre-trained word embeddings. All training was performed under two data settings: augmentation of the entire dataset before the train/validation split versus augmentation of the training set only. Our findings show that, compared to EDA and the baseline, the NFR classification models improved greatly, and the CNN performed best when trained with our proposed technique in the first setting. However, we saw only a slight boost in the second experimental setup with train-set augmentation alone. We therefore conclude that augmentation of the validation set is required to achieve acceptable results with our proposed approach. We hope that our ideas will inspire new data augmentation techniques, whether generic or task-specific. It would also be useful to apply this strategy to other languages.
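The sort-and-concatenate idea described in the abstract can be illustrated with a short sketch. Note that this is a hypothetical reconstruction, not the thesis's actual implementation: the pairing rule (sorting within each class, then concatenating each sentence with its neighbour) and the function name are assumptions; the thesis itself only states that two phrases from the same class are combined to double the data size.

```python
from collections import defaultdict

def sort_and_concat_augment(samples):
    """Hypothetical sketch of sort-and-concatenate augmentation:
    pair each sentence with another sentence from the same class and
    concatenate them, roughly doubling the data while keeping the
    domain vocabulary intact."""
    by_class = defaultdict(list)
    for text, label in samples:
        by_class[label].append(text)

    augmented = list(samples)  # keep the original samples
    for label, texts in by_class.items():
        texts = sorted(texts)  # assumed: sort within the class
        # concatenate each sentence with its neighbour (wrap at the end)
        for i, text in enumerate(texts):
            partner = texts[(i + 1) % len(texts)]
            augmented.append((text + " " + partner, label))
    return augmented

# Toy NFR sentences labelled with two of the five attributes
data = [("the ui must be intuitive", "usability"),
        ("menus should be easy to navigate", "usability"),
        ("system uptime must exceed 99.9%", "reliability"),
        ("recovery after failure within 5 minutes", "reliability")]
augmented = sort_and_concat_augment(data)
print(len(augmented))  # 8: the 4 originals plus 4 concatenated pairs
```

Because concatenation only reuses sentences already in the corpus, no out-of-vocabulary tokens are introduced, which is the property the abstract highlights.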

Publisher: London South Bank University
Digital Object Identifier (DOI): https://doi.org/10.18744/lsbu.931wy
Publication dates
Print: 14 Oct 2022
Publication process dates
Deposited: 25 Jan 2023



License: CC BY 4.0
File access level: Open

