Abstract

In recent years, the World Wide Web (WWW) has established itself as a popular source of information. An effective approach to searching the vast amount of information available on the internet is essential if we are to make the most of these resources. Visual data is significantly larger and more complex than text and cannot be indexed with text-based indexing algorithms. As a result, Content-Based Image Retrieval (CBIR) has gained widespread attention in the scientific community. When a CBIR system depends on low-level visible features of the user's input image, queries are difficult for the user to formulate and the system does not produce adequate results. To improve task performance, CBIR research therefore focuses on effective feature representations and appropriate similarity measures. In particular, the semantic gap between the low-level pixels of an image and the high-level semantics interpreted by humans has been identified as the root cause of the problem. This study addresses two difficult issues currently facing the e-commerce industry. First, merchants must manually label products and upload product photographs to the platform, and misclassification means a product does not appear in the search results. Second, customers who do not know the exact keywords but only have a general idea of what they want to buy may encounter a bottleneck when placing their orders. By allowing buyers to click on a picture of an object and search for related products without having to type anything, an image-based search algorithm can help e-commerce reach its full potential. Motivated by the recent success of deep learning methods in computer vision (CV), we apply a state-of-the-art deep learning method, the Convolutional Neural Network (CNN), to investigate feature representations and similarity measures. The experimental results presented in this study show that a deep learning approach can address these issues effectively. A proposed Deep Fashion Convolution Neural Network (DFCNN) model that takes advantage of transfer-learning features is used to classify fashion products and predict their performance. The experimental results for image-based search show improved performance on the evaluated performance parameters.
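To make the general idea concrete, the following is a minimal sketch of transfer-learning feature extraction and similarity-based image search of the kind the abstract describes. It is not the paper's DFCNN: the choice of a pretrained ResNet-50 backbone, the cosine-similarity ranking, and all function names and file paths below are illustrative assumptions.

```python
# Hypothetical sketch: pretrained-CNN feature extraction plus cosine-similarity
# retrieval. Not the paper's DFCNN; the backbone and all names are assumptions.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing expected by the pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Transfer learning: reuse a ResNet-50 trained on ImageNet and drop its
# classification head so the network outputs a 2048-d feature vector.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()
backbone.eval()

@torch.no_grad()
def embed(image_path: str) -> torch.Tensor:
    """Map one product photo to an L2-normalized feature vector."""
    img = Image.open(image_path).convert("RGB")
    feat = backbone(preprocess(img).unsqueeze(0))
    return F.normalize(feat, dim=1).squeeze(0)

def search(query_path: str, catalog: dict, k: int = 5):
    """Rank catalog items by cosine similarity to the query image."""
    q = embed(query_path)
    scores = {name: float(q @ vec) for name, vec in catalog.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]

# Example usage (file paths are placeholders):
# catalog = {p: embed(p) for p in ["shirt1.jpg", "dress2.jpg", "shoe3.jpg"]}
# print(search("query.jpg", catalog, k=3))
```

Because the similarity is computed over learned features rather than keywords, a query image of a product retrieves visually related items even when the catalog entry is mislabeled or the shopper cannot name the item, which is the bottleneck the study targets.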
