Unsupervised Cosegmentation and Phenotypes for Multi-Modal, -View, and -State Imagery

Rubi Quiñones, University of Nebraska - Lincoln

Abstract

Cosegmentation segments the object(s) of interest from the background by analyzing multiple images simultaneously. State-of-the-art cosegmentation methods do not satisfactorily address the challenges of segmenting multiple images of evolving objects, such as growth sequences of plants. Separating the plant from the background is critical in plant phenotyping applications, which aim to compute measurable traits from plant images. Current segmentation algorithms typically use low-level methods to isolate the plant from the background; the results are often inadequate, degrading the performance of subsequent algorithms, including the computation of phenotypes. The goal of this research is to increase segmentation accuracy in the plant phenotyping domain by leveraging cosegmentation techniques. First, we demonstrate the significant bias in current cosegmentation algorithms and datasets by testing their efficacy on a multi-aspect (view, modality, and time) plant imagery dataset. Second, we develop an unsupervised cosegmentation-coattention deep learning framework, called OSC-CO2, that leverages the results of multiple algorithms to increase overall accuracy on a multi-aspect plant image dataset. Finally, we propose a suite of novel phenotypes that combine information from multiple images across an aspect to yield information-rich phenotypes representing higher-order properties of the plant. The efficacy of the segmentation algorithm and the phenotypes is demonstrated using multi-aspect imagery from a high-throughput plant phenotyping facility. The broader impact of this research is the advancement of plant phenomics in understanding a plant's environmental interactions for maximal resilience and yield, indirectly aiding food security and sustainable crop production.
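As a rough illustration of the kind of trait computation the abstract refers to, the sketch below derives two elementary single-image phenotypes (projected area and vertical extent in pixels) from a binary segmentation mask. The function name `simple_phenotypes` and the toy mask are assumptions for illustration only; the dissertation's actual phenotypes operate across multiple images, views, modalities, and time points.

```python
import numpy as np

def simple_phenotypes(mask):
    """Compute two basic image-derived traits from a binary plant mask.

    `mask` is a 2-D boolean array where True marks plant pixels.
    Returns (projected area in pixels, vertical extent in pixels).
    These are illustrative single-image traits, not the multi-image
    phenotypes proposed in the dissertation.
    """
    area = int(mask.sum())  # count of plant pixels
    rows = np.flatnonzero(mask.any(axis=1))  # rows containing any plant pixel
    height = int(rows[-1] - rows[0] + 1) if rows.size else 0
    return area, height

# Toy 5x5 mask: a 3-pixel-tall, 2-pixel-wide "plant" region.
mask = np.zeros((5, 5), dtype=bool)
mask[1:4, 2:4] = True
print(simple_phenotypes(mask))  # (6, 3)
```

The quality of such traits depends directly on the mask, which is why segmentation errors propagate into every downstream phenotype.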

Subject Area

Computer science|Plant sciences|Engineering

Recommended Citation

Quiñones, Rubi, "Unsupervised Cosegmentation and Phenotypes for Multi-Modal, -View, and -State Imagery" (2022). ETD collection for University of Nebraska-Lincoln. AAI29166280.
https://digitalcommons.unl.edu/dissertations/AAI29166280
