881
A semi-supervised learning approach for automated 3D cephalometric landmark identification using computed tomography
Published 2022-01-01“…The proposed method first detects a small number of easy-to-find reference landmarks, then uses them to provide a rough estimate of all the landmarks by utilizing the low-dimensional representation learned by a variational autoencoder (VAE). The anonymized landmark dataset is used for training the VAE. …”
Get full text
Article -
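Several entries in this list rely on the low-dimensional latent space a VAE learns. The key mechanism that makes that latent space trainable is the reparameterization trick, sketched below in plain NumPy (shapes and values are illustrative, not taken from any of the cited papers):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Draw z = mu + sigma * eps with eps ~ N(0, I).

    Writing the sample this way keeps the randomness in eps, so during
    VAE training gradients can flow through mu and log_var.
    """
    sigma = np.exp(0.5 * log_var)
    eps = rng.standard_normal(mu.shape)
    return mu + sigma * eps

# A hypothetical encoder output for one input, mapped to a 2-D latent code.
rng = np.random.default_rng(0)
mu = np.array([0.5, -0.5])       # encoder mean
log_var = np.array([0.0, 0.0])   # encoder log-variance (sigma = 1)
z = reparameterize(mu, log_var, rng)
```

Averaging many such samples recovers `mu`, which is why the deterministic mean is often used as the "low-dimensional representation" at inference time.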
882
Deep learning model for smart wearables device to detect human health conduction
Published 2024-12-01“…Training on raw data is done using a Variational Autoencoder (VAE). While avoiding reconstruction errors, we aim to obtain as compact a set of features as possible. …”
Get full text
Article -
883
A Rumor Detection Method Based on Adaptive Fusion of Statistical Features and Textual Features
Published 2022-08-01“…Statistical features were extracted by encoding statistical information through a variational autoencoder. We extracted semantic features and sequence features as textual features through a parallel network comprising a convolutional neural network and a bidirectional long short-term memory network. …”
Get full text
Article -
884
Affective Neural Responses Sonified through Labeled Correlation Alignment
Published 2023-06-01“…The evaluation uses a Vector Quantized Variational AutoEncoder to create an acoustic envelope from the tested Affective Music-Listening database. …”
Get full text
Article -
885
Uncovering and Mitigating Algorithmic Bias through Learned Latent Structure
Published 2019“…Our algorithm fuses the original learning task with a variational autoencoder to learn the latent structure within the dataset and then adaptively uses the learned latent distributions to re-weight the importance of certain data points while training. …”
Get full text
Article -
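Entry 885 re-weights training points adaptively using the latent distributions a VAE has learned. A minimal NumPy sketch of that idea, assuming per-dimension histogram density estimates over latent codes (function and parameter names here are hypothetical, not the paper's API):

```python
import numpy as np

def latent_reweight(z, bins=10, alpha=0.01):
    """Weight each sample inversely to its estimated latent-space density,
    so under-represented regions are up-weighted during training.

    z: (n_samples, n_latent_dims) array of latent codes.
    alpha: smoothing constant that caps the weight of extremely rare points.
    """
    weights = np.ones(len(z))
    for d in range(z.shape[1]):
        # Per-dimension histogram density estimate of the latent marginal.
        hist, edges = np.histogram(z[:, d], bins=bins, density=True)
        idx = np.digitize(z[:, d], edges[1:-1])  # bin index of each sample
        weights *= 1.0 / (hist[idx] + alpha)
    return weights / weights.sum()

rng = np.random.default_rng(0)
w = latent_reweight(rng.standard_normal((200, 2)))
```

Samples in sparse latent regions (e.g. faces with rare attributes, in the paper's setting) receive larger normalized weights than samples in dense modes.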
886
Characterizing chromatin folding coordinate and landscape with deep learning
Published 2022“…We applied a deep-learning approach, variational autoencoder (VAE), to analyze the fluctuation and heterogeneity of chromatin structures revealed by single-cell imaging and to identify a reaction coordinate for chromatin folding. …”
Get full text
Article -
887
Single-nucleus cross-tissue molecular reference maps toward understanding disease gene function
Published 2023“…Here, we applied four single-nucleus RNA sequencing methods to eight diverse, archived, frozen tissue types from 16 donors and 25 samples, generating a cross-tissue atlas of 209,126 nuclei profiles, which we integrated across tissues, donors, and laboratory methods with a conditional variational autoencoder. Using the resulting cross-tissue atlas, we highlight shared and tissue-specific features of tissue-resident cell populations; identify cell types that might contribute to neuromuscular, metabolic, and immune components of monogenic diseases and the biological processes involved in their pathology; and determine cell types and gene modules that might underlie disease mechanisms for complex traits analyzed by genome-wide association studies.…”
Get full text
Article -
888
Peak learning of mass spectrometry imaging data using artificial neural networks
Published 2023“…Therefore, we assess if a probabilistic generative model based on a fully connected variational autoencoder can be used for unsupervised analysis and peak learning of MSI data to uncover hidden structures. …”
Get full text
Article -
889
A domain knowledge-informed design space exploration methodology for mechanical layout design
Published 2024“…This is realised by constructing a layout generation variational autoencoder (LGVAE) model, which uses a latent space as an interface to generate the layouts. …”
Get full text
Journal Article -
890
Molecular generation using gated graph convolutional neural networks and reinforcement learning
Published 2019“…For this purpose, we build upon an existing state-of-the-art architecture called Junction Tree Variational Autoencoder (JT-VAE), which learns continuous latent vector representations for molecular graphs. …”
Get full text
Final Year Project (FYP) -
891
A Study on the Effectiveness of Deep Learning-Based Anomaly Detection Methods for Breast Ultrasonography
Published 2023-03-01“…Herein, we specifically compared the sliced-Wasserstein autoencoder with two representative unsupervised learning models, the autoencoder and the variational autoencoder. The anomalous region detection performance is estimated with the normal region labels. …”
Get full text
Article -
892
Semantic Information Enhanced Network Embedding with Completely Imbalanced Labels
Published 2022-11-01“…The problem of data incompleteness has become intractable for network representation learning (NRL) methods, causing existing NRL algorithms to fall short of the expected results. Despite numerous efforts to solve the issue, most previous methods focus mainly on the lack of label information and rarely consider data imbalance, especially the complete-imbalance problem in which the labels of certain classes are entirely missing. Learning algorithms for such problems are still being explored; for example, some neighborhood feature aggregation processes prefer to focus on network structure information while disregarding the relationships between attribute features and semantic features, whose utilization could enhance the representation results. To address these problems, this paper proposes a semantic information enhanced network embedding with completely imbalanced labels (SECT) method that combines attribute features and structural features. First, SECT introduces an attention mechanism into the supervised learning stage to obtain the semantic information vector, taking into account the relationship between the attribute space and the semantic space. Second, a variational autoencoder is applied to extract structural features in an unsupervised mode to enhance the robustness of the algorithm. Finally, both semantic and structural information are integrated in the embedding space. Compared with two state-of-the-art algorithms on the public datasets Cora and Citeseer, the node vectors obtained by SECT achieve better node classification results, with Micro-F1 gains of 0.86%~1.97%. The node visualization results also show that, compared with other algorithms, the vectors produced by SECT yield larger distances between clusters of different classes, more compact clusters of the same class, and clearer class boundaries. These experimental results demonstrate the effectiveness of SECT, which benefits mainly from a better fusion of semantic information in the low-dimensional embedding space and thus greatly improves node classification performance under completely imbalanced labels.…”
Get full text
Article -
893
Design of an integrated model with temporal graph attention and transformer-augmented RNNs for enhanced anomaly detection
Published 2025-01-01“…We employ a Multimodal Variational Autoencoder (MVAE) that fuses video, audio, and motion sensor information in a manner resistant to noise and missing samples. …”
Get full text
Article -
894
Improving spleen segmentation in ultrasound images using a hybrid deep learning framework
Published 2025-01-01“…Specifically, our approach achieved a mean Intersection over Union (mIoU) of 94.17% and a mean Dice (mDice) score of 96.82%, surpassing models such as the Splenomegaly Segmentation Network (SSNet), U-Net, and variational autoencoder-based methods. The proposed method also achieved a Mean Percentage Length Error (MPLE) of 3.64%, further demonstrating its accuracy. …”
Get full text
Article -
895
Compressing gene expression data using multiple latent space dimensionalities learns complementary biological representations
Published 2020-05-01“…We identify more curated pathway gene sets significantly associated with individual dimensions in denoising autoencoder and variational autoencoder models trained using an intermediate number of latent dimensionalities. …”
Get full text
Article -
896
Leveraging spatial transcriptomics data to recover cell locations in single-cell RNA-seq with CeLEry
Published 2023-07-01“…CeLEry has an optional data augmentation procedure via a variational autoencoder, which improves the method’s robustness and allows it to overcome noise in scRNA-seq data. …”
Get full text
Article -
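Entry 896 mentions an optional VAE-based data augmentation step for robustness to noise. The core move, replicating latent codes and jittering them with Gaussian noise before decoding, can be sketched as follows (a toy illustration, not CeLEry's actual API):

```python
import numpy as np

def augment_latents(z, n_copies, noise_scale, rng):
    """Replicate each latent code n_copies times and add Gaussian jitter.

    Decoding the jittered codes (decoder not shown) would yield synthetic
    samples near the originals, enlarging the effective training set.
    """
    reps = np.repeat(z, n_copies, axis=0)  # (n * n_copies, d)
    return reps + noise_scale * rng.standard_normal(reps.shape)

rng = np.random.default_rng(1)
z = rng.standard_normal((5, 8))            # 5 cells, 8 latent dimensions
z_aug = augment_latents(z, n_copies=4, noise_scale=0.1, rng=rng)
```

Keeping `noise_scale` small relative to the latent posterior scale ensures the synthetic samples stay plausible neighbors of the originals.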
897
TRAFFIC CONTROL RECOGNITION WITH AN ATTENTION MECHANISM USING SPEED-PROFILE AND SATELLITE IMAGERY DATA
Published 2022-06-01“…In this paper, instead of using expensive surveying methods, we propose an automatic way based on a Conditional Variational Autoencoder (CVAE) to recognize traffic regulators, i.e., arm rules at intersections, by leveraging the GPS data collected from vehicles and the satellite imagery retrieved from digital maps, i.e., Google Maps. …”
Get full text
Article -
898
Model Selection of Hybrid Feature Fusion for Coffee Leaf Disease Classification
Published 2023-01-01“…First, we propose several hybrid models to extract the feature information in the input images by combining MobileNetV3, Swin Transformer, and a variational autoencoder (VAE). MobileNetV3, acting on the inductive bias of locality, can extract image features that are closer to one another (local features), while the Swin Transformer is able to extract feature interactions that are further apart (high-level features). …”
Get full text
Article -
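Entry 898 fuses features from different backbones. The simplest form of such hybrid fusion is to L2-normalize each branch (so neither dominates by scale) and concatenate, sketched here with random stand-ins for the MobileNetV3 and Swin Transformer outputs (dimensions are illustrative assumptions):

```python
import numpy as np

def fuse_features(local_feat, high_level_feat, eps=1e-8):
    """Scale-match both branches with L2 normalization, then concatenate
    into one hybrid descriptor for a downstream classifier."""
    local_feat = local_feat / (np.linalg.norm(local_feat) + eps)
    high_level_feat = high_level_feat / (np.linalg.norm(high_level_feat) + eps)
    return np.concatenate([local_feat, high_level_feat])

rng = np.random.default_rng(3)
local = rng.standard_normal(576)   # stand-in for a MobileNetV3 embedding
high = rng.standard_normal(768)    # stand-in for a Swin Transformer embedding
hybrid = fuse_features(local, high)
```

Normalizing before concatenation is one common design choice; learned fusion weights or attention over the branches are alternatives the paper's "adaptive" variants point toward.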
899
Non-Autoregressive Transformer Based Ego-Motion Independent Pedestrian Trajectory Prediction on Egocentric View
Published 2023-01-01“…The proposed model, referred to as the TransPred network in this paper, is composed of three main modules: vehicle motion compensation, non-autoregressive transformer, and conditional variational autoencoder (CVAE). The transformer structure is employed to effectively handle raw images and the historical trajectory of the target pedestrian, enabling the generation of advanced future predictions. …”
Get full text
Article -
900
A novel automatic cough frequency monitoring system combining a triaxial accelerometer and a stretchable strain sensor
Published 2021-05-01“…The data from all the participants were categorized into a training dataset and a test dataset. Using a variational autoencoder, a deep learning algorithm, the components of the test dataset were automatically judged to be a “cough unit” or a “non-cough unit”. …”
Get full text
Article