Showing 881 - 900 results of 1,212 for search '"variational autoencoder"', query time: 0.48s
  1. 881

    A semi-supervised learning approach for automated 3D cephalometric landmark identification using computed tomography by Hye Sun Yun, Chang Min Hyun, Seong Hyeon Baek, Sang-Hwy Lee, Jin Keun Seo

    Published 2022-01-01
    “…The proposed method first detects a small number of easy-to-find reference landmarks, then uses them to provide a rough estimate of all the landmarks by utilizing the low-dimensional representation learned by a variational autoencoder (VAE). The anonymized landmark dataset is used for training the VAE. …”
    Get full text
    Article
  2. 882

    Deep learning model for smart wearables device to detect human health conduction by Rathod Hiral Yashwantbhai, Haresh Dhanji Chande, Sachinkumar Harshadbhai Makwana, Payal Prajapati, Archana Gondalia, Pinesh Arvindbhai Darji

    Published 2024-12-01
    “…Training on the raw data is done using a Variational Autoencoder (VAE). While avoiding reconstruction errors, we aim to obtain as compact a set of features as possible. …”
    Get full text
    Article
  3. 883

    A Rumor Detection Method Based on Adaptive Fusion of Statistical Features and Textual Features by Ziyan Zhang, Zhiping Dan, Fangmin Dong, Zhun Gao, Yanke Zhang

    Published 2022-08-01
    “…Statistical features were extracted by encoding statistical information through a variational autoencoder. We extracted semantic features and sequence features as textual features through a parallel network comprising a convolutional neural network and a bidirectional long short-term memory network. …”
    Get full text
    Article
  4. 884

    Affective Neural Responses Sonified through Labeled Correlation Alignment by Andrés Marino Álvarez-Meza, Héctor Fabio Torres-Cardona, Mauricio Orozco-Alzate, Hernán Darío Pérez-Nastar, German Castellanos-Dominguez

    Published 2023-06-01
    “…The evaluation uses a Vector Quantized Variational AutoEncoder to create an acoustic envelope from the tested Affective Music-Listening database. …”
    Get full text
    Article
  5. 885

    Uncovering and Mitigating Algorithmic Bias through Learned Latent Structure by Soleimany, Ava, Amini, Alexander A, Schwarting, Wilko, Bhatia, Sangeeta N, Rus, Daniela L

    Published 2019
    “…Our algorithm fuses the original learning task with a variational autoencoder to learn the latent structure within the dataset and then adaptively uses the learned latent distributions to re-weight the importance of certain data points while training. …”
    Get full text
    Article
  6. 886

    Characterizing chromatin folding coordinate and landscape with deep learning by Xie, Wen Jun, Qi, Yifeng, Zhang, Bin

    Published 2022
    “…We applied a deep-learning approach, variational autoencoder (VAE), to analyze the fluctuation and heterogeneity of chromatin structures revealed by single-cell imaging and to identify a reaction coordinate for chromatin folding. …”
    Get full text
    Article
  7. 887

    Single-nucleus cross-tissue molecular reference maps toward understanding disease gene function by Eraslan, Gökcen, Drokhlyansky, Eugene, Anand, Shankara, Fiskin, Evgenij, Subramanian, Ayshwarya, Slyper, Michal, Wang, Jiali, Van Wittenberghe, Nicholas, Rouhana, John M, Waldman, Julia, Ashenberg, Orr, Lek, Monkol, Dionne, Danielle, Win, Thet Su, Cuoco, Michael S, Kuksenko, Olena, Tsankov, Alexander M, Branton, Philip A, Marshall, Jamie L, Greka, Anna, Getz, Gad, Segrè, Ayellet V, Aguet, François, Rozenblatt-Rosen, Orit, Ardlie, Kristin G, Regev, Aviv

    Published 2023
    “…Here, we applied four single-nucleus RNA sequencing methods to eight diverse, archived, frozen tissue types from 16 donors and 25 samples, generating a cross-tissue atlas of 209,126 nuclei profiles, which we integrated across tissues, donors, and laboratory methods with a conditional variational autoencoder. Using the resulting cross-tissue atlas, we highlight shared and tissue-specific features of tissue-resident cell populations; identify cell types that might contribute to neuromuscular, metabolic, and immune components of monogenic diseases and the biological processes involved in their pathology; and determine cell types and gene modules that might underlie disease mechanisms for complex traits analyzed by genome-wide association studies.…”
    Get full text
    Article
  8. 888

    Peak learning of mass spectrometry imaging data using artificial neural networks by Abdelmoula, Walid M, Lopez, Begona Gimenez-Cassina, Randall, Elizabeth C, Kapur, Tina, Sarkaria, Jann N, White, Forest M, Agar, Jeffrey N, Wells, William M, Agar, Nathalie YR

    Published 2023
    “…Therefore, we assess if a probabilistic generative model based on a fully connected variational autoencoder can be used for unsupervised analysis and peak learning of MSI data to uncover hidden structures. …”
    Get full text
    Article
  9. 889

    A domain knowledge-informed design space exploration methodology for mechanical layout design by Li, Kangjie, Gao, Yicong, Lou, Shanhe

    Published 2024
    “…This is realised by constructing a layout generation variational autoencoder (LGVAE) model, which uses a latent space as an interface to generate the layouts. …”
    Get full text
    Journal Article
  10. 890

    Molecular generation using gated graph convolutional neural networks and reinforcement learning by Divyansh, Gupta

    Published 2019
    “…For this purpose, we build upon an existing state-of-the-art architecture called Junction Tree Variational Autoencoder (JT-VAE), which learns continuous latent vector representations for molecular graphs. …”
    Get full text
    Final Year Project (FYP)
  11. 891

    A Study on the Effectiveness of Deep Learning-Based Anomaly Detection Methods for Breast Ultrasonography by Changhee Yun, Bomi Eom, Sungjun Park, Chanho Kim, Dohwan Kim, Farah Jabeen, Won Hwa Kim, Hye Jung Kim, Jaeil Kim

    Published 2023-03-01
    “…Herein, we specifically compared the sliced-Wasserstein autoencoder with two representative unsupervised learning models, the autoencoder and the variational autoencoder. The anomalous region detection performance is estimated with the normal region labels. …”
    Get full text
    Article
  12. 892

    Semantic Information Enhanced Network Embedding with Completely Imbalanced Labels by FU Kun, GUO Yun-peng, ZHUO Jia-ming, LI Jia-ning, LIU Qi

    Published 2022-11-01
    “…The problem of data incompleteness has become an intractable problem for network representation learning (NRL) methods, preventing existing NRL algorithms from achieving the expected results. Although numerous efforts have been made to solve this issue, most previous methods mainly focus on the lack of label information and rarely consider the data imbalance phenomenon, especially the complete imbalance problem in which the labels of certain classes are entirely missing. Learning algorithms for such problems are still being explored; for example, some neighborhood feature aggregation processes prefer to focus on network structure information while disregarding the relationships between attribute features and semantic features, whose utilization may enhance representation results. To address the above problems, a semantic information enhanced network embedding with completely imbalanced labels (SECT) method that combines attribute features and structural features is proposed in this paper. Firstly, SECT introduces an attention mechanism in the supervised learning to obtain the semantic information vector while considering the relationship between the attribute space and the semantic space. Secondly, a variational autoencoder is applied to extract structural features in an unsupervised mode to enhance the robustness of the algorithm. Finally, both semantic and structural information are integrated in the embedding space. Compared with two state-of-the-art algorithms, the node classification results on the public datasets Cora and Citeseer indicate that the network vectors obtained by the SECT algorithm outperform the others, improving Micro-F1 by 0.86%~1.97%. The node visualization results also show that, compared with other algorithms, the vectors obtained by SECT place different-class clusters farther apart, make same-class clusters more compact, and yield more obvious class boundaries. All these experimental results demonstrate the effectiveness of SECT, which mainly benefits from a better fusion of semantic information in the low-dimensional embedding space, thus greatly improving the performance of node classification tasks under completely imbalanced labels.…”
    Get full text
    Article
  13. 893

    Design of an integrated model with temporal graph attention and transformer-augmented RNNs for enhanced anomaly detection by Sai Babu Veesam, Aravapalli Rama Satish, Sreenivasulu Tupakula, Yuvaraju Chinnam, Krishna Prakash, Shonak Bansal, Mohammad Rashed Iqbal Faruque

    Published 2025-01-01
    “…We employ a Multimodal Variational Autoencoder (MVAE) that fuses video, audio, and motion sensor information in a manner resistant to noise and missing samples. …”
    Get full text
    Article
  14. 894

    Improving spleen segmentation in ultrasound images using a hybrid deep learning framework by Ali Karimi, Javad Seraj, Fatemeh Mirzadeh Sarcheshmeh, Kasra Fazli, Amirali Seraj, Parisa Eslami, Mohamadreza Khanmohamadi, Helia Sajjadian Moosavi, Hadi Ghattan Kashani, Abdoulreza Sajjadian Moosavi, Masoud Shariat Panahi

    Published 2025-01-01
    “…Specifically, our approach achieved a mean Intersection over Union (mIoU) of 94.17% and a mean Dice (mDice) score of 96.82%, surpassing models such as the Splenomegaly Segmentation Network (SSNet), U-Net, and variational autoencoder-based methods. The proposed method also achieved a Mean Percentage Length Error (MPLE) of 3.64%, further demonstrating its accuracy. …”
    Get full text
    Article
  15. 895

    Compressing gene expression data using multiple latent space dimensionalities learns complementary biological representations by Gregory P. Way, Michael Zietz, Vincent Rubinetti, Daniel S. Himmelstein, Casey S. Greene

    Published 2020-05-01
    “…We identify more curated pathway gene sets significantly associated with individual dimensions in denoising autoencoder and variational autoencoder models trained using an intermediate number of latent dimensionalities. …”
    Get full text
    Article
  16. 896

    Leveraging spatial transcriptomics data to recover cell locations in single-cell RNA-seq with CeLEry by Qihuang Zhang, Shunzhou Jiang, Amelia Schroeder, Jian Hu, Kejie Li, Baohong Zhang, David Dai, Edward B. Lee, Rui Xiao, Mingyao Li

    Published 2023-07-01
    “…CeLEry has an optional data augmentation procedure via a variational autoencoder, which improves the method’s robustness and allows it to overcome noise in scRNA-seq data. …”
    Get full text
    Article
  17. 897

    TRAFFIC CONTROL RECOGNITION WITH AN ATTENTION MECHANISM USING SPEED-PROFILE AND SATELLITE IMAGERY DATA by H. Cheng, H. Lei, S. Zourlidou, M. Sester

    Published 2022-06-01
    “…In this paper, instead of using expensive surveying methods, we propose an automatic approach based on a Conditional Variational Autoencoder (CVAE) to recognize traffic regulators, i.e., arm rules at intersections, by leveraging GPS data collected from vehicles and satellite imagery retrieved from digital maps, i.e., Google Maps. …”
    Get full text
    Article
  18. 898

    Model Selection of Hybrid Feature Fusion for Coffee Leaf Disease Classification by Muhamad Faisal, Jenq-Shiou Leu, Jeremie T. Darmawan

    Published 2023-01-01
    “…First, we propose several hybrid models to extract the information feature in the input images by combining MobileNetV3, Swin Transformer, and variational autoencoder (VAE). MobileNetV3, acting on the inductive bias of locality, can extract image features that are closer to one another (local features), while the Swin Transformer is able to extract feature interactions that are further apart (high-level features). …”
    Get full text
    Article
  19. 899

    Non-Autoregressive Transformer Based Ego-Motion Independent Pedestrian Trajectory Prediction on Egocentric View by Yujin Kim, Eunbin Seo, Chiyun Noh, Kyongsu Yi

    Published 2023-01-01
    “…The proposed model, referred to as the TransPred network in this paper, is composed of three main modules: vehicle motion compensation, a non-autoregressive transformer, and a conditional variational autoencoder (CVAE). The transformer structure is employed to effectively handle raw images and the historical trajectory of the target pedestrian, enabling the generation of advanced future predictions. …”
    Get full text
    Article
  20. 900

    A novel automatic cough frequency monitoring system combining a triaxial accelerometer and a stretchable strain sensor by Takehiro Otoshi, Tatsuya Nagano, Shintaro Izumi, Daisuke Hazama, Naoko Katsurada, Masatsugu Yamamoto, Motoko Tachihara, Kazuyuki Kobayashi, Yoshihiro Nishimura

    Published 2021-05-01
    “…The data from all the participants were categorized into a training dataset and a test dataset. Using a variational autoencoder, a deep learning algorithm, the components of the test dataset were automatically judged as being a “cough unit” or “non-cough unit”. …”
    Get full text
    Article