-
981
The Role of Proton Therapy for Prostate Cancer in the Setting of Hip Prosthesis
Published 2024-01-01“…All plans were robustly optimized and featured: (1) combination anterior oblique and lateral proton beams (AoL), (2) PArc, and (3) VMAT. …”
Get full text
Article -
982
Public perceptions of synthetic cooling agents in electronic cigarettes on Twitter.
Published 2024-01-01“…The deep learning RoBERTa (Robustly Optimized BERT Pretraining Approach) model that can be optimized for contextual language understanding was used to classify attitudes expressed in tweets about synthetic cooling agents and identify e-cigarette users. …”
Get full text
Article -
983
Uncertainty Quantification of Performance of Supercritical Carbon Dioxide Compressor
Published 2023-09-01“…The aPC method can be used to quantitatively evaluate the robustness of S-CO2 compressor performance to the uncertain parameters, which forms a basis for the robust optimal design of the S-CO2 compressor. …”
Get full text
Article -
984
Attention-Based Models for Classifying Small Data Sets Using Community-Engaged Research Protocols: Classification System Development and Validation Pilot Study
Published 2022-09-01“…We then trained an attention-based bidirectional long short-term memory unit (Bi-LSTM) on the classified protocols and compared it to transformer models such as Bidirectional Encoder Representations From Transformers (BERT), Bio + Clinical BERT, and Cross-lingual Language Model–Robustly Optimized BERT Pre-training Approach (XLM-RoBERTa). …”
Get full text
Article -
985
RDAOT: Robust Unsupervised Deep Sub-Domain Adaptation Through Optimal Transport for Image Classification
Published 2023-01-01“…We examine label-noise robustness in the source domain and use a ROT (Robust Optimal Transport) loss to preserve robustness during domain adaptation, which lessens the cost of transporting source distributions to the target distributions. …”
Get full text
Article -
986
-
987
RoBERTa-CoA: RoBERTa-Based Effective Finetuning Method Using Co-Attention
Published 2023-01-01“…In 2019, bidirectional encoder representations from transformers (BERT) and a robustly optimized BERT pretraining approach (RoBERTa) were introduced. …”
Get full text
Article -
988
A Large Language Model Screening Tool to Target Patients for Best Practice Alerts: Development and Validation
Published 2023-11-01“…MethodsOur AI screening tool used a BioMed-RoBERTa (Robustly Optimized Bidirectional Encoder Representations from Transformers Pretraining Approach; AllenAI) model to perform classification of physician notes, identifying patients without active bleeding and thus appropriate for a thromboembolism prophylaxis BPA. …”
Get full text
Article -
989
Supervised Text Classification System Detects Fontan Patients in Electronic Records With Higher Accuracy Than ICD Codes
Published 2023-07-01“…Using 80% of the patient data, we trained and optimized multiple machine learning models, support vector machines and 2 versions of RoBERTa (a robustly optimized transformer‐based model for language understanding), for automatically identifying Fontan cases based on notes. …”
Get full text
Article -
990
Robust Handover Optimization Technique with Fuzzy Logic Controller for Beyond 5G Mobile Networks
Published 2022-08-01“…One of the crucial handover (HO) techniques is known as mobility robustness optimization (MRO), which mainly aims to adjust the HO control parameters (HCPs), namely the time-to-trigger (TTT) and the handover margin (HOM). …”
Get full text
Article -
991
Effect of Recent Abortion Legislation on Twitter User Engagement, Sentiment, and Expressions of Trust in Clinicians and Privacy of Health Information: Content Analysis
Published 2023-05-01“…We then trained a Latent Dirichlet Allocation model to select tweets pertinent to the topic of interest and performed a sentiment analysis using a Robustly Optimized Bidirectional Encoder Representations from Transformers Pre-training Approach (RoBERTa) model and a causal impact time series analysis to examine engagement and sentiment. …”
Get full text
Article -
992
DTITD: An Intelligent Insider Threat Detection Framework Based on Digital Twin and Self-Attention Based Deep Learning Models
Published 2023-01-01“…This study proposes a simplified transformer model named DistilledTrans and applies the original transformer model, DistilledTrans, BERT + final layer, Robustly Optimized BERT Approach (RoBERTa) + final layer, and a hybrid method combining pre-trained models (BERT, RoBERTa) with a Convolutional Neural Network (CNN) or Long Short-Term Memory (LSTM) network to detect insider threats. …”
Get full text
Article -
993
Training a Deep Contextualized Language Model for International Classification of Diseases, 10th Revision Classification via Federated Learning: Model Development and Validation St...
Published 2022-11-01“…ResultsThe F1 scores of PubMedBERT, RoBERTa (Robustly Optimized BERT Pretraining Approach), ClinicalBERT, and BioBERT (BERT for Biomedical Text Mining) were 0.735, 0.692, 0.711, and 0.721, respectively. …”
Get full text
Article -
994
-
995
-
996
High-Dimensional Optimal Path Planning and Multi-Timescale Lagrangian Data Assimilation in Stochastic Dynamical Ocean Environments
Published 2024“…We propose coupled methods that allow ocean vehicles to robustly and optimally complete their mission while continuously learning from the new information being collected, updating the Lagrangian and Eulerian fields, their joint probabilities, and the robust optimal control of their future trajectories. We showcase preliminary results using the proposed method. …”
Get full text
Thesis