Search alternatives:
till » still, fill, hill
fine » find, line, five
call » cell
drago » dragon, dragos, drag
train » strain, brain
dragns » dragons, drains, dragos
1.
2. Investigating Fine-Tuning of Language Models for Multiple-Choice Questions
Published 2024. “…We specifically investigate training data properties related to positional bias in fine-tuned language model performance on correctly answering MCQs. …”
Thesis
3. Investigating fine-tuning of large language models for text summarisation
Published 2024. “…In contrast, bigger models, like GPT-4, with its 1.7 trillion parameters, generated near-perfect summaries without having been trained on a specific dataset. The limited performance increase from the fine-tuned models was likely due to small datasets and medium-sized LLMs. …”
Conference or Workshop Item
4. HMM speech recognition with reduced training
Published 2009
Conference Paper
5.
6. OSPC: Multimodal Harmful Content Detection using Fine-tuned Language Models
Published 2024
Article
7. Robustness to training disturbances in SpikeProp Learning
Published 2020
Journal Article
8.
9. Design of a linear motor-based magnetic levitation train prototype
Published 2024. “…This study explores the modelling of a magnetic levitation train and its implementation using a microcontroller. …”
Article
10. Bystanders
Published 2024. “…If they take helpful action, they may be called “active” or “positive” bystanders, or “up-standers.” …”
Book chapter
11. Who learns when workers are trained? A case of safety training of maintenance contractors’ workers for a major petrochemical plant shutdown
Published 2003. “…Subsequently, detailed correlation using the model was performed on all Ei = f(A, X, L, P, Rp, RR, T); the plots of training effectiveness vs. Ei as percentages gave good quantitative parameters for further simulations in the future. …”
Article
12. Internet based training for engineers in the aspect of six sigma
Published 2011
Thesis
13. EyeTrAES: Fine-grained, Low-Latency Eye Tracking via Adaptive Event Slicing
Published 2024
Article
14. Battery plug-in electric vehicle concept design: (motor-drive train and battery pack)
Published 2012
Final Year Project (FYP)
15. Evaluating Adaptive Layer Freezing through Hyperparameter Optimization for Enhanced Fine-Tuning Performance of Language Models
Published 2024. “…Additionally, we explore which layers inside the models usually hold more contextual information from pre-training that might be valuable to keep ‘frozen’ when fine-tuning on small datasets. …”
Thesis
16. An Effective Med-VQA Method Using a Transformer with Weights Fusion of Multiple Fine-Tuned Models
Published 2024. “…The second model, the greedy-soup-based model, uses a greedy soup technique based on the fusion of multiple fine-tuned models to set the model parameters. The greedy soup selects the model parameters by fusing the model parameters that have significant performance on the validation accuracy in training. …”
Article
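The greedy-soup idea described in the snippet above can be sketched in a few lines. This is a minimal, hypothetical illustration only (plain dicts stand in for model weight tensors, and `val_acc` for a real validation-set evaluation), not the implementation from the listed article:

```python
def average_weights(models):
    """Element-wise mean of a list of weight dicts sharing the same keys."""
    keys = models[0].keys()
    return {k: sum(m[k] for m in models) / len(models) for k in keys}

def greedy_soup(candidates, val_acc):
    """Greedy soup: starting from the best fine-tuned model, add each
    remaining candidate to the averaged 'soup' only if doing so does not
    hurt validation accuracy.

    candidates: weight dicts sorted by descending validation accuracy.
    val_acc: callable scoring a weight dict on held-out data.
    """
    soup = [candidates[0]]
    best = val_acc(average_weights(soup))
    for w in candidates[1:]:
        trial = val_acc(average_weights(soup + [w]))
        if trial >= best:          # keep this model only if the soup improves
            soup.append(w)
            best = trial
    return average_weights(soup)
```

In a real setting, `candidates` would be checkpoints from fine-tuning runs with different hyperparameters, and the averaging would run per-tensor over state dicts.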
17. An Unusual Harassment Training That Was Warmly Received and Also Inspired Bystanders
Published 2025
Working Paper
18. Dictionary training for sparse representation as generalization of K-means clustering
Published 2013
Journal Article
19. The training intensity distribution of marathon runners across performance levels
Published 2024. “…Results: Training volume across all runners was 45.1±26.4 km·wk⁻¹, but the fastest runners within the dataset (marathon time 120–150 min) accumulated >3 times more volume than slower runners. …”
Article
20. Effective image synthesis for effective deep neural network training
Published 2024. “…To address this issue, a solution called data-limited image generation has been proposed. …”
Thesis (Doctor of Philosophy)