Showing 1 - 20 results of 555 for search '(till OR (fine OR (call OR all))) ((drago OR train) OR dragns)*', query time: 0.19s
  1.

    The next train = 下一班车 by Lou, Mei Jun

    Published 2018
    Final Year Project (FYP)
  2.

    Investigating Fine-Tuning of Language Models for Multiple-Choice Questions by Wang, Ivy A.

    Published 2024
    “…We specifically investigate training data properties related to positional bias in fine-tuned language model performance on correctly answering MCQs. …”
    Thesis
  3.

    Investigating fine-tuning of large language models for text summarisation by Khaliq, Usama, Patel, Preeti

    Published 2024
    “…In contrast, bigger models, like GPT-4, with its 1.7 trillion parameters, generated near-perfect summaries without having been trained on a specific dataset. The limited performance increase from the fine-tuned models was likely due to small datasets and medium-sized LLMs. …”
    Conference or Workshop Item
  4.
  5.
  6.
  7.
  8.
  9.

    Design of a linear motor-based magnetic levitation train prototype by Mohd Zaidi, Muhammad Syafiq, Mohd Hassan, Siti Lailatul, Abdul Halim, Ili Shairah, Sulaiman, Nasri

    Published 2024
    “…This study explores the modelling of a magnetic levitation train and its implementation using a microcontroller. …”
    Article
  10.

    Bystanders by Rowe, Mary, Giraldo-Kerr, Anna

    Published 2024
    “…If they take helpful action, they may be called “active” or “positive” bystanders, or “up-standers.” …”
    Book chapter
  11.

    Who learns when workers are trained? a case of safety training of maintenance contractors’ workers for a major petrochemical plant shutdown by A., Fakhru'l-Razi, S.E., Iyuke, M.B., Hassan, M.S., Aini

    Published 2003
    “…Subsequently, detailed correlation using the model was performed on all Ei = f(A, X, L, P, Rp, RR, T); plots of training effectiveness vs Ei as percentages gave good quantitative parameters for further simulations. …”
    Article
  12.
  13.
  14.
  15.

    Evaluating Adaptive Layer Freezing through Hyperparameter Optimization for Enhanced Fine-Tuning Performance of Language Models by Figueroa, Reinaldo

    Published 2024
    “…Additionally, we explore which layers inside the models usually hold more contextual information from pre-training that might be valuable to keep ‘frozen’ when fine-tuning on small datasets. …”
    Thesis
  16.

    An Effective Med-VQA Method Using a Transformer with Weights Fusion of Multiple Fine-Tuned Models by Al-Hadhrami, Suheer, Menai, Mohamed El Bachir, Al-Ahmadi, Saad, Alnafessah, Ahmad

    Published 2024
    “…The second model, the greedy-soup-based model, uses a greedy soup technique based on the fusion of multiple fine-tuned models to set the model parameters. The greedy soup selects the model parameters by fusing the model parameters that have significant performance on the validation accuracy in training. …”
    Article
  17.
  18.
  19.

    The training intensity distribution of marathon runners across performance levels by Muniz-Pumares, Daniel, Hunter, Ben, Meyler, Samuel, Maunder, Ed, Smyth, Barry

    Published 2024
    “…Results: Training volume across all runners was 45.1±26.4 km·wk⁻¹, but the fastest runners within the dataset (marathon time 120-150 min) accumulated >3 times more volume than slower runners. …”
    Article
  20.

    Effective image synthesis for effective deep neural network training by Cui, Kaiwen

    Published 2024
    “…To address this issue, a solution called data-limited image generation has been proposed. …”
    Thesis-Doctor of Philosophy