GPT3Mix: Leveraging large-scale language models for text augmentation. KM Yoo, D Park, J Kang, SW Lee, W Park. arXiv preprint arXiv:2104.08826, 2021. Cited by 240.
TaleBrush: Sketching stories with generative pretrained language models. JJY Chung, W Kim, KM Yoo, H Lee, E Adar, M Chang. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems …, 2022. Cited by 231.
Learning to compose task-specific tree structures. J Choi, KM Yoo, S Lee. Proceedings of the AAAI Conference on Artificial Intelligence 32 (1), 2018. Cited by 230*.
Self-guided contrastive learning for BERT sentence representations. T Kim, KM Yoo, S Lee. arXiv preprint arXiv:2106.07345, 2021. Cited by 215.
What changes can large-scale language models bring? Intensive study on HyperCLOVA: Billions-scale Korean generative pretrained transformers. B Kim. arXiv preprint arXiv:2109.04650, 2021. Cited by 116.
Ground-truth labels matter: A deeper look into input-label demonstrations. KM Yoo, J Kim, HJ Kim, H Cho, H Jo, SW Lee, S Lee, T Kim. arXiv preprint arXiv:2205.12685, 2022. Cited by 88.
Data augmentation for spoken language understanding via joint variational generation. KM Yoo, Y Shin, S Lee. Proceedings of the AAAI Conference on Artificial Intelligence 33 (01), 7402-7409, 2019. Cited by 86.
DialogBERT: Discourse-aware response generation via learning to recover and rank utterances. X Gu, KM Yoo, JW Ha. Proceedings of the AAAI Conference on Artificial Intelligence 35 (14), 12911 …, 2021. Cited by 81.
Memory-efficient fine-tuning of compressed large language models via sub-4-bit integer quantization. J Kim, JH Lee, S Kim, J Park, KM Yoo, SJ Kwon, D Lee. Advances in Neural Information Processing Systems 36, 2024. Cited by 77.
Aligning large language models through synthetic feedback. S Kim, S Bae, J Shin, S Kang, D Kwak, KM Yoo, M Seo. arXiv preprint arXiv:2305.13735, 2023. Cited by 57.
Self-generated in-context learning: Leveraging auto-regressive language models as a demonstration generator. HJ Kim, H Cho, J Kim, T Kim, KM Yoo, S Lee. arXiv preprint arXiv:2206.08082, 2022. Cited by 56.
AlphaTuning: Quantization-aware parameter-efficient adaptation of large-scale pre-trained language models. SJ Kwon, J Kim, J Bae, KM Yoo, JH Kim, B Park, B Kim, JW Ha, N Sung, ... arXiv preprint arXiv:2210.03858, 2022. Cited by 33.
Mutual information divergence: A unified metric for multimodal generative models. JH Kim, Y Kim, J Lee, KM Yoo, SW Lee. Advances in Neural Information Processing Systems 35, 35072-35086, 2022. Cited by 27.
Critic-guided decoding for controlled text generation. M Kim, H Lee, KM Yoo, J Park, H Lee, K Jung. arXiv preprint arXiv:2212.10938, 2022. Cited by 24.
Response generation with context-aware prompt learning. X Gu, KM Yoo, SW Lee. arXiv preprint arXiv:2111.02643, 2021. Cited by 24.
Leveraging class hierarchy in fashion classification. H Cho, C Ahn, KM Yoo, J Seol, S Lee. Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2019. Cited by 24.
Prompt-augmented linear probing: Scaling beyond the limit of few-shot in-context learners. H Cho, HJ Kim, J Kim, SW Lee, S Lee, KM Yoo, T Kim. Proceedings of the AAAI Conference on Artificial Intelligence 37 (11), 12709 …, 2023. Cited by 21.
Variational hierarchical dialog autoencoder for dialog state tracking data augmentation. KM Yoo, H Lee, F Dernoncourt, T Bui, W Chang, S Lee. arXiv preprint arXiv:2001.08604, 2020. Cited by 19.
KMMLU: Measuring massive multitask language understanding in Korean. G Son, H Lee, S Kim, S Kim, N Muennighoff, T Choi, C Park, KM Yoo, ... arXiv preprint arXiv:2402.11548, 2024. Cited by 18.
Generating information-seeking conversations from unlabeled documents. G Kim, S Kim, KM Yoo, J Kang. arXiv preprint arXiv:2205.12609, 2022. Cited by 16*.