daily-ai-papers
All credits go to HuggingFace's Daily AI papers (https://huggingface.co/papers) and the research community. 🔉Audio summaries here (https://t.me/daily_ai_papers).
Stars: 87
README:
Summaries auto-generated from HuggingFace's Daily Papers using Gemini and GitHub Actions. All credits go to the research and HuggingFace communities. 🔉 You can get audio summaries via OpenAI's text-to-speech API on Telegram. Note: Authors may be listed by their HuggingFace IDs. Additionally, summaries are generated by LLM and may contain mistakes. You can see the prompt used here here. |
Title | Authors | Summary |
---|---|---|
Revisit Large-Scale Image-Caption Data in Pre-training Multimodal Foundation Models (Read more on arXiv or HuggingFace) | Chen Chen, Vasileios Saveris, haotiz, Hong-You, jefflai | a) This research investigates the optimal image-caption data composition for pre-training multimodal foundation models, specifically examining the interplay between synthetic captions and original AltText. b) The authors develop a controllable captioning pipeline to generate diverse caption formats (Short Synthetic Captions (SSC), Descriptive Synthetic Captions (DSC), Dense Synthetic Captions (DSC+), and AltText Fusion Captions (AFC)) and evaluate their impact on CLIP, multimodal LLMs (MM1), and diffusion models. c) Combining SSC and AltText during CLIP pre-training yielded the best performance in retrieval tasks, with over a 10% improvement on COCO retrieval compared to using AltText alone. d) AI practitioners should consider a hybrid approach combining both synthetic captions and AltText when pre-training CLIP, as AltText provides data diversity and synthetic captions enhance image-text alignment. The specific ratio of this combination should be explored depending on the desired trade-off. The paper’s findings on the format of captions show DSC+ is preferred by MLLMs while shorter captions are preferred by CLIP, indicating that caption format should be customized to the specific model. Follow-up questions: 1. What are the computational costs and infrastructure requirements associated with implementing the proposed controllable captioning pipeline, especially for generating captions at the scale of datasets like VeCap-300M? 2. Could the performance gains observed by combining synthetic captions and AltText be replicated using alternative filtering methods besides DFN-2B, and what challenges might arise when combining different filtering or captioning approaches? 3. How does the optimal mixture ratio of synthetic captions and AltText change when scaling up CLIP's vision encoder, and what are the implications for training larger multimodal foundation models? |
Video Instruction Tuning With Synthetic Data (Read more on arXiv or HuggingFace) | Wei Li, Chunyuan24, liuziwei7, kimingng, ZhangYuanhan | a) The research aimed to create a high-quality synthetic video instruction-tuning dataset and a corresponding video LMM to improve video understanding beyond simple captioning. b) Researchers developed LLaVA-Video-178K, a synthetic dataset with 178,510 videos and 1.3M instruction samples (captions, open-ended and multiple-choice QA), using GPT-40 and human annotation; they then trained LLaVA-Video, a video LMM, using this dataset and existing visual instruction tuning data, exploring video representation techniques like LLaVA-Video slowFast to maximize frame inclusion. c) LLaVA-Video-7B outperformed LLaVA-OV-7B (a previous top model) in seven out of ten evaluated datasets. On NEXT-QA, adding the LLaVA-Video-178K dataset during training led to a 31.9-point increase in scores. d) This provides AI practitioners with a new high-quality synthetic video instruction tuning dataset and a corresponding LMM, enabling improved development of video understanding models beyond simple captioning. The strong performance increases demonstrate the value of both high-quality, dense annotations and increased frame inclusion within video LMM training. Follow-up Questions: 1. What are the specific details of the LLaVA-Video slowFast implementation, including the algorithms used for slow and fast frame selection and pooling? Appendix B is referenced but not provided, making full evaluation challenging. 2. The paper mentions filtering question-answer pairs generated by GPT-40, but doesn't provide specifics on the acceptance criteria beyond removing duplicates and unhelpful phrases. What were the precise filtering rules used to ensure quality? 3. What were the specific hyperparameters used for training LLaVA-Video, including learning rate, batch size, and optimization strategy? This information is crucial for replicating and building upon the research. |
Loong: Generating Minute-level Long Videos with Autoregressive Language Models (Read more on arXiv or HuggingFace) | Tianwei Xiong, XihuiLiu, bykang, Ikuinen, Epiphqny | a) The research aims to generate minute-long, content-rich videos using autoregressive large language models (LLMs). b) Loong, an autoregressive LLM-based model, is trained on a unified sequence of text and video tokens using a progressive short-to-long training strategy with loss re-weighting and inference techniques like video token re-encoding. c) Loong generates minute-long videos and achieves a Fréchet Video Distance (FVD) score of 432 on a custom benchmark of 27-second videos derived from WebVid, using a 7B parameter model. The paper does not provide quantitative comparisons on publicly available long video generation benchmarks. d) AI practitioners can leverage the proposed progressive training and inference strategies to adapt and extend existing LLM-based video generation methods for creating longer, coherent videos, potentially opening new possibilities in content creation and video understanding. Follow-up questions: 1. What is the impact of different video tokenizer architectures on the overall performance of Loong, and how does the compression ratio affect the quality and fidelity of generated long videos? 2. While the paper mentions a super-resolution and refinement module, it lacks specifics. What specific models and techniques were used for post-processing, and what is their contribution to the final video quality (quantitatively)? 3. How does Loong perform on established long video generation benchmarks, enabling a more direct comparison with state-of-the-art methods like StreamingT2V, FreeNoise, and Gen-L? |
LLaVA-Critic: Learning to Evaluate Multimodal Models (Read more on arXiv or HuggingFace) | Chunyuan24, henghuang, thughost, russwang, txiong23 | a) The research aimed to develop an open-source large multimodal model (LMM) capable of evaluating the performance of other multimodal models across diverse tasks. b) LLaVA-Critic was trained by fine-tuning a pre-trained LLaVA-OneVision model on a 113k sample dataset of critic instruction-following data, incorporating pointwise scoring and pairwise ranking. c) As a judge model, LLaVA-Critic-72B achieved an average Pearson correlation of 0.754 with GPT-40 scores across seven multimodal benchmarks, outperforming the LLaVA-OV-72B baseline (0.634). d) LLaVA-Critic provides a cost-effective, open-source alternative to proprietary models like GPT-4V for evaluating multimodal models, enabling wider access to robust evaluation resources. This is particularly impactful as it reduces reliance on expensive, closed-source APIs for evaluating multimodal models, enabling developers with limited resources to perform rigorous testing and alignment. Follow-Up Questions: 1. Could the authors elaborate on the specific computational resources required for training LLaVA-Critic and its inference latency, to better understand its feasibility for practitioners with varying resource constraints? 2. The paper mentions utilizing LLaVA-Critic for preference learning with DPO. Were other preference learning algorithms like RLHF explored, and if so, how did their performance compare? 3. The paper mentions a v0.5 version of LLaVA-Critic trained on a smaller subset of data. What were the specific limitations or constraints that motivated the creation of this reduced version, and what are the expected performance tradeoffs compared to the full version? |
Contrastive Localized Language-Image Pre-Training (Read more on arXiv or HuggingFace) | Marcin Eichner, Xinze Wang, haotiz, jefflai, Hong-You | a) This research aims to enhance the localization capability of Contrastive Language-Image Pre-training (CLIP) for fine-grained visual understanding, particularly in multimodal large language models (MLLMs). b) The authors introduce Contrastive Localized Language-Image Pre-training (CLOC), incorporating region-text contrastive loss and a "Prompter" module to extract region embeddings from image embeddings given spatial hints. A visually-enriched and spatially-localized captioning pipeline (VESL) generates pseudo-labeled region-text pairs at scale for training. c) CLOC with 2 billion region labels and a ViT-L/14 architecture achieves 71.1% recall@10 on GRIT region retrieval and improves Ferret MLLM performance on referring description VQA by 6.2% compared to baseline CLIP. d) AI practitioners can utilize CLOC as a drop-in replacement for CLIP in MLLMs to improve performance on referring and grounding tasks that require fine-grained visual understanding. Follow-up questions: 1. The paper mentions working on releasing pre-trained checkpoints and the constructed region-text annotations. Have these resources been released, and if so, where can they be accessed? How does the performance of CLOC compare with other more recent, post-CLIP, image-text models that also incorporate regional information? 2. Could the "Prompter" module be adapted or extended to incorporate other spatial hints beyond bounding boxes and text captions, such as segmentation masks or depth information? What would the implications of such an extension be, and what are the expected challenges? |
Depth Pro: Sharp Monocular Metric Depth in Less Than a Second (Read more on arXiv or HuggingFace) | Hugo Germain, Aleksei Bochkovskii, srrichter, msantoso98, amael-apple | a) The research aimed to develop a foundation model for zero-shot metric monocular depth estimation that is fast, accurate, and produces high-resolution depth maps with sharp boundaries. b) Depth Pro uses a multi-scale vision transformer architecture, applying plain ViT encoders at multiple scales and fusing the predictions. The training protocol combines real and synthetic datasets with a two-stage curriculum focusing first on robust feature learning and then on boundary sharpening. c) Depth Pro achieves state-of-the-art zero-shot metric depth accuracy with a δ₁ score of 89.0 on the Sun-RGBD dataset and generates a 2.25-megapixel depth map in 0.3 seconds on a V100 GPU. d) AI practitioners can utilize Depth Pro for applications requiring fast and accurate metric depth estimation, particularly in scenarios like novel view synthesis where sharp boundaries are crucial, without needing camera intrinsics or per-domain fine-tuning. The paper's proposed boundary accuracy metrics based on matting/segmentation data offer a valuable new evaluation tool. Follow-up questions: 1. How does the proposed multi-scale ViT architecture compare in terms of memory consumption to other high-resolution ViT adaptations, especially when dealing with even larger images or videos? 2. The paper mentions limitations with translucent surfaces and volumetric scattering; what specific failure modes are observed in these cases, and are there potential mitigation strategies within the existing architecture or training framework? 3. Could the focal length estimation head be further improved by incorporating self-supervised learning techniques or exploring alternative network architectures specifically designed for focal length prediction? |
Large Language Models as Markov Chains (Read more on arXiv or HuggingFace) | Abdelhakim Benechehab, Oussama Zekri, ievred, NBoulle, ambroiseodt | a) The paper investigates the theoretical underpinnings of large language model (LLM) inference capabilities, specifically characterizing their behavior and generalization ability. b) The authors establish an equivalence between autoregressive LLMs with a vocabulary size T and context window K and Markov chains defined on a finite state space of size O(TK), analyzing the transition matrix and deriving generalization bounds for both pre-training and in-context learning scenarios. c) For a toy model with vocabulary size T=2 and context window K=3, trained on a binary sequence, the transition matrix has size 14x14, and the model approaches its stationary distribution within approximately 300 steps at temperature 1. d) The analysis provides AI practitioners with a framework to understand the generalization capabilities of LLMs in terms of learning Markov chain transition probabilities. The drawn equivalence to Markov chains offers a theoretical basis for interpreting and predicting the behavior of LLMs, especially in in-context learning scenarios. e) The paper lacks details on the architecture and specific training methodology of the "small GPT-like" toy model used in experiments. It also lacks details on how the prompts are tokenized in the in-context learning experiments. Follow-up Questions: 1. How robust is the equivalence between LLMs and Markov Chains to different tokenization methods, especially for numerical data, given the observed sensitivities highlighted in the paper? 2. Can the Markov Chain framework be leveraged to develop more efficient fine-tuning strategies or prompt engineering techniques for specific downstream tasks involving sequential data? 3. How does the sparsity of the transition matrix, quantified in the paper, influence the computational complexity of estimating the stationary distribution and mixing time of LLMs represented as Markov chains? |
CLIP-MoE: Towards Building Mixture of Experts for CLIP with Diversified Multiplet Upcycling (Read more on arXiv or HuggingFace) | Yu Cheng, Jihai Zhang, Spico, Xiaoye08 | This research aims to improve Contrastive Language-Image Pre-training (CLIP) performance by addressing its coarse-grained encoding and information loss. The authors propose Diversified Multiplet Upcycling (DMU), fine-tuning multiple CLIP models with shared parameters (except for Feed-Forward Network layers) using Multistage Contrastive Learning (MCL), then integrating these models as experts into a Mixture of Experts (MoE) architecture. On zero-shot image-text retrieval using the ShareGPT4V dataset, CLIP-MoE achieves a top-1 image-to-text retrieval accuracy of 60.5% on Flickr30k, exceeding the OpenAI CLIP baseline by approximately 22%. This offers AI practitioners a model-agnostic method to enhance CLIP performance without extensive retraining from scratch, which is particularly relevant for resource-constrained settings. Follow-up questions: 1. Could the performance gains observed with CLIP-MoE be replicated with different base CLIP architectures (e.g., larger or smaller ViT variants, ResNet-based CLIP)? 2. How does the choice of the number of experts and the top-k routing strategy affect the performance-efficiency trade-off of CLIP-MoE in different downstream tasks and hardware settings? 3. What are the practical considerations for deploying CLIP-MoE in real-world applications, particularly concerning latency and memory footprint compared to standard CLIP models? |
Eliminating Oversaturation and Artifacts of High Guidance Scales in Diffusion Models (Read more on arXiv or HuggingFace) | Otmar Hilliges, RMW, msadat97 | a) This paper investigates the oversaturation and artifact generation caused by high classifier-free guidance (CFG) scales in diffusion models, aiming to improve generation quality. b) The authors introduce Adaptive Projected Guidance (APG), which decomposes the CFG update into parallel and orthogonal components, down-weighting the parallel component responsible for oversaturation. APG also incorporates rescaling and reverse momentum inspired by gradient ascent optimization. c) APG improved FID scores compared to CFG across multiple models; for example, EDM2-S showed a reduction from 10.42 to 6.49 with a guidance scale of 4. d) APG provides AI practitioners a plug-and-play alternative to CFG that mitigates oversaturation and artifacts at high guidance scales, enabling the use of higher guidance values for enhanced generation quality and alignment with conditional inputs. The most impactful finding is the decomposition of CFG’s update and the subsequent suppression of the parallel component, directly impacting how practitioners can control saturation levels in generated images. Follow-up questions: 1. How does the performance of APG compare to CFG when using different text embedding methods or prompt engineering techniques in text-to-image generation? 2. Could the insights from APG’s decomposition of CFG updates be applied to other guidance methods or even other generative model architectures beyond diffusion models? 3. Are there specific types of conditional inputs (e.g., complex text prompts) where APG's advantages are more pronounced compared to CFG? |
SageAttention: Accurate 8-Bit Attention for Plug-and-play Inference Acceleration (Read more on arXiv or HuggingFace) | Jun Zhu, Pengle Zhang, Jia wei, Jintao Zhang, surfingtomchen | a) The research aimed to develop a quantized attention mechanism for transformers that accelerates inference without significant accuracy degradation. b) SageAttention quantizes Q and K tensors to INT8 after smoothing K by subtracting the mean across tokens, utilizes FP16 accumulators for the PV matrix multiplication, and employs an adaptive quantization strategy to select the fastest kernel per layer while maintaining accuracy. c) SageAttention achieves a 2.1x speedup over FlashAttention2 and an average real speedup of 2.83x compared to original attention implementations across various models including Llama2, CogVideoX, Unidiffuser, UltraPixel, and TIMM. d) AI practitioners can use SageAttention as a plug-and-play replacement for existing attention mechanisms to achieve substantial inference speedups in transformer models with negligible performance loss, particularly beneficial for resource-constrained environments or latency-sensitive applications. e) The paper does not explicitly detail the memory usage reductions achieved by SageAttention. Follow-up questions: 1. What is the memory footprint reduction achieved by SageAttention compared to FP16 attention and other efficient attention methods like FlashAttention2 and xformers? 2. How does the adaptive kernel selection strategy perform in terms of overhead and stability across different hardware and batch sizes? 3. Could the smoothing technique for the K matrix be generalized to other quantization schemes or transformer architectures beyond those tested in the paper? |
MVGS: Multi-view-regulated Gaussian Splatting for Novel View Synthesis (Read more on arXiv or HuggingFace) | Xin Yu, Yida Wang, xiaobiaodu | a) This paper addresses the problem of overfitting to specific views and imprecise 3D geometry in novel view synthesis using Gaussian-based explicit representations like 3D Gaussian Splatting (3DGS). b) The authors introduce Multi-View Gaussian Splatting (MVGS), incorporating multi-view regulated learning, cross-intrinsic guidance, cross-ray densification, and multi-view augmented densification to improve optimization and prevent overfitting. c) MVGS improves NVS performance across various tasks, including a demonstrated improvement of over 1dB PSNR on the Tanks & Temples dataset when integrated with 3DGS and Scaffold-GS compared to their single-view counterparts. d) AI practitioners working with Gaussian-based explicit representations for novel view synthesis can leverage MVGS as a general optimization solution to enhance reconstruction accuracy and view generalization, particularly in challenging scenarios like reflections or dynamic scenes. Follow-up questions: 1. What is the computational overhead of incorporating multi-view training and the proposed densification strategies compared to standard single-view optimization in 3DGS? How does this impact real-time rendering capabilities? 2. The paper mentions performance degradation with excessive multi-view training. What is the optimal number of views (M) in relation to scene complexity and how can this be determined dynamically or automatically? |
L-CiteEval: Do Long-Context Models Truly Leverage Context for Responding? (Read more on arXiv or HuggingFace) | Jianye Hou, Baibei Ji, Juntao Li, Keyan Zhou, ZetangForward | a) This research investigates whether Long-Context Models (LCMs) genuinely utilize provided context for generating responses or rely on inherent knowledge. b) A multi-task benchmark, L-CiteEval, was created, requiring LCMs to generate statements and supporting citations from long contexts (8K-48K tokens) across 11 tasks. Automatic evaluation metrics for both generation quality (e.g., precision, recall, Rouge-L) and citation quality (citation recall, precision, and F1) were used. c) Open-source LCMs lagged significantly behind closed-source models in citation accuracy, with a performance gap of nearly 20 F1 points observed in some synthetic tasks, despite citing a similar number of segments. d) AI practitioners should be aware that current open-source LCMs are prone to generating responses from internal knowledge rather than the provided context, posing risks for faithfulness in applications. The benchmark and its automatic evaluation suite provide a tool for evaluating and improving context utilization in LCM development. e) The paper notes a correlation between LCM attention mechanisms and the citation generation process but doesn't provide details on the strength or nature of this correlation. Follow-up questions: 1. What specific architectural differences between the tested open-source and closed-source LCMs could be contributing to the disparity in citation accuracy? 2. How does the choice of retrieval method in the RAG approach impact both generation and citation quality across different task types and context lengths? 3. Can the observed correlation between attention mechanisms and citation generation be leveraged to develop more explainable or controllable LCMs for long-context tasks? |
Training Language Models on Synthetic Edit Sequences Improves Code Synthesis (Read more on arXiv or HuggingFace) | Rob Fergus, lerrel, upiter | a) This research investigates whether training language models (LLMs) on synthetic code edit sequences, rather than complete programs, improves code synthesis performance, particularly in terms of the trade-off between generation quality and inference-time compute cost. b) The authors develop LintSeq, an algorithm that refactors existing programs into sequences of static error-free edits using a linter. LLMs are then instruction fine-tuned on these synthetic edit sequences and evaluated on code synthesis benchmarks. c) On HumanEval, smaller LLM's (e.g., TinyCodeLM-150M and 400M) fine-tuned on synthetic edit sequences outperform existing code language models of comparable size and achieve a 20% (±3%) absolute improvement in pass@50 compared to baseline fine-tuning on full program code. d) For AI practitioners working with smaller LLMs, this research suggests that fine-tuning on synthetic edit sequences generated using a tool like LintSeq can significantly improve code synthesis performance and provide a more favorable trade-off between computational cost and generation quality, enabling competitiveness with larger models using repeated sampling. Follow-up questions: 1. How does the performance of LintSeq-trained models compare to baseline models on other code synthesis benchmarks beyond HumanEval and MBPP, especially those involving longer or more complex code generation? 2. What are the practical limitations and computational costs associated with generating and storing large datasets of synthetic code edits using LintSeq for training larger LLMs? 3. How robust is the LintSeq approach to different programming languages and how can it be adapted for other code editing tasks besides program synthesis, such as code completion or bug fixing? |
Distilling an End-to-End Voice Assistant Without Instruction Training Data (Read more on arXiv or HuggingFace) | Michael Ryan, Ella Li, zyanzhe, missblanchett, WillHeld | a) The research aimed to develop a Speech Large Language Model (Speech LLM) that generalizes well without requiring instruction training data, addressing the "forgetting" issue observed in models fine-tuned with supervised finetuning (SFT). b) The study employed a cross-modal context distillation method, training a model named Distilled Voice Assistant (DiVA) on the CommonVoice dataset. DiVA leverages a frozen Llama 3 language model and a Q-Former initialized from Whisper, minimizing the L2 distance between audio and text embeddings and the KL Divergence between their output distributions. c) DiVA generalized to Spoken Question Answering, Classification, and Translation tasks. In a user study comparing DiVA with Qwen 2 Audio, DiVA achieved a 72% win rate based on user preference. d) This research provides AI practitioners with a data-efficient and computationally less expensive approach to developing Speech LLMs that generalize well, potentially reducing the reliance on extensive labeled instruction datasets. The significant user preference for DiVA over existing SFT models suggests a potential disconnect between benchmark evaluations and real-world user experience. Follow-up questions: 1. How does DiVA's performance compare to SFT models on a broader range of spoken language understanding tasks beyond those evaluated in the paper? 2. What are the limitations of using context distillation for tasks where prosodic information in speech plays a crucial role, and how can these limitations be addressed? 3. How does the choice of the base LLM affect DiVA’s performance, and could performance be further improved by using a more powerful LLM or by fine-tuning the LLM's parameters? |
MedVisionLlama: Leveraging Pre-Trained Large Language Model Layers to Enhance Medical Image Segmentation (Read more on arXiv or HuggingFace) | Amir Shmuel, Janine Mendola, amanchadha, gurucharan-marthi | a) This research explored enhancing Vision Transformer (ViT) performance for medical image segmentation by integrating frozen transformer blocks from pre-trained Large Language Models (LLMs). b) The study integrated a frozen LLM transformer block within the encoder of a ViT, alongside a proposed Hybrid Attention Mechanism and Multi-Scale Fusion Block. The model was evaluated on 10 medical image segmentation tasks from the Medical Segmentation Decathlon (MSD) dataset. c) The integration of the Llama 3.1 LLM transformer block improved the average Dice score from 0.74 (baseline ViT) to 0.79. d) AI practitioners working on medical image segmentation tasks can leverage pre-trained LLM layers to boost the performance of ViT models without requiring larger datasets or excessive computational resources for LLM training. The paper notes the improved effectiveness seen at higher image resolutions, which could guide practitioners in model selection for specific tasks. Follow-up questions: 1. The paper mentions a Hybrid Attention mechanism. How does this mechanism's design specifically contribute to the observed performance gains, and what are the computational trade-offs compared to standard attention mechanisms in ViTs? 2. Given the observation that lighter LLMs like Yi and Qwen performed well, what specific architectural factors within these models might be contributing to their effectiveness in medical image segmentation compared to heavier models like Llama and Gemma? Further research directly comparing these architectures on more datasets would be very insightful. 3. While the paper focuses on the MSD dataset, how generalizable are these findings to other medical imaging modalities or datasets with varying characteristics (e.g., noise levels, resolution)? Would further investigation on private datasets reveal a similar performance boost? |
Vinoground: Scrutinizing LMMs over Dense Temporal Reasoning with Short Videos (Read more on arXiv or HuggingFace) | Jianrui Zhang, yjlee0222, mucai | a) The research investigates the ability of large multimodal models (LMMs) to perform dense temporal reasoning in short videos. b) A new benchmark dataset, Vinoground, consisting of 1000 short video-caption pairs with temporal counterfactuals, was created and used to evaluate several CLIP-based and text-generative LMMs. Models were tasked with matching videos to captions differing only in temporal ordering of events. c) GPT-40 achieved the highest text score among LMMs at 54.0%, significantly below human performance (~90%), and all CLIP-based models performed worse than random chance. d) The results demonstrate a significant deficiency in current LMMs regarding dense temporal reasoning, even in short videos, highlighting this as a critical area for future development and refinement. The paper's introduction states that a "single-frame bias" exists in current video-language benchmarks and therefore the community has shifted its attention toward more complex challenges posed by long-form video understanding; however, the results reported in this paper suggest that short-form video comprehension is itself a problem that is far from being solved. Follow-up questions: 1. How does the performance of LMMs on Vinoground vary with different video encoding strategies, such as varying the number of sampled frames or using different temporal fusion methods? 2. What specific architectural modifications or training paradigms could be explored to improve LMMs' ability to capture and reason about the temporal dynamics present in videos? 3. Could transfer learning from pre-trained models specialized in action recognition or temporal ordering improve performance on Vinoground, and how could such transfer learning be effectively implemented? |
Synthio: Augmenting Small-Scale Audio Classification Datasets with Synthetic Data (Read more on arXiv or HuggingFace) | manocha, ctnzr, rafaelvalle, ZhifengKong, SreyanG-NVIDIA | This research aims to improve audio classification accuracy with limited labeled data. The Synthio method augments small-scale datasets using synthetic audio generated from a text-to-audio (T2A) diffusion model aligned with the target dataset using preference optimization and prompted with diverse captions generated by LLMs. Evaluation on ten downsampled datasets showed Synthio outperformed baselines by 0.1%-39% in classification accuracy. This implies that AI practitioners can leverage synthetic data generated from aligned T2A models, coupled with diverse captioning techniques, to significantly improve the performance of audio classification models trained on limited data. Follow-up questions: 1. How does the computational cost of Synthio, including LLM prompting and T2A generation, compare to the cost of collecting and labeling more real-world audio data? 2. The paper mentions limitations regarding the T2A model's occasional inability to match generated audio with captions compositionally; how could this limitation be addressed to improve Synthio's applicability to tasks like audio captioning? 3. Could the preference optimization technique used to align the T2A model be adapted or improved for other generative models beyond audio, such as image or text generation? |
Title | Authors | Summary |
---|---|---|
From Code to Correctness: Closing the Last Mile of Code Generation with Hierarchical Debugging (Read more on arXiv or HuggingFace) | Xiaodong Gu, Chengcheng Wan, Songsong Wang, YerbaPage | This research addresses the problem of low pass rates in LLM-generated code due to subtle errors. The authors introduce MGDebugger, which uses a hierarchical, bottom-up debugging strategy, decomposing code into subfunctions and debugging them recursively with LLM-simulated execution and automatically generated test cases. Experiments on HumanEval show MGDebugger improves accuracy by 17.7% over seed generations when using DeepSeek-Coder-V2-Lite (16B). This implies that AI practitioners can significantly improve the correctness of LLM-generated code by adopting hierarchical debugging strategies rather than treating programs as monolithic units. The paper states MGDebugger achieves a 97.6% repair success rate on HumanEval-Fix using DeepSeek-Coder-V2-Lite (16B); however, it doesn't clarify the baseline repair success rate for this dataset/model combination, making it difficult to assess the relative improvement. Follow-up questions: 1. How does MGDebugger's performance compare to traditional symbolic execution or program analysis techniques for debugging, especially in terms of scalability and handling complex codebases? 2. What are the computational resource requirements (e.g., memory, time) of MGDebugger compared to other LLM-based debugging methods, and how do they scale with code size and complexity? 3. Could the hierarchical decomposition strategy be automated further, and what are the potential challenges in applying it to real-world codebases with complex dependencies and interactions between modules? |
Is Preference Alignment Always the Best Option to Enhance LLM-Based Translation? An Empirical Analysis (Read more on arXiv or HuggingFace) | nunonmg, PierreColombo, CelineH, emmanuelmalherbe, hgissbkh | a) This paper investigates the effects of preference-based alignment, particularly Contrastive Preference Optimization (CPO), on the quality of Large Language Model (LLM)-based translations. b) The researchers conducted experiments fine-tuning an LLM translation model with CPO and Supervised Fine-Tuning (SFT), using various quality metrics (xCOMET-QE, CometKiwi, chrF) for alignment and evaluation, with both multi-system and mono-system candidate generation approaches. c) CPO consistently outperformed SFT on high-quality data when aligning with neural metrics like xCOMET-QE, sometimes significantly increasing scores on the alignment metric (e.g., +2.75 for xCOMET-QE in en-xx translations with a multi-system approach). However, it also introduced adverse effects between neural and lexical metrics, and exhibited sensitivity to the chosen candidate systems. d) AI practitioners aligning LLMs for translation should carefully consider the choice of candidate generation systems and potential trade-offs between optimizing neural versus lexical metrics when employing CPO. The instability of CPO across different downstream metrics warrants caution. The mono-system approach offers more control and may mitigate some of these issues while achieving comparable alignment effectiveness. This improved control stems from being able to fine-tune the choice of candidate option quality with greater precision in the mono-system setting. Follow-up questions: 1. How does the computational cost of generating multiple candidates in the mono-system approach compare to the cost of accessing and using multiple external systems in the multi-system approach? 2. Could the instability of CPO be addressed by exploring different values for the β hyperparameter or by modifying the training procedure (e.g., different optimizers, learning rate schedules)? 3. What are the practical implications of the adverse metric effects between neural and lexical metrics for real-world translation applications, where both types of metrics are often considered important? |
LEOPARD : A Vision Language Model For Text-Rich Multi-Image Tasks (Read more on arXiv or HuggingFace) | Zhihan Zhang, Tianqing Fang, Mengzhao Jia, kaixinm, wyu1 | This research aimed to develop a multimodal large language model (MLLM) capable of handling text-rich, multi-image tasks. The researchers curated a one-million-instance instruction-tuning dataset (LEOPARD-INSTRUCT) and implemented an adaptive high-resolution multi-image encoding module based on pixel shuffling. LEOPARD-Idefics2, a variant trained on this dataset, outperformed the previous best-performing open-source MLLM on text-rich multi-image benchmarks by an average of 9.61 points. This suggests that LEOPARD and its associated dataset are valuable resources for developing MLLMs specialized in complex, text-rich, multi-image scenarios. The paper doesn't explicitly state the metric used for the +9.61 point improvement, though it does mention average normalized levenshtein similarity and accuracy in Table 3, making it difficult to understand precisely what this improvement represents. Follow-up questions: 1. What specific metric (e.g., accuracy, F1-score, etc.) was used to calculate the +9.61 point improvement on the multi-image text-rich benchmarks, and on which specific subset of benchmarks was this average calculated? 2. What is the computational cost (e.g., GPU hours, FLOPs) of training LEOPARD compared to baseline models, and how does the adaptive high-resolution encoding module impact inference time? 3. Can the adaptive high-resolution encoding module be effectively applied to other visual encoders besides SigLIP-SO-400M, and are there plans to release the LEOPARD-INSTRUCT dataset publicly? |
ComfyGen: Prompt-Adaptive Workflows for Text-to-Image Generation (Read more on arXiv or HuggingFace) | galchechik, cohenor, yuvalalaluf, adihaviv, rinong | a) This research aims to improve text-to-image generation quality by automatically tailoring workflows to individual user prompts. b) The authors propose two LLM-based approaches: ComfyGen-IC uses an LLM with a pre-computed table of flows and scores for prompt categories to select flows, while ComfyGen-FT fine-tunes an LLM to predict flows based on prompts and target scores. Both leverage ComfyUI, representing workflows as JSON. c) ComfyGen-FT outperforms baseline models and generic workflows on both human preference and prompt alignment benchmarks, achieving a 0.61 overall score on GenEval compared to 0.59 for the best baseline. d) This work indicates that AI practitioners can improve text-to-image generation quality by moving beyond fixed models or generic workflows and adopting prompt-adaptive workflow generation techniques. Specifically, fine-tuning LLMs to predict workflows based on both prompts and target scores shows promise for enhanced performance. Follow-up questions: 1. What are the computational costs and scalability challenges associated with training and deploying ComfyGen-FT, particularly for large datasets and complex workflows? 2. How does the performance of ComfyGen-FT vary across different LLM architectures and sizes, and what are the trade-offs between performance and computational resources? 3. Can the proposed framework be extended to other generative tasks beyond text-to-image generation, such as image editing or video generation, and what adaptations would be necessary? |
Not All LLM Reasoners Are Created Equal (Read more on arXiv or HuggingFace) | Aaron Courville, Daniel Toyama, Alessandro Sordoni, agarwl, arianhosseini | This research investigates the depth of grade-school math (GSM) problem-solving and reasoning capabilities of LLMs. The study evaluates LLM performance on Compositional GSM, a new dataset derived from GSM8K, requiring models to solve chained math problems where the answer to the first question is a variable in the second. Results reveal a significant reasoning gap, defined as the performance difference between solving compositional pairs and individual questions; for example, the smaller, more cost-efficient GPT-40 mini exhibits a 14.2% reasoning gap on compositional GSM despite high accuracy on GSM8K. This implies that instruction-tuning, while effective for single-step problem-solving, does not necessarily translate to improved multi-hop reasoning, and high scores on standard benchmarks may mask deficiencies in compositional reasoning abilities, a critical insight for AI practitioners developing and applying such models. Follow-up Questions: 1. What specific modifications were made to the GSM8K problems to create the Compositional GSM dataset, and how might these modifications differentially impact various LLM architectures or training paradigms? 2. Given the observed overfitting during finetuning on GSM8K, what alternative training strategies could be explored to improve compositional reasoning without sacrificing generalization performance on other tasks? 3. Could the study's findings about the reasoning gap in cost-efficient models be extrapolated to other problem domains beyond grade-school math, and if so, what are the implications for real-world AI applications where resource constraints are a major factor? |
3DGS-DET: Empower 3D Gaussian Splatting with Boundary Guidance and Box-Focused Sampling for 3D Object Detection (Read more on arXiv or HuggingFace) | Dan Xu, Yuanliang, YangCaoCS | a) The paper aims to introduce 3D Gaussian Splatting (3DGS) for 3D object detection, addressing the challenges of ambiguous spatial distribution and excessive background blobs encountered when adapting 3DGS to this task. b) The authors propose a novel method called 3DGS-DET, incorporating two key strategies: 2D Boundary Guidance, which utilizes object boundaries from posed images to train the 3DGS model, and Box-Focused Sampling, which constructs 3D object probability spaces based on 2D bounding boxes for probabilistic sampling of Gaussian blobs. c) On the ScanNet dataset, 3DGS-DET achieves a mean Average Precision (mAP) of 59.9 at an Intersection over Union (IoU) threshold of 0.25, surpassing the baseline 3DGS pipeline by 5.6 points. d) AI practitioners can leverage the proposed 3DGS-DET method to achieve improved performance in 3D object detection tasks by utilizing the explicit and efficient representation offered by 3DGS, enhanced with boundary and sampling strategies. The paper specifically notes that other detectors can potentially use the enhanced 3DGS representations. Follow-up questions: 1. Could the performance of 3DGS-DET be further improved by jointly training the 3DGS representation and the detection network, rather than training them sequentially? 2. How does the computational cost of Boundary Guidance and Box-Focused Sampling compare to other 3D object detection methods, particularly those based on point clouds or voxels? 3. The paper mentions using CAGroup3D and FCAF3D as detectors. Could the specific detector choice significantly impact the results observed? Would other detectors trained on point clouds yield similar improvements from using the 3DGS representations? |
HelpSteer2-Preference: Complementing Ratings with Preferences (Read more on arXiv or HuggingFace) | okuchaiev, gshennvm, trias702, odelalleau, alexwb | a) This paper investigates whether Bradley-Terry style or Regression style reward models are more effective for aligning language models to instructions, and explores combining both approaches. b) The authors collect preference annotations and justifications alongside existing ratings in the HelpSteer2 dataset, enabling a head-to-head comparison of both reward modeling styles. They also experiment with a novel combined approach, initializing a Scaled Bradley-Terry model with a Helpfulness-Only SteerLM Regression model, and further refining it with ExPO. c) The combined reward model (Scaled BT + EXPO) achieves 94.1% on RewardBench, outperforming over 140 other reward models as of October 1, 2024. d) AI practitioners can leverage this combined reward model and the HelpSteer2-Preference dataset for training more accurate reward models, especially for RLHF, and potentially improve the performance of language models at following instructions. Follow-up questions: 1. How does the performance of the combined reward model (Scaled BT + EXPO) vary across different RewardBench categories (Chat, Chat-Hard, Safety, Reasoning), and what are the potential reasons for such variations? 2. What are the computational resource requirements (e.g., memory, FLOPs) for inference with the combined reward model compared to individual Bradley-Terry or Regression models? 3. What specific techniques were used for pre-processing the preference justifications, and how did those pre-processing steps impact the performance of Pairwise Justifier models? |
RATIONALYST: Pre-training Process-Supervision for Improving Reasoning (Read more on arXiv or HuggingFace) | Guoxuan Wang, danyaljj, ChuyuLiu, ylu610, Dongwei | a) The research aims to improve the reasoning capabilities of Large Language Models (LLMs) by addressing the issue of incomplete reasoning chains with implicit rationales. b) The proposed method, RATIONALYST, involves extracting implicit rationales from unlabeled text (The Pile) and reasoning datasets (GSM8K and ECQA), training a model to predict these rationales, and using the predicted rationales to provide process-supervision during LLM inference. c) Fine-tuned from LLaMa-3-8B, RATIONALYST improves the accuracy of reasoning by an average of 3.9% on seven representative reasoning benchmarks, including mathematical, commonsense, scientific, and logical reasoning datasets. d) AI practitioners can use RATIONALYST to enhance the reasoning performance and interpretability of LLMs across various tasks by incorporating a process-supervision mechanism based on implicit rationales extracted from readily available unlabeled data. The improved interpretability is particularly important for debugging and gaining deeper insights into LLM's reasoning process. Follow-up Questions: 1. How does the performance of RATIONALYST scale with larger base LLMs (e.g., LLaMa-3-70B) or more powerful rationale extractors (e.g., GPT-4)? 2. What are the computational costs and infrastructure requirements associated with extracting and filtering rationales from large datasets like The Pile, and how can these be optimized? 3. Could RATIONALYST be adapted for specific domains or tasks by training it on a curated dataset of domain-specific rationales, and how would this impact its performance and generalizability? |
Quantifying Generalization Complexity for Large Language Models (Read more on arXiv or HuggingFace) | maxtiktok, Nrain, zhuokai, Xulianghuang, luohy | This research investigates how task complexity and model size affect the generalization ability of Large Language Models (LLMs). The study uses SCYLLA, a dynamic benchmark generating in-distribution and out-of-distribution data for 20 tasks across varying complexities. Results reveal a "generalization valley," where the performance gap between in-distribution and out-of-distribution data is non-monotonic, peaking at a "critical complexity" that shifts rightward with increasing model size. Specifically, LLaMA-3.1-405B achieved near-perfect generalization scores (0.997 and 0.996) on O(N) and O([N, N²]) tasks, respectively. This suggests that scaling LLM size improves generalization, delaying but not eliminating over-reliance on memorization at higher task complexities. Follow-up questions: 1. How does the specific distribution of OOD data generation in SCYLLA affect the observed generalization valley, and how would these results compare if alternative OOD sampling strategies were employed? 2. Given the implicit reasoning observed in models like o1-mini, what further analysis could be conducted to better understand and potentially leverage these capabilities in downstream tasks or model development? 3. Could the performance of specialized LLMs (e.g., Qwen2.5-Math-7B) at higher complexities be improved by utilizing multi-stage prompting that decomposes complex tasks into sub-tasks within their expertise range? |
EVER: Exact Volumetric Ellipsoid Rendering for Real-time View Synthesis (Read more on arXiv or HuggingFace) | George Kopanas, Alexander Mai, xharlie, dorverbin, phedman | a) The research aims to develop a real-time, differentiable, emission-only volume rendering method that addresses the limitations of existing techniques like 3D Gaussian Splatting (3DGS), particularly "popping" artifacts. b) The proposed method, Exact Volumetric Ellipsoid Rendering (EVER), represents the scene as a collection of constant-density ellipsoids and uses ray tracing to compute the volume rendering integral exactly. This allows for the inclusion of effects like defocus blur and fisheye lens distortion. c) EVER achieves a framerate of 30 FPS at 720p resolution on an NVIDIA RTX4090 on the challenging Zip-NeRF dataset and achieves a lower LPIPS score (0.368) compared to existing real-time methods like 3DGS (0.418) and StopThePop (0.411). d) AI practitioners working on novel view synthesis can use EVER to generate high-quality, pop-free renderings in real-time, enabling applications that require fast and consistent 3D scene representations. The paper does not state the impact on memory usage, nor quantify inference time on hardware other than an NVIDIA RTX4090. Follow-up questions: 1. How does the memory footprint of EVER compare to 3DGS, particularly when scaling to even higher resolution or more complex scenes? 2. Could the constant density assumption of EVER be relaxed to allow for more complex density variations within individual primitives, and how would that impact performance and quality? 3. What is the performance (FPS and quality metrics) of EVER on other commonly used GPUs, besides the NVIDIA RTX 4090 mentioned in the paper? |
E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding (Read more on arXiv or HuggingFace) | Ying Shan, Yang Wu, Zhongang Qi, Zongyang Ma, Ye Liu | a) This research addresses the lack of fine-grained event-level and diverse task assessment in current video-language understanding benchmarks, aiming to create a more comprehensive evaluation for Video Large Language Models (Video-LLMs). b) The authors introduce E.T. Bench, a benchmark with 7.3K samples across 12 tasks and 8 domains, focusing on event-level and time-sensitive understanding of long videos. They also propose E.T. Chat, a novel Video-LLM using embedding matching for timestamp prediction, and E.T. Instruct 164K, a dedicated instruction-tuning dataset. c) State-of-the-art Video-LLMs struggle with E.T. Bench, especially on grounding and dense captioning tasks, while E.T. Chat achieves state-of-the-art performance among open-source models, with a 38.4% Accref (averaged accuracy on referring tasks) on E.T. Bench. d) AI practitioners developing Video-LLMs should consider incorporating finer-grained temporal understanding and multi-event scenarios in training data and model design, prioritizing both spatial and temporal reasoning capabilities for improved performance on complex video understanding tasks. The paper notes potential data leakage in benchmark evaluation due to overlap with existing datasets used for model training, which might affect the validity of zero-shot evaluation. Follow-up questions: 1. Given the limitations of discrete token prediction for timestamps, what other alternative approaches besides embedding matching could be explored for improving temporal understanding in Video-LLMs? 2. How can the E.T. Bench benchmark be improved to mitigate the potential data leakage issue mentioned in the paper and ensure a more robust evaluation of Video-LLMs in zero-shot settings? 3. What specific architectural modifications in E.T. Chat contribute to its superior performance on grounding and dense captioning tasks compared to other state-of-the-art open-source Video-LLMs? |
Closed-loop Long-horizon Robotic Planning via Equilibrium Sequence Modeling (Read more on arXiv or HuggingFace) | Jiazhong Yu, Cao Sheng, Fei Li, feifeiobama, ljh0104 | a) The research aims to improve closed-loop long-horizon robotic planning in LLMs by addressing limitations like unidirectional dependency and lack of error correction. b) The paper proposes "equilibrium sequence modeling," formulating self-refinement as a fixed-point problem solved through iterative refinement and utilizing a nested equilibrium solving process to incorporate environmental feedback efficiently. An experience memory and world model complement the planner. c) Evaluated on VirtualHome-Env, the method achieved a success rate improvement of up to 19% with error correction compared to not using error correction. It shows superior scaling for inference computation. d) This provides AI practitioners a supervised learning approach to train self-refining LLM planners for robotics without needing complex reinforcement learning or process supervision, potentially leading to more robust and efficient long-horizon task completion. Follow-up questions: 1. What are the specific architectural details of the world model used, and how does its performance compare to more complex world models that simulate environmental states rather than just feedback? 2. How does the proposed method's computational cost during training and inference scale with increasing model size and task complexity compared to alternative approaches like Tree-Planner or SELF-REFINE? 3. The paper mentions failure scenarios like hallucination and lack of history awareness. What specific mitigation strategies, beyond the mentioned reasoning techniques, could be explored to address these limitations? |
HarmoniCa: Harmonizing Training and Inference for Better Feature Cache in Diffusion Transformer Acceleration (Read more on arXiv or HuggingFace) | Xinjie Zhang, Jing Liu, Ruihao Gong, Zining Wang, Yushi Huang | a) Objective: To accelerate the inference speed of Diffusion Transformers (DiTs) for image generation tasks by mitigating discrepancies between training and inference in learning-based feature caching methods. b) Methodology: HarmoniCa framework, employing Step-Wise Denoising Training (SDT) to align training with the full denoising trajectory and Image Error Proxy-Guided Objective (IEPO) to incorporate final image error into training. c) Results: HarmoniCa achieved a 1.52x speedup and an FID of 27.61 for PIXART-α 256×256 with a 20-step DPM-Solver++, compared to an FID of 27.68 for the non-accelerated model. d) Implication: AI practitioners can leverage HarmoniCa to significantly reduce inference latency in DiT models without substantial performance degradation, improving practical deployment for high-resolution image generation tasks. This is particularly relevant to generative AI application developers. Follow-Up Questions: 1. How does the performance of HarmoniCa scale with even larger DiT models and higher resolutions beyond those tested in the paper (e.g., greater than 2048x2048)? 2. Could the proxy mechanism in IEPO be further refined to more accurately represent final image error, potentially leading to further performance gains? 3. What is the memory footprint of HarmoniCa during inference, and how does it compare to other acceleration techniques like pruning or quantization, particularly for resource-constrained environments? |
Selective Aggregation for Low-Rank Adaptation in Federated Learning (Read more on arXiv or HuggingFace) | Huijie Fan, Liangqiong-QU, yanranw1, stevezs, gpx333 | a) This paper investigates how to effectively aggregate Low-Rank Adaptation (LoRA) matrices in Federated Learning (FL) for improved performance on downstream tasks. b) The authors introduce Federated Share-A LoRA (FedSA-LORA), where both A and B matrices of the LoRA update are trainable during local training, but only the A matrices (responsible for general knowledge) are aggregated on the server. This method is then generalized to other LoRA variants (rsLoRA and VeRA). c) On the GLUE benchmark's RTE task with a severe non-IID data distribution, FedSA-LoRA achieved 90.20% accuracy, outperforming standard LORA (88.80%) and FFA-LoRA (88.83%). d) AI practitioners can use FedSA-LoRA to efficiently fine-tune large language models in federated learning settings, especially with non-IID data, by reducing communication overhead and improving performance compared to existing methods. The impactful finding, that A matrices capture general knowledge while B matrices learn client-specific knowledge, allows for more targeted aggregation and better generalization across clients. Follow-up questions: 1. How does the performance of FedSA-LoRA scale with the number of clients and the heterogeneity of the data distribution in more complex real-world scenarios beyond the presented experiments? 2. What are the computational and memory overheads of FedSA-LoRA compared to other PEFT methods in federated settings, particularly for very large language models? 3. How robust is FedSA-LoRA to malicious client behavior, and what mitigation strategies could be implemented to enhance its security in adversarial federated learning environments? |
Title | Authors | Summary |
---|---|---|
Law of the Weakest Link: Cross Capabilities of Large Language Models (Read more on arXiv or HuggingFace) | xwhan, ruihou16, xwwang, astonzhang, MingZhong | The paper investigates the under-explored area of cross-capabilities in Large Language Models (LLMs), defined as the intersection of multiple abilities required for complex tasks. The authors introduce CROSSEVAL, a benchmark comprising 1400 human-annotated prompts across seven individual and seven cross-capabilities, and use LLM-based evaluators to assess model responses. Results reveal that cross-capability performance is often constrained by the weakest individual capability, exhibiting a "Law of the Weakest Link," where 38 out of 58 cross-capability scores from 17 models fell below all individual capability scores. This highlights the need to focus on improving weaker capabilities for better overall performance. Follow-up questions: 1. How can CROSSEVAL be extended to encompass a wider range of cross-capabilities and incorporate more nuanced evaluation metrics beyond the 1-5 Likert scale? 2. What specific training strategies can be employed to effectively address the "Law of the Weakest Link" and improve LLM performance in tasks requiring multiple abilities? 3. How can the insights from this research be applied to the development and evaluation of LLM-based agents operating in real-world scenarios? |
TPI-LLM: Serving 70B-scale LLMs Efficiently on Low-resource Edge Devices (Read more on arXiv or HuggingFace) | Hongfang Yu, Mohsen Guizani, Jiaoshen, LIKirin | a) This paper investigates how to efficiently serve large language models (LLMs), specifically 70B-scale models, on resource-constrained edge devices. b) The researchers developed TPI-LLM, a tensor parallel inference system with a sliding window memory scheduler to manage model weights dynamically and a star-based allreduce algorithm for inter-device communication. c) Experimental results on emulated and real testbeds demonstrated that TPI-LLM reduced the time-to-first-token and token latency by over 80% compared to Accelerate and over 90% compared to Transformers and Galaxy. It also reduced the peak memory footprint of Llama 2-70B by 90%, requiring only 3.1 GB of memory per device. d) TPI-LLM offers AI practitioners a viable solution for deploying and running large-scale LLMs on edge devices, addressing privacy concerns and limitations in memory and computing power, thus enabling broader LLM applications on edge devices. Follow-up questions: 1. What is the impact of varying the size of the sliding window on the trade-off between memory footprint and inference speed in real-world scenarios with diverse network conditions? 2. How does TPI-LLM perform with quantized LLMs, and what are the potential trade-offs between model accuracy and efficiency when using quantization on edge devices? 3. Could the star-based allreduce algorithm be further optimized for heterogeneous edge device clusters with varying compute power and network latency characteristics? |
Atlas-Chat: Adapting Large Language Models for Low-Resource Moroccan Arabic Dialect (Read more on arXiv or HuggingFace) | imomayiz, amr-mohamed, khoubrane-yousef, habdine, guokan-shang | This paper investigates adapting large language models (LLMs) for the low-resource Moroccan Arabic dialect, Darija. The researchers construct a large instruction dataset from diverse sources, including existing Darija resources, manually and synthetically created data, and translated English instructions. Fine-tuned 2B and 9B parameter Gemma models, Atlas-Chat, show superior performance compared to other LLMs like LLaMa, Jais, and AceGPT, achieving 58.23% and 81.89% accuracy on DarijaMMLU and Sentiment Analysis, respectively, with the 9B model. This work demonstrates successful LLM adaptation for a low-resource dialect. Follow Up Questions: 1. What specific pre- and post-processing techniques were used for the English-to-Darija translation of the instruction datasets, and how did these impact the final model performance? 2. How does the performance of the smaller 2B model compare to the 9B model in resource-constrained environments, considering factors like inference speed and memory usage? 3. What are the limitations of the current evaluation benchmarks for Darija, and what further work is needed to develop more comprehensive and robust evaluation metrics for this dialect? |
One Token to Seg Them All: Language Instructed Reasoning Segmentation in Videos (Read more on arXiv or HuggingFace) | sebgao, wangpichao, meihaiyang, tonghe, ZechenBai | a) The research aims to develop a video-based multimodal large language model (MLLM) for language-instructed reasoning segmentation in videos, generating temporally consistent masks based on complex language queries. b) VideoLISA, the proposed model, integrates a Sparse Dense Sampling strategy for balancing temporal context and spatial detail, a One-Token-Seg-All approach using a token for cross-frame object association, a large language model (LLM) for reasoning, and the Segment Anything Model (SAM) for mask generation. c) VideoLISA achieved state-of-the-art performance on the MeViS motion-guided video object segmentation benchmark, outperforming previous methods by a large margin (the paper does not quantify this margin). It also outperforms previous methods by achieving 67.7% J&F on Ref-DAVIS-17. d) AI practitioners can leverage VideoLISA for video object segmentation tasks requiring complex reasoning and temporal understanding, potentially unifying image and video segmentation tasks under a single foundation model. The paper suggests post-optimization can further improve mask quality, but the extent of improvement isn't quantified. Follow-up Questions: 1. What is the computational cost of VideoLISA compared to traditional video object segmentation models, and how can it be optimized for real-time applications? 2. How robust is the One-Token-Seg-All approach to long videos with significant object occlusions or transformations, and what strategies could be explored to improve its robustness in such challenging scenarios? 3. The paper mentions the limitations of the MLLM's reasoning capabilities being bounded by the underlying language model. What specific types of reasoning failures were observed, and how can prompt engineering or alternative LLM architectures address these limitations? |
Illustrious: an Open Advanced Illustration Model (Read more on arXiv or HuggingFace) | Junha Lee, leehg57, mhy9910, solbon1212, andyp-nvidia | a) The research aimed to develop an open-source, state-of-the-art anime image generation model, Illustrious, surpassing existing models in terms of animation style, high resolution, dynamic color range, and restoration ability. b) The key methodology involved training on a large, refined dataset of anime images with multi-level captions (tags and natural language descriptions), utilizing a No Dropout Token approach for preserving specific concepts, and training at higher resolutions (up to 2.25MP) to enable high-resolution output. The training used Stable Diffusion XL as a base, with modifications including Cosine Annealing scheduler and Input Perturbation Noise Augmentation. c) Illustrious v1.1 achieved a median CCIP (Character Consistency Image Prompt) score of 0.99 in a character similarity evaluation. The paper notes higher ELO ratings for Illustrious compared to other models in user preference studies, but the specific methodology for these ELO calculations needs further clarification. d) AI practitioners can utilize Illustrious as a high-quality, open-source model for generating anime illustrations at resolutions up to 20MP. The No Dropout Token approach and multi-level caption training methodology may be applicable to other specialized image generation tasks. Follow-up questions: 1. What is the precise formula and methodology used to compute the ELO scores in the user studies, including the composition of user groups, prompting strategies used, and handling of draws? More detailed analysis of the user preference results and their statistical significance would be beneficial. 2. The paper mentions limitations related to text rendering within images. What specific experiments were conducted to investigate this limitation, and what quantitative results were observed? Further investigation of this limitation could aid future research on generating glyphs in stylized images. 3. How does the computational cost of the higher-resolution training and inference compare to lower-resolution approaches, and what trade-offs in terms of memory and training time should practitioners consider when using or adapting Illustrious? |
Flex3D: Feed-Forward 3D Generation With Flexible Reconstruction Model And Input View Curation (Read more on arXiv or HuggingFace) | Filippos Kokkinos, Andrea Vedaldi, philiptorr, JianyuanWang, Junlinh | a) The paper aims to improve the quality of feed-forward 3D object generation from text, single images, or sparse view images. b) Flex3D, a two-stage framework, is proposed. The first stage generates and curates a pool of candidate views using fine-tuned multi-view and video diffusion models and a view selection pipeline. The second stage reconstructs the 3D object as a set of Gaussian points from the curated views using FlexRM, a flexible reconstruction model based on a transformer architecture and a tri-plane representation. A novel training strategy simulates imperfect input views by adding noise to intermediate 3D Gaussian representations. c) In user studies comparing text-to-3D generation, Flex3D achieved a win rate of over 92% compared to state-of-the-art feed-forward models. Quantitatively, Flex3D achieved 0.277 CLIP text similarity and 0.255 VideoCLIP text similarity, outperforming all compared models. d) AI practitioners can utilize Flex3D's framework to generate higher-quality 3D objects from various input modalities. The novel view curation and imperfect data simulation techniques provide robust methods to improve 3D reconstruction quality and generalization capabilities, essential for applications requiring accurate and visually appealing 3D assets. Follow-up questions: 1. The paper mentions initializing the MLP and tri-plane transformer with an off-the-shelf tri-plane NeRF network. Are the specific details of this network and its pre-training available, and how critical is this initialization for FlexRM's performance? 2. While the paper demonstrates improvements on object-centric datasets, how well would Flex3D generalize to more complex scenes containing multiple objects and backgrounds, and what modifications might be necessary for such an extension? 3. The paper focuses on Gaussian splatting as the final 3D representation. Has any investigation been done into the feasibility and performance implications of directly generating meshes or other 3D representations within the Flex3D framework? |
ACE: All-round Creator and Editor Following Instructions via Diffusion Transformer (Read more on arXiv or HuggingFace) | Jingren, chenweix7, chaojiemao, jingfengzhang, jiangzeyinzi | a) The research aims to develop a unified foundational model for diverse visual generation and editing tasks, addressing the limitations of existing models that are often task-specific. b) ACE (All-round Creator and Editor) employs a Diffusion Transformer architecture with novel components including Long-context Condition Unit (LCU) for handling multi-modal and multi-turn inputs, Image Indicator Embedding for image sequence alignment, and a novel data collection pipeline including synthesis and clustering-based methods. c) On the MagicBrush benchmark, ACE achieved a CLIP-I score of 0.9453 for single-turn instruction-guided image editing, outperforming other methods. A user study on the authors' ACE benchmark also showed strong performance across various editing tasks. d) AI practitioners can leverage ACE's unified framework and LCU structure to build multi-modal chat systems and visual agents for complex image generation and editing workflows, potentially streamlining and simplifying existing cumbersome pipelines. The proposed data collection strategy offers efficient methods for acquiring paired image data for training similar models. Follow-up Questions: 1. The paper mentions performance limitations in certain tasks like general editing and style editing compared to larger, task-specific models. Could further analysis of the user study feedback pinpoint specific visual qualities where ACE falls short and guide future model improvements? 2. How does the computational cost of ACE, especially with long-context inputs, scale with the number of input images and turns? Are there optimization strategies planned to improve inference efficiency for real-time applications? 3. While the paper describes the data collection pipeline, details on the Instruction Captioner's architecture and training process are limited. Could further information be provided on the MLLM used, its performance metrics for instruction generation, and the impact of different instruction generation strategies on ACE's overall performance? |
Helpful DoggyBot: Open-World Object Fetching using Legged Robots and Vision-Language Models (Read more on arXiv or HuggingFace) | Xiaolong Wang, Xuxin Cheng, Zipeng Fu, Qi Wu, cbfinn | a) The research aimed to develop a quadrupedal robot system capable of understanding human commands and performing mobile manipulation tasks, such as fetching objects, in unseen indoor environments. b) The system combines a learned low-level controller trained in simulation for agile locomotion and whole-body tilting with pre-trained Vision-Language Models (VLMs) for semantic understanding and command generation. A 1-DoF gripper was designed for object manipulation. c) In real-world tests, the robot achieved a 60% first-attempt success rate in fetching a stuffed toy from a bed, requiring climbing, navigation, and grasping. d) This research demonstrates the potential of integrating simulation-trained low-level controllers with VLMs for enabling zero-shot generalization in robotic mobile manipulation, suggesting a promising approach for developing versatile robot assistants. Follow-up questions: 1. What are the specific architectures and hyperparameters used for the low-level controller (policy network and online estimator) and how were these determined? More detail about the specifics of the network architectures used would be helpful. 2. The paper mentions limitations regarding the gripper's dexterity. What specific modifications or alternative gripper designs are being considered to improve manipulation capabilities, and how might these impact the robot's agility and control? 3. How does the system handle object occlusions during navigation and grasping, and what strategies are being explored to improve robustness in more cluttered and dynamic real-world environments? |
DressRecon: Freeform 4D Human Reconstruction from Monocular Video (Read more on arXiv or HuggingFace) | Shubham Tulsiani, Donglai Xiang, Jeff Tan, gengshan-y, devakramanan | a) The research aims to reconstruct time-consistent 4D human models with loose clothing and handheld objects from monocular videos. b) DressRecon uses a hierarchical bag-of-bones motion model, separating body and clothing deformations, and incorporates image-based priors (pose, normals, optical flow) within a differentiable rendering optimization framework. The model can be refined into explicit 3D Gaussians for interactive rendering. c) On a dataset of 14 challenging sequences from DNA-Rendering, DressRecon achieved an average chamfer distance of 6.411cm, outperforming baseline methods. d) AI practitioners can utilize DressRecon's approach to create high-fidelity, animatable 3D human avatars from single-viewpoint videos, potentially streamlining avatar creation for virtual environments and other applications. The paper does not specify the computational requirements for training or inference. Follow-up questions: 1. What are the memory and computational requirements for training and inference of DressRecon, and how does it scale with video length and resolution? 2. Could the hierarchical motion model be adapted for other types of non-rigid objects beyond clothing and accessories, and what modifications would be necessary? 3. How robust is the method to variations in lighting, background clutter, and occlusions in the input video? |
Visual Context Window Extension: A New Perspective for Long Video Understanding (Read more on arXiv or HuggingFace) | Zhenzhong Chen, hcwei | a) This research aims to improve Large Multimodal Models (LMMs) performance on long video understanding tasks without retraining on large video datasets. b) The authors propose extending the visual context window by adapting the YaRN (Yet Another RoPE for Transformers) method, originally designed for language models, and introduce a progressive pooling strategy to reduce memory consumption. c) On the MLVU benchmark, their method with a 7B parameter LMM outperforms GPT-40. d) AI practitioners can leverage this approach to apply pre-trained LMMs to long videos, benefiting from advances in open-source LMMs without the computational cost of retraining on extensive long video-text paired data. The progressive pooling strategy enables efficient memory management when processing long video sequences. Follow-up questions: 1. How does the performance of visual context window extension compare to retraining LMMs on long video data specifically, in terms of accuracy and computational cost? 2. What are the limitations of the progressive pooling strategy, and are there scenarios where information loss becomes significant despite the focus on preserving spatial details? 3. Could the visual context window extension method be adapted or combined with other memory optimization techniques, such as those used for sparse attention? |
SyntheOcc: Synthesize Geometric-Controlled Street View Images through 3D Semantic MPIs (Read more on arXiv or HuggingFace) | Qing Lian, Xu Yan, Yingjie Cai, Weichao Qiu, Leheng Li | a) The research aimed to develop a framework for generating photorealistic and geometrically-controlled street view images conditioned on 3D occupancy labels. b) The key methodology involves representing 3D occupancy as semantic Multi-Plane Images (MPIs), encoding these MPIs using a 1x1 convolutional encoder, and integrating this into a Stable Diffusion model with cross-view and cross-frame attention. Reweighing strategies address class imbalance and depth-related learning difficulties. c) SyntheOcc achieved a Frechet Inception Distance (FID) of 14.75 on the nuScenes dataset, outperforming baseline methods like BEVGen (FID 25.54) and MagicDrive (FID 16.20). d) AI practitioners can leverage SyntheOcc to generate synthetic datasets for training perception models in autonomous driving, particularly for 3D occupancy prediction, and for creating corner case scenarios for system evaluation. The use of MPIs offers a novel approach for encoding 3D information into 2D diffusion models for enhanced controllability. Follow-up Questions: 1. How does the computational cost of generating MPIs and using the MPI encoder compare to other conditional input methods, such as BEV encodings or text prompts, in terms of memory usage and processing time? 2. What are the limitations of the reweighing strategies, particularly in extremely long-tailed or complex scenarios, and how can these limitations be addressed to improve generation quality and diversity? 3. How robust is the approach to different camera parameters and viewpoints not seen during training, and how could the framework be adapted to handle more diverse camera setups and environments? |
Posterior-Mean Rectified Flow: Towards Minimum MSE Photo-Realistic Image Restoration (Read more on arXiv or HuggingFace) | Michael Elad, Michato, ohayonguy | a) This paper investigates the optimal estimator for minimizing Mean Squared Error (MSE) in photo-realistic image restoration under a perfect perceptual index constraint. b) The proposed Posterior-Mean Rectified Flow (PMRF) algorithm first predicts the posterior mean of the image and then uses a rectified flow model to transport the result to the distribution of ground-truth images. c) On the CelebA-Test blind face restoration benchmark, PMRF achieved a FID score of 37.46, outperforming all other compared methods. d) AI practitioners working on image restoration can use PMRF to potentially achieve lower distortion without sacrificing perceptual quality compared to posterior sampling or GAN-based methods. Follow-up questions: 1. How does the choice of the noise level (σε) added to the posterior mean prediction in PMRF affect the trade-off between MSE and perceptual quality in different restoration tasks and degradation levels? 2. The paper mentions the possibility of reflow to further improve PMRF. Have the authors explored this, and what were the observed impacts on performance and computational cost? 3. How does PMRF's performance compare to other state-of-the-art methods when applied to diverse image datasets beyond faces, such as natural scenes or medical images? |
Title | Authors | Summary |
---|---|---|
MM1.5: Methods, Analysis & Insights from Multimodal LLM Fine-tuning (Read more on arXiv or HuggingFace) | nm-w, pdufter, zhegan27, fly6464, haotiz | a) This research aimed to improve multimodal large language model (MLLM) performance in text-rich image understanding, visual referring and grounding, and multi-image reasoning after pre-training. b) The researchers adopted a data-centric approach, focusing on continual pre-training with high-resolution OCR data, an optimized visual instruction-tuning data mixture for supervised fine-tuning (SFT), and dynamic image splitting for high-resolution image comprehension. c) MM1.5-30B significantly improved performance over its predecessor MM1-30B on tasks such as MathVista (increasing the score from 39.4 to 55.6), DocVQA (from 75.8 to 91.4), and InfoVQA (from 47.3 to 67.3). d) The paper demonstrates the importance of careful data curation and training strategies for improving MLLM performance, even at smaller scales, providing valuable guidance for practitioners developing and fine-tuning MLLMs. The impact of text-only pre-training data on MLLM performance, and how the proportion of such data in pre-training affects the efficiency of transfer learning to SFT is an impactful finding, suggesting that optimization of pre-training data is crucial for effective SFT. Follow-up Questions: 1. The paper mentions the use of in-house synthetic caption data that outperformed public datasets in some settings. Could the authors elaborate on the specific methodology used for generating these in-house captions, including the models, data sources, and any filtering or quality control mechanisms employed? 2. Given the findings on the impact of image resolution in continual pre-training, are there recommendations for optimal resolution ranges for different MLLM scales, considering the trade-off between performance and computational cost? 3. What specific techniques were used for optimizing the "optimized visual instruction-tuning data mixture" mentioned for SFT, and how was the final mixture composition determined? More specifically, how do you decide when the model is overfitting to the data? |
DiaSynth -- Synthetic Dialogue Generation Framework (Read more on arXiv or HuggingFace) | Eng Siong Chng, Tushar Pranav, AlexWuuuu, SkAndMl | a) The paper addresses the scarcity of high-quality, large-scale, domain-specific dialogue datasets for training dialogue systems. b) DiaSynth, a synthetic dialogue generation framework, uses Large Language Models (LLMs) and Chain of Thought (CoT) reasoning to generate dialogues based on user-provided topics, dynamically generated subtopics and personas, and specified conversational characteristics. c) Fine-tuning pretrained language models on synthetic data generated by DiaSynth resulted in a performance improvement of 16.47% compared to base models on a dialogue summarization task using LLaMA-3 as the LLM backbone. d) DiaSynth offers AI practitioners a scalable and cost-effective method for generating synthetic dialogue data for training dialogue systems, especially in domains with limited existing data. The results indicate that synthetic data from moderate-sized open-source LLMs can be a viable alternative to scarce or costly real-world data. Follow-up questions: 1. The paper mentions differing performance across LLMs (LLaMA-3, GPT-4) based on dialogue structure (formal vs. informal). Could further analysis elucidate the specific factors within these structures that influence LLM performance and inform optimal LLM selection for specific application domains? 2. While the paper demonstrates effectiveness in summarization, how does DiaSynth-generated data perform in other downstream tasks relevant to dialogue systems, such as intent detection, slot filling, or sentiment analysis? 3. What are the computational resource requirements and associated costs of using DiaSynth to generate large synthetic datasets, particularly when employing larger LLMs or generating data for diverse domains? |
Ruler: A Model-Agnostic Method to Control Generated Length for Large Language Models (Read more on arXiv or HuggingFace) | yuelin bai, Ziqiang Liu, Yunshui Li, Lei Zhang, Jiaming Li | a) The research investigated the ability of Large Language Models (LLMs) to generate responses of specified lengths, introducing the Target Length Generation Task (TLG). b) A model-agnostic method named RULER, utilizing Meta Length Tokens (MLTs), was proposed and tested on several LLMs. RULER adds an MLT, indicating the desired length, to the input and trains LLMs end-to-end on a dataset augmented with MLTs. c) RULER improved the Flexible Match (FM) score, a measure of adherence to the target length range, by an average of 29.57 across all tested models and length levels. d) AI practitioners can use RULER to improve the control over output length in LLMs, enhancing their ability to adhere to specific length constraints in diverse applications. The paper does not address potential effects of RULER on other LLM performance metrics beyond those related to length control, nor its computational efficiency. Follow-up questions: 1. How does the performance of RULER vary with different training dataset sizes and compositions, particularly with respect to the distribution of target lengths? 2. What is the computational overhead of incorporating RULER, both during training and inference, compared to standard LLM usage? 3. Does RULER impact other performance metrics of the LLMs, such as factual accuracy, reasoning ability, or toxicity of generated text? |
Hyper-Connections (Read more on arXiv or HuggingFace) | banggu, YunyaoMao, Taoer, hongzhihuang, mathfinder | a) This research explores hyper-connections as a learnable alternative to residual connections in neural networks, aiming to address limitations like the seesaw effect between gradient vanishing and representation collapse. b) Hyper-connections introduce learnable depth and width connections within layers, allowing the network to adjust connection strength and dynamically rearrange layers; a dynamic variant (DHC) conditions these connections on the input. c) In large language model pre-training, a model with DHC and an expansion rate of 4 (OLMOE-1B-7B-DHC×4) converged 1.8 times faster and showed a 6-point improvement on ARC-Challenge accuracy compared to a residual connection baseline after training on 500 billion tokens. d) AI practitioners can utilize hyper-connections as a potential drop-in replacement for residual connections, offering potential performance gains and faster convergence, particularly in large language models. The paper also suggests potential applicability in computer vision tasks, but the provided results are limited. Follow-up questions: 1. What is the computational overhead of hyper-connections compared to standard residual connections during both training and inference, especially for very deep networks? 2. How robust are the performance improvements of hyper-connections across different model architectures, datasets, and hyperparameter settings beyond those tested in the paper, particularly in vision tasks where less experimentation is presented? 3. The paper mentions that hyper-connections can learn to rearrange layers. Can further details be provided on how this rearrangement is analyzed and its specific impact on model behavior? |
UniAff: A Unified Representation of Affordances for Tool Usage and Articulation with Vision-Language Models (Read more on arXiv or HuggingFace) | Ce Hao, Zhengkai Jiang, Xibin Yuan, Qiaojun Yu, SiyuanH | This research aims to improve robotic manipulation by creating a unified representation of affordances for both tools and articulated objects. The researchers developed UniAff, a multimodal large language model (MLLM) fine-tuned on a synthetic dataset of 1500 objects with labeled part-level 6D poses, manipulation types, and affordances. UniAff achieved a 56.9% improvement in IOU for detecting functional affordances of tools compared to ManipVQA. This work provides a new model and dataset for object-centric robotic manipulation, potentially improving the generalization of robotic manipulation tasks. It is unclear how the synthetic dataset generation generalizes to the real world or the computational cost of UniAff. Follow-up questions: 1. What are the specific architectural details of the Mixed Visual Encoder used in UniAff, and how were the different visual encoders (CLIP, DINOv2, Q-Former) combined? 2. What is the breakdown of the 19 articulated object categories and 12 tool categories in the synthetic dataset, and what are the specific real-world datasets used to create the synthetic data? 3. How does UniAff perform in real-world settings on a broader range of tasks and objects not represented in the current experimental setup? |
Cottention: Linear Transformers With Cosine Attention (Read more on arXiv or HuggingFace) | Eric C. Larson, TrevorDohm, gmongaras | a) This paper introduces Cottention, a novel attention mechanism designed to address the quadratic memory complexity of softmax attention in transformers. b) Cottention replaces the softmax operation with cosine similarity and rearranges the attention equation to achieve linear memory complexity with respect to sequence length. A custom CUDA kernel was developed for efficient computation, and a learned scalar parameter was introduced to stabilize training. c) On the GLUE benchmark, a BERT model using Cottention achieved an average score of 81.8, compared to 83.1 for the softmax baseline. d) Cottention offers AI practitioners a more memory-efficient alternative to softmax attention, enabling the processing of longer sequences without significant performance degradation, as demonstrated by comparable results on the GLUE benchmark and perplexity on GPT-J language modelling tasks. The paper notes theoretical linear memory complexity with respect to sequence length but acknowledges a discrepancy between theoretical and observed memory usage related to input dimensionality, warranting further investigation. Follow-up Questions: 1. The paper mentions a discrepancy between the theoretical and empirical memory usage with respect to input dimensionality. What further investigations could be conducted to explain this discrepancy and potentially optimize memory usage further? 2. The custom CUDA kernel for Cottention is mentioned but not detailed extensively. What specific optimization strategies were employed in the kernel design, and how do they contribute to the efficiency gains observed? 3. How does the training time and computational cost of Cottention compare to Softmax and other linear attention methods, considering both the forward and backward passes, particularly for very long sequences? |
Image Copy Detection for Diffusion Models (Read more on arXiv or HuggingFace) | Yi Yang, Zhentao Tan, Yifan Sun, WenhaoWang | a) The paper investigates how to detect content replication generated by diffusion models, introducing the task of Image Copy Detection for Diffusion Models (ICDiff). b) A new dataset, Diffusion-Replication (D-Rep), containing 40,000 image-replica pairs with six annotated replication levels, was created using Stable Diffusion V1.5 and LAION-Aesthetics V2 images. A novel method, PDF-Embedding, which converts replication levels to probability density functions and uses a set of learned vectors for each image, was proposed. c) PDF-Embedding outperformed protocol-driven methods and non-PDF methods on the D-Rep test set, achieving 56.3% in Pearson Correlation Coefficient (PCC) and 25.6% in Relative Deviation (RD) using an exponential PDF. d) AI practitioners developing diffusion models should consider integrating ICDiff methods like PDF-Embedding to assess and mitigate potential copyright infringement or unwanted replication of training data in generated images. The replication ratios of several well-known diffusion models against a large-scale gallery were found to range from 10% to 20%, indicating a significant practical need for such detection. Follow-up questions: 1. How does the computational cost and performance of PDF-Embedding scale with larger image databases and with more recent, higher-resolution diffusion models beyond Stable Diffusion V1.5? 2. Could the PDF-Embedding method be adapted or improved for detecting partial image replication, as opposed to full-image replication, within diffusion model outputs? 3. How robust is PDF-Embedding to adversarial attacks designed to evade copy detection in generated images? |
Can Models Learn Skill Composition from Examples? (Read more on arXiv or HuggingFace) | Sanjeev Arora, Anirudh Goyal, Simran Kaur, Haoyu Zhao, dingliyu | This research investigates whether fine-tuning can improve compositional generalization in LLMs, specifically their ability to combine language skills in novel ways. The study fine-tuned LLaMA-2-13B-Chat and Mistral-7B-Instruct-v0.2 on a dataset generated by GPT-4, consisting of text samples exhibiting combinations of 1, 2, or 3 language skills. Results showed that fine-tuning on these examples improved the models' ability to compose up to 5 held-out skills, with LLaMA-2-13B-Chat's success rate for composing 3 held-out skills increasing from 4% to 37%. This suggests that models can learn a "meta-skill" of composition, generalizing beyond specific skill combinations seen during training. AI practitioners can leverage this finding by incorporating skill-rich (potentially synthetic) text data into training to improve the compositional capabilities of LLMs. Follow-up Questions: 1. What is the impact of varying the size and diversity of the training dataset (beyond the current 13,957 samples) on the compositional generalization performance? 2. How does this fine-tuning approach compare to other methods for improving compositional generalization, such as curriculum learning or specific architectural modifications? 3. Beyond the SKILL-MIX evaluation, how can this improved compositional ability be effectively applied to more complex, real-world NLP tasks, and what are the potential limitations in such applications? |
Coffee-Gym: An Environment for Evaluating and Improving Natural Language Feedback on Erroneous Code (Read more on arXiv or HuggingFace) | Dongjin Kang, Yongho Song, Seungjun Moon, Taeyoon Kwon, Hyungjoo Chae | a) The research aims to improve open-source natural language feedback models for code editing by creating a reinforcement learning environment that better aligns feedback with code improvement. b) The authors developed COFFEE-GYM, comprising the COFFEE dataset of human code edits with pairwise feedback annotations and COFFEEEVAL, a unit-test-driven reward function, used with PPO and DPO reinforcement learning algorithms. c) Feedback models trained with COFFEE-GYM achieved a 13.4% improvement in Pass@1 accuracy on both HumanEvalFix and COFFEE-TEST compared to a baseline DeepSeekCoder-7B model without feedback. d) AI practitioners can utilize COFFEE-GYM and COFFEEEVAL to train open-source feedback models that generate helpful feedback for code editing, achieving performance comparable to closed-source models like GPT-4. The paper highlights the importance of pairwise feedback data and robust reward models in training effective feedback systems. Follow-up questions: 1. The paper mentions limitations regarding the scope of editing being focused on correctness, not efficiency or readability. How could COFFEE-GYM be extended to incorporate these additional aspects of code quality into the feedback and reward models? 2. How robust is COFFEEEVAL to the specific choice of code editor model used? Could using a weaker or stronger editor significantly impact the learned feedback model? Are there experiments or analyses planned to address this potential dependency? 3. While the paper demonstrates improved performance on specific benchmarks, how well does this generalize to real-world code editing scenarios in diverse programming languages and codebases beyond competitive programming and the provided test sets? |
IDEAW: Robust Neural Audio Watermarking with Invertible Dual-Embedding (Read more on arXiv or HuggingFace) | Jianzong Wang, Jing Xiao, zhangxulong, Pechola | a) This paper aims to develop a robust neural audio watermarking model with efficient localization capabilities, addressing the limitations of existing methods regarding capacity, imperceptibility, and locating efficiency. b) The authors propose IDEAW, which employs a dual-stage invertible neural network (INN) to separately embed a locating code and a watermark message into the audio, along with a balance block to mitigate the asymmetry introduced by the attack layer during robustness training. c) IDEAW achieves higher capacity and comparable robustness under various attacks compared to baseline methods, demonstrating a signal-to-noise ratio (SNR) of 35.41 dB and accuracy of 99.44% when embedding a 56-bit payload (46-bit message + 10-bit locating code). The proposed dual-embedding strategy reduces localization time overhead by approximately 40-50% compared to existing methods. d) AI practitioners working on audio security and copyright protection can utilize IDEAW for robust and efficient watermark embedding and extraction, improving localization speed significantly compared to traditional approaches. Follow-up questions: 1. How does the performance of IDEAW vary across different audio genres and lengths, beyond the speech and music datasets used in the evaluation? 2. What is the computational complexity of IDEAW's embedding and extraction processes, and how does it scale with increasing audio length or watermark payload size? 3. Could the dual-embedding strategy be extended to other watermarking domains, such as image or video, using similar invertible network architectures? |
Title | Authors | Summary |
---|---|---|
MIO: A Foundation Model on Multimodal Tokens (Read more on arXiv or HuggingFace) | Jiaheng Liu, Wangchunshu Zhou, Chunpu Xu, King Zhu, Zekun Wang | MIO aims to develop an any-to-any multimodal foundation model capable of understanding and generating text, images, speech, and video. The methodology involves training on discrete multimodal tokens using a four-stage process: alignment pre-training, interleaved pre-training, speech-enhanced pre-training, and supervised fine-tuning on various tasks. On the SEED-Bench, MIO-Instruct achieves 54.4% MCQ accuracy. This model offers AI practitioners a unified framework for diverse multimodal tasks, including interleaved video-text generation and chain-of-visual-thought reasoning. The paper doesn't provide details on the size of the training dataset. Follow-up Questions: 1. What specific architectures and hyperparameters were used for the different pre-training stages, and how were they determined? 2. Could you elaborate on the computational resources required for training and inference, and how these scale with model size? 3. What are the limitations of the current video generation capabilities, particularly regarding generating raw video data rather than frame sequences? |
VPTQ: Extreme Low-bit Vector Post-Training Quantization for Large Language Models (Read more on arXiv or HuggingFace) | Li Lyna Zhang, Shengyu Ye, Jicheng Wen, Yifei Liu, yangwang92 | This paper explores extremely low-bit weight-only quantization for Large Language Models (LLMs) to reduce memory footprint and improve inference speed. The authors propose Vector Post-Training Quantization (VPTQ), leveraging second-order optimization and channel-independent quantization to minimize the impact of vector quantization on model accuracy. On LLaMA-2 7B, VPTQ at 2.02 bits achieves a WikiText2 perplexity of 6.13 and an average improvement of 1% on QA tasks compared to previous state-of-the-art. This method allows for substantial model compression and faster inference speeds without significant accuracy degradation, useful for deploying LLMs on resource-constrained devices. The paper doesn't detail the computational cost of VPTQ compared to other methods like GPTQ aside from quoting inference throughput. Follow-up questions: 1. How does the memory bandwidth requirement of VPTQ during inference compare to GPTQ and other scalar quantization methods, given the need to load codebooks? 2. What is the detailed breakdown of the quantization algorithm execution time (10.4-18.6%) – which steps contribute most significantly, and how can these be further optimized? 3. The paper mentions layer-wise finetuning. What is the specific process and its impact on final model accuracy and quantization time compared to not finetuning or performing full finetuning? |
Modulated Intervention Preference Optimization (MIPO): Keep the Easy, Refine the Difficult (Read more on arXiv or HuggingFace) | fetong | This research aimed to improve preference optimization for large language models (LLMs) by addressing the limitations of Direct Preference Optimization (DPO). The authors proposed Modulated Intervention Preference Optimization (MIPO), which modulates the influence of a reference model during training based on the alignment between the reference model and each preference pair, measured using differences in average log-likelihood. On AlpacaEval 2.0, MIPO achieved a 9.05% higher win-rate than DPO using Llama3-8B-Instruct and an 8.19% higher win-rate using Mistral-7B-Base. This suggests that MIPO can facilitate more effective alignment of LLMs with human preferences compared to DPO by focusing training effort on instances where the reference model needs more improvement. The paper does not discuss computational complexity differences between MIPO and DPO. Follow-up questions: 1. How does the computational cost of MIPO compare to DPO, considering the additional computation required to calculate and integrate the modulation factor q(K)? 2. Could the performance gains observed with MIPO on AlpacaEval 2.0 and MT-Bench generalize to other preference optimization tasks and datasets? 3. What are the practical considerations for selecting the hyperparameter β in MIPO, and is there a more principled approach to tuning this parameter beyond the empirical analysis presented? |
MSI-Agent: Incorporating Multi-Scale Insight into Embodied Agents for Superior Planning and Decision-Making (Read more on arXiv or HuggingFace) | Guanting Dong, Che Jiang, Yihuai Gao, Biqing Qi, Dayuan Fu | a) This research aimed to improve the planning and decision-making abilities of Large Language Model (LLM)-based embodied agents by effectively summarizing and utilizing insights from prior experiences. b) The researchers developed a Multi-Scale Insight Agent (MSI-Agent) featuring an experience selector, insight generator, and insight selector to organize experiences into multi-scale insights (general, environment, and subtask) and selectively use these insights when prompting the LLM. c) MSI-Agent achieved a 12.70% success rate on in-domain data and 14.54% on out-of-domain data on the TEACh Trajectory from Dialogue (TfD) benchmark, outperforming existing baselines, including the HELPER and Expel agents. d) This research indicates AI practitioners can significantly enhance LLM-based agent performance in embodied tasks by using multi-scale insight summarization and selection, especially in domain adaptation scenarios. This is impactful as it provides a practical method for improving the robustness and generalizability of embodied agents across different environments and tasks. Here are some follow-up questions an AI practitioner might ask: 1. What is the computational overhead of generating and storing multi-scale insights, and how can this be optimized for real-time applications? 2. How does MSI-Agent perform on more complex embodied tasks with longer horizons and more diverse interaction objects? 3. Can the insights generated by MSI-Agent be transferred or adapted for use with different LLMs or embodied agent architectures? |
Title | Authors | Summary |
---|---|---|
MaskLLM: Learnable Semi-Structured Sparsity for Large Language Models (Read more on arXiv or HuggingFace) | wxcTest, gheinrich, srvm, yinhongxu, Vinnnf | The authors present MaskLLM, a novel method for achieving semi-structured (N:M) sparsity in Large Language Models (LLMs) by formulating mask selection as a differentiable sampling process using Gumbel Softmax. This approach enables end-to-end training of sparsity masks on large-scale datasets, leading to superior performance compared to traditional one-shot pruning techniques. Experiments on various LLMs, including LLaMA-2 and GPT-3 variants, demonstrate that MaskLLM achieves state-of-the-art perplexity scores while enabling significant memory and computational savings. Notably, MaskLLM facilitates lossless compression for specific downstream tasks by learning specialized masks, and the authors introduce "Mask Prior," a technique for efficient transfer learning of sparsity. This work holds significant practical implications for AI practitioners, offering a pathway to deploy more efficient and scalable LLMs in real-world applications with reduced resource requirements. |
LLaVA-3D: A Simple yet Effective Pathway to Empowering LMMs with 3D-awareness (Read more on arXiv or HuggingFace) | Wenwei Zhang, XihuiLiu, Jiangmiao, taiwang, ChaimZhu | The paper introduces LLaVA-3D, a novel framework for efficiently adapting the 2D Large Multimodal Model (LMM) LLaVA for 3D scene understanding. This is achieved by introducing "3D Patches," a representation that augments 2D image patch features with 3D positional embeddings, allowing LLaVA-3D to process and understand 3D scenes from multi-view images. Experimental results demonstrate that LLaVA-3D achieves state-of-the-art performance on various 3D benchmarks, including 3D question answering, captioning, and visual grounding, while maintaining strong 2D image understanding capabilities. This development presents a significant advancement for AI practitioners, particularly AI engineers and data scientists working with 3D vision and language tasks, by offering a practical and efficient method to empower LMMs with 3D-awareness. LLaVA-3D's ability to perform complex 3D scene understanding tasks, along with its ease of use and integration with existing 2D models, makes it a valuable tool for developing applications in fields such as robotics, virtual reality, and augmented reality. |
EMOVA: Empowering Language Models to See, Hear and Speak with Vivid Emotions (Read more on arXiv or HuggingFace) | vikyzeng2, 17day, zhili-liu, gyhdog, KaiChen1998 | This research paper presents EMOVA, an innovative omni-modal large language model that leverages a continuous vision encoder and a semantic-acoustic disentangled speech tokenizer to enable simultaneous alignment of visual, speech, and text modalities. The model employs a novel text-centric alignment strategy that uses text as a bridge to facilitate alignment without relying on scarce omni-modal image-text-speech data. This joint optimization method not only enhances vision-language and speech capabilities but also surpasses corresponding bi-modal counterparts. Remarkably, EMOVA achieves state-of-the-art performance on both vision-language and speech benchmarks while supporting spoken dialogue with controllable emotional expressions. For AI practitioners, EMOVA offers a robust framework for building omni-modal applications with real-time spoken dialogue and emotion control, paving the way for more versatile and expressive human-computer interactions. |
Lotus: Diffusion-based Visual Foundation Model for High-quality Dense Prediction (Read more on arXiv or HuggingFace) | Leheng Li, Yixun Liang, Wei Yin, Jing He, haodongli | This research introduces Lotus, a diffusion-based visual foundation model for enhancing dense prediction tasks like depth and normal estimation. The authors identify limitations in existing diffusion models when applied to dense prediction, proposing a novel adaptation protocol that addresses these issues. By incorporating a single-step diffusion process and a "detail preserver", Lotus achieves state-of-the-art performance on zero-shot depth and normal estimation tasks, surpassing previous models in accuracy and efficiency. This development is particularly relevant for AI practitioners working with limited data, as Lotus demonstrates superior performance with significantly less training data compared to other state-of-the-art models. This advancement allows for wider adoption and potential for practical applications like 3D reconstruction and robotics. |
Discovering the Gems in Early Layers: Accelerating Long-Context LLMs with 1000x Input Token Reduction (Read more on arXiv or HuggingFace) | Shafiq Joty, Yingyu Liang, Xuan-Phi Nguyen, Zhenmei Shi, alvinming | The research presents GemFilter, a novel inference strategy to accelerate Large Language Model (LLM) inference with long context inputs, effectively addressing the bottleneck of high computational cost and latency. GemFilter leverages the observation that relevant information for a query is often identified within the early layers of an LLM. By using these early layers as filters, GemFilter selects and compresses input tokens, leading to a significant reduction in context length for subsequent LLM processing. Empirical evaluations demonstrate that GemFilter achieves a 2.4x speedup and a 30% reduction in GPU memory consumption compared to state-of-the-art methods. This approach offers a practical solution for AI engineers and data scientists to deploy and optimize LLMs for long-context tasks, especially when computational resources are limited. |
Pixel-Space Post-Training of Latent Diffusion Models (Read more on arXiv or HuggingFace) | Felix Juefei-Xu, Ji Hou, Matthew Yu, Simran Motwani, Christina Zhang | This research paper proposes a novel approach to improve the quality of images generated by Latent Diffusion Models (LDMs) by incorporating a pixel-space loss function during the post-training phase. The authors argue that operating solely in the compressed latent space, as is typical for LDMs, can lead to loss of detail and artifacts in the generated images. By adding a pixel-space objective during fine-tuning, either supervised or preference-based, the model learns to better preserve high-frequency details, resulting in significantly enhanced visual quality and fewer flaws in the generated images. Experiments demonstrate the effectiveness of this approach on both DiT and U-Net based LDMs, showing significant improvements in visual appeal and reduction of visual flaws without compromising text alignment. This technique provides AI practitioners, particularly those working with image generation, a simple yet effective method to enhance the quality of images generated by LDMs without architectural modifications, potentially leading to higher fidelity and more realistic image synthesis. |
Reducing the Footprint of Multi-Vector Retrieval with Minimal Performance Impact via Token Pooling (Read more on arXiv or HuggingFace) | Griffin Adams, Antoine Chaffin, Benjamin Clavié | This paper introduces TOKEN POOLING, a straightforward method to compress multi-vector retrieval models like ColBERT by clustering and averaging similar token representations. Evaluations across various datasets demonstrate that this approach can reduce the index size by 50% with negligible impact on retrieval performance, and up to 66% with minimal degradation. Notably, TOKEN POOLING seamlessly integrates with ColBERT's quantization pipeline, further enhancing compression capabilities. This method is particularly relevant for practitioners working with large-scale retrieval systems, as it offers a practical means to substantially reduce storage and memory footprints without compromising accuracy. This is especially important for deployments where resource constraints are a concern, or when utilizing indexing methods that offer greater flexibility for data updates compared to those typically employed with large multi-vector indexes. |
Disco4D: Disentangled 4D Human Generation and Animation from a Single Image (Read more on arXiv or HuggingFace) | Tianwei Zhang, Lei Yang, Zhongang Cai, Shuai Liu, Hui En Pang | Disco4D is a novel Gaussian Splatting framework that generates and animates 3D clothed human avatars from a single image. Disco4D separates the human body and clothing into distinct Gaussian models, leveraging the strengths of SMPL-X for body representation and Gaussian models for clothing variability. The framework uses diffusion models for 3D reconstruction enhancement, addressing the challenge of occluded parts. Disco4D outperforms existing methods in fidelity, disentanglement, and animation quality, evidenced by quantitative and qualitative benchmarks on standard datasets. Its ability to disentangle and manipulate clothing assets while maintaining high-fidelity 3D representation holds significant potential for various applications, including virtual try-on, avatar customization, and digital content creation. Practitioners working in these domains may find Disco4D to be a valuable tool for streamlining their workflows and enhancing the realism and customizability of their projects. |
Robot See Robot Do: Imitating Articulated Object Manipulation with Monocular 4D Reconstruction (Read more on arXiv or HuggingFace) | Qianqian Wang, Brent Yi, Mingxuan Wu, Chung Min Kim, Justin Kerr | The authors propose a novel method, Robot See Robot Do (RSRD), to enable a robot to imitate articulated object manipulation from a single monocular video. The system leverages 4D Differentiable Part Models (4D-DPM) for 3D part motion recovery from monocular video and plans bimanual arm motions to induce the demonstrated object part motion. RSRD achieves an average of 87% success rate in each phase and 60% end-to-end success rate across 90 trials on 9 objects. This work demonstrates the viability of using pretrained vision models, without any task-specific training, to learn new manipulation skills for a robot. This could be a valuable tool for AI engineers and Data Scientists working on robotics applications to simplify the process of teaching new manipulation skills to robots. |
Instruction Following without Instruction Tuning (Read more on arXiv or HuggingFace) | Christopher D. Manning, Percy Liang, Nelson F. Liu, John Hewitt | This research paper investigates instruction following in language models without explicit instruction tuning. The authors identify two implicit instruction tuning approaches: response tuning (training on responses only) and single-task fine-tuning (training on a narrow domain). Surprisingly, both approaches yield models capable of following general instructions, even surpassing base models in performance. This suggests that instruction-response mappings might be implicitly learned during pretraining, and seemingly unrelated fine-tuning tasks can implicitly enhance instruction-following capabilities. This finding holds practical relevance for practitioners, emphasizing the need for comprehensive testing and safety evaluations even for models fine-tuned for specific tasks, as they may exhibit unintended general instruction-following behavior. |
Enhancing Structured-Data Retrieval with GraphRAG: Soccer Data Case Study (Read more on arXiv or HuggingFace) | Pål Halvorsen, Michael A. Riegler, Cise Midoglu, Sushant Gautam, Zahra Sepasdar | This paper presents Structured-GraphRAG, a novel framework designed to enhance information retrieval from structured datasets. Structured-GraphRAG leverages the power of Knowledge Graphs (KGs) and graph-based architectures to provide more accurate and efficient retrieval of data from structured sources. Experimental results demonstrate that Structured-GraphRAG outperforms traditional methods by reducing processing time, enhancing answer accuracy, and mitigating the issue of hallucinations in Language Models (LLMs). By offering a more accessible approach to KG construction, Structured-GraphRAG proves to be a valuable tool for AI engineers and data scientists working with structured data across diverse domains. |
Title | Authors | Summary |
---|---|---|
Programming Every Example: Lifting Pre-training Data Quality like Experts at Scale (Read more on arXiv or HuggingFace) | Qian Liu, Pengfei, lockon, SinclairWang, koalazf99 | The paper introduces Programming Every Example (PROX), a novel framework for refining large-scale language model pre-training data by utilizing small language models to generate and execute data processing programs. PROX refines data through a two-stage process: document-level programming for filtering and chunk-level programming for fine-grained operations like string normalization. Experimental results demonstrate that PROX-curated data consistently enhances model performance, achieving a 2.1% average improvement over 10 downstream benchmarks and surpassing state-of-the-art data selection techniques by over 2.0%. Furthermore, PROX significantly reduces the required training tokens for comparable performance, offering up to 20x training efficiency improvements in certain domains. Practitioners, including AI engineers and data scientists, can leverage PROX to enhance data quality and significantly reduce training costs for large language models, making LLM development more efficient and accessible. |
Molmo and PixMo: Open Weights and Open Data for State-of-the-Art Multimodal Models (Read more on arXiv or HuggingFace) | Muennighoff, SMSD75, jamepark3922, sharpen, mattdeitke | The paper introduces Molmo, a family of open-weight and open-data vision-language models (VLMs) trained on a novel dataset named PixMo. Unlike previous open VLMs that relied heavily on synthetic data from proprietary systems, Molmo leverages a high-quality dataset of detailed image descriptions collected using a speech-based annotation approach. Evaluation on 11 academic benchmarks and human evaluation demonstrate that Molmo achieves state-of-the-art performance among open VLMs, even rivaling proprietary models like GPT-40. The release of Molmo's weights, data, and code provides practitioners and researchers with valuable resources for building and studying performant VLMs from scratch. |
Boosting Healthcare LLMs Through Retrieved Context (Read more on arXiv or HuggingFace) | Ashwin Kumar Gururajan, dariog, JordiBayarri | This research investigates the enhancement of open-source Large Language Models (LLMs) for medical question answering through optimized context retrieval techniques. The authors find that incorporating choice shuffling, an optimal number of ensembles, and enriching databases with Chain-of-Thought augmented examples significantly improves performance on multiple-choice question answering benchmarks, achieving accuracy comparable to private models like MedPalm-2 and GPT-4. They introduce OpenMedPrompt, a novel framework for open-ended medical question answering, with two strategies: Ensemble Refining (OM-ER) and Self-Reflection (OM-SR), demonstrating the effectiveness of iterative feedback and reward model integration. The study provides valuable insights for AI engineers and data scientists working on building accurate and reliable healthcare AI systems by showcasing the potential of open-source LLMs augmented with optimized context retrieval. |
DreamWaltz-G: Expressive 3D Gaussian Avatars from Skeleton-Guided 2D Diffusion (Read more on arXiv or HuggingFace) | Lei Zhang, Zheng-Jun Zha, Jianan Wang, alkxncda, KevinHuang | The paper introduces DreamWaltz-G, a novel framework for generating animatable 3D avatars from text descriptions. It leverages pretrained 2D diffusion models and a novel Skeleton-guided Score Distillation (SkelSD) technique, enhancing 3D consistency and pose accuracy. DreamWaltz-G utilizes a hybrid 3D Gaussian representation (H3GA), integrating neural implicit fields and parameterized meshes for efficient rendering, optimization, and expressive animation. Experiments demonstrate superior generation and animation quality, outperforming existing methods. AI practitioners can utilize DreamWaltz-G for applications like character generation in gaming and virtual reality, benefiting from its text-driven approach, realistic animation, and efficient implementation. |
Degradation-Guided One-Step Image Super-Resolution with Diffusion Priors (Read more on arXiv or HuggingFace) | Renjing Pei, Aiping Zhang, cxc361461518, Akowang, OAOA | The authors present S3Diff, a novel one-step image super-resolution (SR) model that leverages a pre-trained text-to-image (T2I) diffusion model. By incorporating degradation-guided Low-Rank Adaptation (LoRA), S3Diff efficiently adapts model parameters based on the degradation characteristics of low-resolution images, enhancing its efficiency and effectiveness. Experimental results demonstrate S3Diff's superior performance in both synthetic and real-world scenarios, achieving state-of-the-art results with just one sampling step. This approach holds significant implications for practitioners, particularly AI engineers and data scientists working on image enhancement tasks, by offering a computationally efficient yet highly effective solution for super-resolution. The integration of degradation awareness further enhances the model's practical applicability for real-world image restoration scenarios. |
Game4Loc: A UAV Geo-Localization Benchmark from Game Data (Read more on arXiv or HuggingFace) | Liaoni Wu, Zhuoyue Tan, heboyong, Yux1ang | This paper introduces Game4Loc, a novel benchmark for UAV geo-localization based on data extracted from commercial video games. Game4Loc addresses the limitations of existing datasets, which primarily rely on perfectly aligned drone-satellite image pairs, by incorporating partial matching scenarios that better reflect real-world conditions. The authors propose weighted-InfoNCE, a contrastive learning approach that leverages intersection-over-union (IOU) as a supervisory signal to improve partial matching performance. Experimental results demonstrate the effectiveness of Game4Loc and the proposed training method, achieving state-of-the-art performance in both cross-area and same-area geo-localization tasks. This work provides AI engineers and data scientists with a valuable resource for developing and evaluating more robust and practical UAV geo-localization systems. |
AIM 2024 Sparse Neural Rendering Challenge: Dataset and Benchmark (Read more on arXiv or HuggingFace) | Radu Timofte, Richard Shaw, sibicatleychandar, thomas-tanay, michaal94 | This research paper introduces SpaRe, a novel dataset and benchmark designed for evaluating sparse-view neural rendering. Existing datasets and protocols are shown to suffer from limitations like low-resolution evaluation and overfitting due to public test data. SpaRe addresses these issues with high-quality synthetic renderings, hidden test data, and diverse camera viewpoints. Through an online platform, SpaRe allows researchers to benchmark novel view synthesis methods in a standardized manner and contribute to a public leaderboard. Experimental results highlight the strengths and weaknesses of both per-scene optimization and generalizable methods for sparse neural rendering. Practitioners, such as AI engineers and data scientists, can leverage SpaRe to rigorously evaluate and compare the performance of new sparse-view neural rendering algorithms. |
TalkinNeRF: Animatable Neural Fields for Full-Body Talking Humans (Read more on arXiv or HuggingFace) | Rakesh Ranjan, Amit Kumar, Bindita Chaudhuri, nsarafianos, aggelina | The authors introduce a novel framework, TalkinNeRF, that learns a dynamic neural radiance field for full-body talking humans from monocular videos. TalkinNeRF models the holistic 4D human motion, including body pose, hand articulation, and facial expressions. It introduces a multi-identity representation that enables simultaneous training for multiple subjects, significantly reducing training time. TalkinNeRF demonstrates state-of-the-art performance for animating full-body talking humans. This research is relevant to practitioners because it provides a new way to create high-fidelity animated videos of talking humans. This can be useful for various applications, such as virtual communication, video games, and movie production. |
Title | Authors | Summary |
---|---|---|
HelloBench: Evaluating Long Text Generation Capabilities of Large Language Models (Read more on arXiv or HuggingFace) | Liqun He, Feiyu Duan, zsytony, zhangysk, quehry | The research paper "HelloBench: Evaluating Long Text Generation Capabilities of Large Language Models" introduces a novel benchmark designed to evaluate the long-form text generation capabilities of Large Language Models (LLMs). The benchmark, called HelloBench, is structured around Bloom's Taxonomy and comprises five tasks: open-ended QA, summarization, chat, text completion, and heuristic text generation, encompassing a diverse range of 38 subcategories and 647 testing samples. To facilitate efficient evaluation, the authors propose a human-aligned evaluation method called HelloEval, which uses LLM-as-a-Judge and demonstrates superior correlation with human evaluation compared to traditional metrics. The key finding of the study is that current LLMs, despite advancements, demonstrate limitations in generating long-form text, often favoring shorter outputs or generating longer text with compromised quality. This research is relevant to practitioners such as AI engineers and data scientists, as it provides a standardized benchmark and evaluation method to guide the development and fine-tuning of LLMs for long-form text generation tasks, a critical area for real-world applications. |
Making Text Embedders Few-Shot Learners (Read more on arXiv or HuggingFace) | Kun Luo, Jianlyu Chen, Shitao Xiao, MingHao Qin, cfli | This research paper proposes a novel approach called bge-en-icl that integrates in-context learning (ICL) with large language models (LLMs) to enhance the generation of text embeddings, enabling them to excel in both zero-shot and few-shot settings. The model achieves state-of-the-art performance on MTEB and AIR-Bench benchmarks without modifying the LLM architecture, relying instead on enriching the query prompt with task-specific examples. Findings suggest that retaining the original, unmodified architecture often yields the best results, highlighting the strength of ICL in adapting to new tasks without complex architectural alterations. Practitioners, such as AI engineers and data scientists, can leverage this model to build more versatile text embedding systems that can readily adapt to diverse scenarios without extensive fine-tuning, facilitating better performance in information retrieval, text classification, and other NLP tasks. |
Present and Future Generalization of Synthetic Image Detectors (Read more on arXiv or HuggingFace) | Enrique Lopez-Cuena, dariog, pabberpe | This paper investigates the generalization capacity of synthetic image detectors amidst the rapid evolution of AI image generation models. The authors find that no single detector consistently outperforms others across diverse datasets and generative models, suggesting that universal detectors are presently elusive. Experiments demonstrate that training detectors on images generated by newer models enhances their ability to detect both old and new synthetic content. This highlights a race equilibrium effect where better generators lead to better detectors and vice-versa, emphasizing the need for continuous development and evaluation of detectors in this dynamic field. For practitioners, this research underscores the importance of using diverse training datasets, incorporating the latest generation models, and remaining cognizant of the limitations of current detectors when deploying them in real-world applications. |
MonoFormer: One Transformer for Both Diffusion and Autoregression (Read more on arXiv or HuggingFace) | Errui Ding, Haocheng Feng, Wenhao Wang, Yuxing Song, Chuyang Zhao | The research paper "MonoFormer: One Transformer for Both Diffusion and Autoregression" introduces a novel approach to utilizing a single transformer for both autoregressive text generation and diffusion-based image generation. The authors leverage the similarities between transformer training for these two modalities, primarily differing in the attention mask employed, to achieve comparable performance in image generation to state-of-the-art methods, while retaining text generation capabilities. This is a significant development for practitioners as it offers a unified and potentially more efficient architecture for multi-modal tasks, simplifying development and potentially reducing computational overhead for AI engineers and data scientists working with text and image data. The demonstrated performance on ImageNet and commonsense reasoning benchmarks, along with ablation studies highlighting the importance of pretrained LLMs and bidirectional attention, underscores the potential of MonoFormer for advancing multi-modal learning. |
MaskBit: Embedding-free Image Generation via Bit Tokens (Read more on arXiv or HuggingFace) | Xiaohui Shen, Xueqing Deng, Qihang Yu, Lijun Yu, Mark Weber | The authors propose MaskBit, a novel transformer-based image generation model that operates directly on bit tokens, eliminating the need for embedding tables typically found in VQGAN-based approaches. Through a systematic study, they modernize a widely-used VQGAN model, achieving state-of-the-art image reconstruction performance. They demonstrate that bit tokens, derived from binary quantization, exhibit a structured semantic representation, making them suitable for image generation. MaskBit achieves state-of-the-art performance on ImageNet 256x256 generation benchmark, surpassing prior art while using a compact generator. This work provides AI practitioners with an efficient and high-performing method for image generation, offering advantages in terms of computational cost and memory footprint due to the embedding-free design. |
MIMO: Controllable Character Video Synthesis with Spatial Decomposed Modeling (Read more on arXiv or HuggingFace) | Liefeng Bo, Miaomiao Cui, Yuan Yao, Yifang Men | The paper proposes MIMO, a novel framework for controllable character video synthesis that leverages spatial decomposition modeling for enhanced control and realism. MIMO uniquely decomposes video clips into spatially distinct components - human, scene, and occlusion - which are encoded into latent codes and fed into a diffusion-based decoder for video reconstruction. This approach allows for flexible manipulation of character appearance, motion, and scene interaction through user-provided inputs like images and pose sequences. The key result is the ability to generate high-fidelity character videos with complex 3D motions and realistic object interactions. MIMO presents a powerful tool for AI engineers and data scientists in domains like animation, virtual reality, and video editing, enabling them to synthesize and manipulate character-driven videos with unprecedented control and realism. |
EuroLLM: Multilingual Language Models for Europe (Read more on arXiv or HuggingFace) | Ricardo Rei, Nuno M. Guerreiro, João Alves, Patrick Fernandes, Pedro Henrique Martins | The authors introduce EuroLLM, a project focused on developing multilingual language models (LLMs) proficient in all official European Union languages and several other relevant languages. The researchers meticulously constructed a massive multilingual dataset, developed a custom tokenizer, and explored different modeling and pre-training configurations based on scaling laws. Their initial models, EuroLLM-1.7B and EuroLLM-1.7B-Instruct, demonstrate strong performance on multilingual benchmarks and machine translation tasks. Notably, EuroLLM-1.7B-Instruct exhibits superior performance in machine translation across various language pairs compared to existing models with significantly larger parameter sizes, highlighting its efficacy for multilingual NLP applications. This work holds significant implications for AI practitioners, particularly those working on multilingual natural language processing tasks, as it offers a robust foundation and valuable resources for developing and deploying LLMs for a wide range of European languages. |
Gen2Act: Human Video Generation in Novel Scenarios enables Generalizable Robot Manipulation (Read more on arXiv or HuggingFace) | Carl Doersch, Shubham Tulsiani, Abhinav Gupta, Debidatta Dwibedi, Homanga Bharadhwaj | The paper "Gen2Act: Human Video Generation in Novel Scenarios enables Generalizable Robot Manipulation" introduces a novel framework for generalizable robot manipulation that leverages zero-shot human video generation from web data and limited robot demonstrations. Gen2Act addresses the challenge of generalizing to unseen scenarios, objects, and motions by first generating a human video of the desired task using a pre-trained video generation model. A closed-loop policy then translates this video into robot actions, implicitly learning motion cues from the generated human behavior. Evaluations show Gen2Act significantly outperforms baselines in generalization tasks, especially to unseen object types and motion types. This framework holds significant potential for AI practitioners, particularly in robotics, by offering a scalable and efficient way to develop robot manipulation policies that generalize to new tasks and environments without the need for extensive robot data collection. |
Seeing Faces in Things: A Model and Dataset for Pareidolia (Read more on arXiv or HuggingFace) | Jennifer Corbett, Anne Harrington, Vasha DuTell, Simon Stent, mhamilton723 | The paper, "Seeing Faces in Things: A Model and Dataset for Pareidolia", by Corbett, Harrington, DuTell, et al. explores the phenomenon of face pareidolia – seeing faces in random stimuli – from a computer vision perspective. The authors introduce "Faces in Things", a novel dataset of 5,000 annotated pareidolic face images, and demonstrate that a state-of-the-art face detector, while excelling at detecting human faces, struggles with pareidolic ones. Interestingly, fine-tuning the detector on animal faces significantly improves pareidolic face detection, suggesting a link between the perception of animal and pareidolic faces. This work provides valuable insights for AI practitioners, particularly those working on face detection, by highlighting the limitations of current models and suggesting avenues for improvement, such as incorporating training data that reflects the diversity of features present in both animal and pareidolic faces. Understanding pareidolia could lead to more robust face detectors, minimizing false positives and potentially enhancing visual attention mechanisms in AI systems. |
DynaMo: In-Domain Dynamics Pretraining for Visuo-Motor Control (Read more on arXiv or HuggingFace) | Lerrel Pinto, Siddhant Haldar, Aadhithya Iyer, Hengkai Pan, Zichen Jeff Cui | DynaMo is a novel self-supervised learning method for pretraining visual representations for visuomotor control tasks. DynaMo operates by jointly learning an image encoder alongside inverse and forward dynamics models from unlabeled, sequential visual demonstrations, without relying on data augmentation or contrastive learning. Experiments demonstrate that DynaMo outperforms existing self-supervised methods and pretrained representations on both simulated and real-world robotic manipulation benchmarks. This approach is particularly relevant for AI engineers and roboticists working with limited demonstration data, as it offers a data-efficient method for learning robust visual representations for robot control. The authors posit that the method's efficacy stems from its ability to leverage the inherent temporal structure in demonstrations, enabling it to learn task-specific features more effectively. |
Reward-Robust RLHF in LLMs (Read more on arXiv or HuggingFace) | Jian Xie, Yiping Zhang, Jialian Li, Xingzhou Lou, Yuzi Yan | The authors introduce a novel reward-robust RLHF (Reinforcement Learning from Human Feedback) framework to enhance the alignment of LLMs (Large Language Models) with human preferences while addressing limitations in reward modeling. The proposed framework employs Bayesian Reward Model Ensembles (BRME) to capture the uncertainty inherent in reward signals and uses a trade-off objective function that balances performance and robustness during optimization. Empirical evaluations across diverse benchmarks show that the framework consistently outperforms traditional RLHF, demonstrating improved stability and accuracy, especially in long-term training. This approach is particularly relevant for AI practitioners as it tackles the crucial challenge of reward hacking, where LLMs exploit imperfections in reward models, leading to suboptimal performance. By incorporating the proposed reward-robust framework, AI engineers and data scientists can develop LLMs that are more reliable, generalize better, and are less susceptible to unintended behaviors. |
SLIMER-IT: Zero-Shot NER on Italian Language (Read more on arXiv or HuggingFace) | Andrea Zugarini, Marco Maggini, Leonardo Rigutini, Andrew Zamai | This research proposes SLIMER-IT, a novel approach for zero-shot Named Entity Recognition (NER) in Italian, addressing the scarcity of resources and research for this language, particularly for non-standard domains and entity types. SLIMER-IT, adapting the English SLIMER model, employs instruction tuning with prompts enriched by entity definitions and annotation guidelines, enabling superior performance on unseen entity tags. Experiments demonstrate SLIMER-IT's effectiveness on a newly defined zero-shot NER benchmark for Italian, outperforming existing methods, especially in identifying previously unseen entities. This work holds practical implications for AI practitioners working with Italian language data, offering an effective tool for tasks like information extraction, question answering, and knowledge base construction, even with limited annotated data. Future work will focus on extending the benchmark and improving scalability for larger label sets. |
Time-MoE: Billion-Scale Time Series Foundation Models with Mixture of Experts (Read more on arXiv or HuggingFace) | Zhou Ye, Dianqi Li, Yuqi Nie, Shiyu Wang, Xiaoming Shi | The paper introduces Time-MoE, a novel decoder-only transformer architecture with a Mixture-of-Experts (MoE) design specifically tailored for large-scale time series forecasting. This architecture enables Time-MoE to scale to 2.4 billion parameters while maintaining computational efficiency by activating only a subset of networks for each prediction. Trained on Time-300B, a newly introduced dataset comprising over 300 billion time points across 9 domains, Time-MoE significantly outperforms existing forecasting models on six benchmarks in both zero-shot and fine-tuned settings. The results validate the scaling laws for training tokens and model size in time series forecasting, demonstrating superior performance compared to dense models with equivalent computational budgets. This work offers practitioners a powerful, efficient, and flexible solution for real-world time series forecasting, allowing them to develop and deploy larger, more capable models with reduced computational costs. |
Tabular Data Generation using Binary Diffusion (Read more on arXiv or HuggingFace) | Slava Voloshynovskiy, vitaliykinakh | Voloshynovskiy and Kinakh introduce Binary Diffusion, a novel generative model for synthetic tabular data generation. Their method leverages a lossless binary transformation to convert tabular data into fixed-size binary representations, simplifying preprocessing. The Binary Diffusion model then employs XOR operations for efficient noise addition and removal, addressing challenges posed by mixed data types and complex distributions inherent in tabular data. Evaluations on benchmark datasets demonstrate that Binary Diffusion achieves state-of-the-art performance, notably surpassing existing methods on Travel, Adult Income, and Diabetes datasets. Furthermore, its compact size and efficient training make it a practical tool for practitioners, especially in scenarios with limited data or privacy concerns. |
Title | Authors | Summary |
---|---|---|
RACER: Rich Language-Guided Failure Recovery Policies for Imitation Learning (Read more on arXiv or HuggingFace) | Joyce Chai, nimafazeli, newwater, Yinpei | This paper introduces RACER, a novel framework for enhancing robotic manipulation through the integration of rich language guidance and failure recovery mechanisms. The authors propose a data augmentation pipeline that automatically generates failure recovery trajectories and annotates them with detailed language instructions, addressing the limitations of existing benchmarks. Experimental results on RLBench demonstrate that RACER outperforms state-of-the-art baselines in multi-task learning, dynamic goal change scenarios, and zero-shot unseen task evaluations. Notably, RACER exhibits superior sim-to-real transfer capabilities, highlighting the practical significance of rich language guidance for real-world robotic deployments. This research provides AI practitioners, particularly those in robotics, with valuable insights and a practical framework for developing more robust and adaptable manipulation policies. |
A Preliminary Study of o1 in Medicine: Are We Closer to an AI Doctor? (Read more on arXiv or HuggingFace) | Haoqin Tu, Juncheng Wu, Yunfei Xie, ys-zong, tennant | This research paper presents a comprehensive evaluation of OpenAI's o1 language model within the medical domain, focusing on its understanding, reasoning, and multilingual capabilities across 37 datasets. The study reveals that o1 exhibits enhanced clinical understanding and reasoning abilities, surpassing prior models like GPT-4 in diagnostic accuracy on several tasks. Notably, o1 demonstrates significant improvements in challenging medical question-answering scenarios and medical calculation tasks. However, limitations persist in terms of hallucination and complex multilingual reasoning, suggesting areas for further development. These findings are highly relevant to AI practitioners, particularly those developing AI-driven healthcare solutions, as they highlight both the potential and current limitations of utilizing large language models for medical applications. |
PixWizard: Versatile Image-to-Image Visual Assistant with Open-Language Instructions (Read more on arXiv or HuggingFace) | Renrui Zhang, Xinyu Wei, SiyuanH, stzhao, Afeng-x | PixWizard is a Diffusion Transformer-based image-to-image visual assistant that leverages a novel 30-million datapoint "Omni Pixel-to-Pixel Instruction-Tuning Dataset" to unify a variety of image editing, generation, and translation tasks. PixWizard demonstrates competitive performance in tasks like image restoration, image grounding, and text-to-image generation, surpassing existing unified methods and approaching the performance of specialized models on some tasks. Notably, PixWizard achieves state-of-the-art results in image outpainting and demonstrates strong generalization to tasks like object removal and replacement, even when not explicitly trained on them. AI practitioners can utilize PixWizard as a flexible tool for various image-related tasks, and the introduced dataset and training strategies can be adapted for other text-to-image diffusion models. |
Beyond Fine-tuning: Unleashing the Potential of Continuous Pretraining for Clinical LLMs (Read more on arXiv or HuggingFace) | Muhammad Umar Salman, Svetlana Maslenkova, Tathagata Raha, pkanithi, cchristophe | The study investigates the efficacy of continuous pretraining on in-domain clinical data in conjunction with instruction fine-tuning and advanced prompting for optimizing Large Language Models (LLMs) in clinical question-answering tasks. While continuous pretraining yields marginal improvements compared to other techniques, it establishes a valuable foundation for enhancing LLM performance in the clinical domain by mitigating instability issues through careful balancing of in-domain data with general language data. The synergy between continuous pretraining, instruct fine-tuning, and complex prompting techniques, specifically MedPrompt, results in state-of-the-art performance on a variety of clinical QA benchmarks. These findings are particularly relevant for AI engineers and data scientists working on adapting LLMs for clinical applications, highlighting the effectiveness of continuous pretraining as a foundational step for improving model accuracy and reasoning ability in this domain. |
Phantom of Latent for Large Language and Vision Models (Read more on arXiv or HuggingFace) | Yong Man Ro, Beomchan Park, Sangyun Chung, chae-won-kim, BK-Lee | The paper introduces Phantom, an efficient family of large language and vision models (LLVMs) that enhances learning capabilities within limited model sizes. Phantom temporarily increases the latent hidden dimension during multi-head self-attention (MHSA), allowing it to embed more vision-language knowledge without significantly increasing physical model size. The authors also introduce Phantom Optimization (PO), a novel training strategy inspired by Direct Preference Optimization, which guides the model towards correct answers while minimizing incorrect and ambiguous ones. Experiments demonstrate that Phantom outperforms numerous larger open- and closed-source LLVMs across various vision-language benchmarks. This is highly relevant to practitioners, particularly AI engineers and data scientists, who seek to develop and deploy efficient yet high-performing LLVMs for resource-constrained environments, such as mobile devices and embedded systems. By demonstrating the effectiveness of latent space optimization in enhancing LLVMs, the paper provides valuable insights for designing and training future efficient multimodal models. |
An adapted large language model facilitates multiple medical tasks in diabetes care (Read more on arXiv or HuggingFace) | Yutong Chen, Muyang He, Zhen Ying, weiranhuang, WaltonFuture | The research paper, "An adapted large language model facilitates multiple medical tasks in diabetes care," by Chen, He, Ying, et al. introduces Diabetica, a diabetes-specific large language model (LLM) family fine-tuned from the open-source Qwen2 model. The authors curated a specialized dataset and developed benchmarks for multiple-choice questions, fill-in-the-blank tasks, and open-ended dialogues to rigorously evaluate the model's performance. Diabetica demonstrated state-of-the-art performance in understanding and executing diabetes-related tasks, surpassing open-source LLMs of comparable size and rivaling proprietary models like GPT-4 and Claude-3.5. Clinical evaluations highlight Diabetica's potential in patient consulting, medical education, and clinical record summarization. This research offers a practical framework for developing and evaluating domain-specific LLMs, which is highly relevant to AI engineers and data scientists interested in healthcare applications. |
MaterialFusion: Enhancing Inverse Rendering with Material Diffusion Priors (Read more on arXiv or HuggingFace) | Rushikesh Zawar, Aviral Agrawal, Kangle Deng, Or Patashnik, Yehonathan Litman | The paper introduces MaterialFusion, a novel inverse rendering approach that leverages a 2D material diffusion prior, called StableMaterial, to enhance the reconstruction of an object's 3D representation, including geometry, materials, and illumination, from a set of multi-view images. StableMaterial is trained on a vast dataset of synthetic objects with high-quality Physically Based Rendering (PBR) assets, enabling it to learn a prior over plausible material and albedo combinations. Experimental results demonstrate that MaterialFusion surpasses state-of-the-art inverse rendering methods in reconstructing faithful material properties and accurately relighting objects under novel illumination conditions. This work holds significant implications for practitioners in computer graphics and vision, including AI engineers and data scientists, by providing a robust method for 3D object reconstruction and relighting, which can be applied in various domains like virtual reality, augmented reality, and content creation. |
Zero-shot Cross-lingual Voice Transfer for TTS (Read more on arXiv or HuggingFace) | Gary Wang, Kyle Kastner, Isaac Elias, Youzheng Chen, Fadi Biadsy | This paper introduces a novel zero-shot voice transfer (VT) module for multilingual text-to-speech (TTS) systems, capable of transferring an individual's voice across languages using a single short reference utterance. The module comprises a speaker encoder, a bottleneck layer (with SegmentGST shown most effective for typical speech), and residual adapters integrated into a pre-existing TTS system. Evaluations demonstrate an average voice transfer similarity score of 73% across nine languages, even with atypical reference speech. This research is highly relevant for AI practitioners developing accessible TTS systems or voice restoration technologies, enabling high-quality, cross-lingual voice transfer and offering potential benefits to individuals with speech impairments. |
MaskedMimic: Unified Physics-Based Character Control Through Masked Motion Inpainting (Read more on arXiv or HuggingFace) | Xue Bin Peng, Ofir Nabati, Yunrong Guo, Chen Tessler, galchechik | The research paper, "MaskedMimic: Unified Physics-Based Character Control Through Masked Motion Inpainting," introduces a novel framework for controlling physically simulated humanoid characters by leveraging a motion inpainting approach. MaskedMimic is trained on a diverse dataset of motion capture data with various modalities, including joint positions, text descriptions, and object interactions, where portions of the input data are strategically masked out. This forces the model to learn a general understanding of generating realistic and diverse human motions from partial information. The authors demonstrate that a single unified control architecture trained with this approach can successfully perform various tasks like locomotion, object interaction, VR tracking, and even text-to-motion synthesis without requiring task-specific training or reward engineering. Practitioners, including AI engineers and data scientists working in character animation and robotics, can benefit from this framework by having a simplified and flexible tool to create versatile and interactive virtual characters. |
Self-Supervised Audio-Visual Soundscape Stylization (Read more on arXiv or HuggingFace) | Gopala Anumanchipalli, Andrew Owens, Po-Yao Huang, Renhao Wang, Tingle Li | This paper introduces the concept of audio-visual soundscape stylization, a technique to modify input audio to reflect the acoustic and ambient properties of a target scene represented by an audio-visual sample. The authors propose a self-supervised learning framework based on conditional speech de-enhancement using a latent diffusion model trained on unlabeled, in-the-wild videos. Extensive experiments demonstrate the model's superiority over existing audio stylization methods in replicating acoustic properties and ambient sounds. This technique holds significant potential for practitioners, such as AI engineers and data scientists, in applications like realistic audio dubbing for videos, generating immersive virtual environments, and enhancing audio quality in old recordings. |
A Case Study of Web App Coding with OpenAI Reasoning Models (Read more on arXiv or HuggingFace) | onekq | This paper presents a case study evaluating OpenAI's latest reasoning models (o1-preview and o1-mini) on web application coding tasks. While demonstrating superior performance on the single-task WebApp1K benchmark, the models exhibit significant decline in the harder WebApp1K-Duo benchmark, falling behind Claude 3.5. The authors attribute this variability to instruction comprehension, where the reasoning mechanism, while beneficial with complete expectations, exacerbates errors when key expectations are missed. A key insight for practitioners, such as AI engineers and data scientists, is that the success of reasoning models in coding hinges not only on their reasoning capabilities but also on a robust base model and meticulous adherence to instructions, achieved through methods like SFT. This highlights the importance of focusing on both reasoning and instruction following when developing and deploying AI models for coding applications. |
Title | Authors | Summary |
---|---|---|
Imagine yourself: Tuning-Free Personalized Image Generation (Read more on arXiv or HuggingFace) | anmolkalia, ankit61, haoyum1997, FelixXu, zechengh | The research paper "Imagine yourself: Tuning-Free Personalized Image Generation" by anmolkalia et al. introduces a novel diffusion-based model for personalized image generation that does not require subject-specific fine-tuning. The authors achieve this by incorporating three key components: a synthetic paired data generation mechanism to encourage image diversity, a fully parallel attention architecture with multiple text encoders and a trainable vision encoder for enhanced text alignment and identity preservation, and a coarse-to-fine multi-stage fine-tuning methodology for improved visual quality. Extensive human evaluation demonstrates that Imagine yourself significantly outperforms state-of-the-art personalization models in identity preservation, text alignment, and visual appeal. This tuning-free approach is particularly relevant to AI practitioners, such as AI Engineers and Data Scientists, as it enables the development of personalized image generation applications without the need for costly and time-consuming individual user tuning. |
MuCodec: Ultra Low-Bitrate Music Codec (Read more on arXiv or HuggingFace) | Jianwei Yu, zy001, lglg666, hangtingchen, yaoxunxu | MuCodec is a novel neural codec designed for high-fidelity music reconstruction at ultra-low bitrates. This model leverages a specialized feature extractor, MuEncoder, to capture both acoustic and semantic features from music. These features are then discretized and reconstructed using a flow-matching-based method with a Diffusion Transformer. Experimental results demonstrate that MuCodec surpasses current state-of-the-art methods in both objective and subjective evaluations, achieving high-quality music reconstruction at bitrates as low as 0.35kbps. This development is particularly relevant for AI practitioners working on music information retrieval, music generation, and low-bitrate audio streaming applications. MuCodec offers a promising solution for compressing and reconstructing music with high fidelity, potentially leading to more efficient storage and transmission of music data. |
Prithvi WxC: Foundation Model for Weather and Climate (Read more on arXiv or HuggingFace) | jubeku, ds6574, jhnnsjkbk, WillTrojak, johannesschmude | The paper introduces Prithvi WxC, a 2.3 billion parameter foundation model for weather and climate applications trained on the MERRA-2 reanalysis dataset. The model leverages a novel transformer-based architecture that incorporates both local and global attention mechanisms, and is trained using a combination of masked reconstruction and forecasting objectives. Zero-shot evaluations demonstrate Prithvi WxC's ability to generate accurate short-term forecasts and reconstruct atmospheric states from heavily masked inputs. Fine-tuning experiments on downscaling and gravity wave flux parameterization further highlight the model's versatility and ability to be adapted for diverse downstream tasks, suggesting potential benefits for AI engineers and data scientists working in climate modeling and weather forecasting applications. |
Portrait Video Editing Empowered by Multimodal Generative Priors (Read more on arXiv or HuggingFace) | Yudong Guo, Chenglai Zhong, Haiyao Xiao, Xuan Gao, sisyphe28 | The paper introduces PortraitGen, a novel method for consistent and expressive portrait video editing using multimodal prompts. PortraitGen leverages 3D Gaussian Splatting embedded on SMPL-X models to ensure structural and temporal coherence, achieving rendering speeds of over 100FPS through a Neural Gaussian Texture mechanism. The system incorporates expression similarity guidance and a face-aware portrait editing module to mitigate degradation commonly associated with iterative dataset updates in existing methods. Experiments demonstrate superior quality and efficiency compared to state-of-the-art techniques across text-driven editing, image-driven editing, and relighting tasks. Practitioners, including AI Engineers and Data Scientists, can utilize PortraitGen to develop robust and high-fidelity portrait video editing tools for various applications. |
Colorful Diffuse Intrinsic Image Decomposition in the Wild (Read more on arXiv or HuggingFace) | Yağız Aksoy, ccareaga | This research introduces a novel method for intrinsic image decomposition in the wild, successfully separating diffuse and non-diffuse lighting effects at high resolutions. The authors achieve this by decomposing the complex problem into physically-motivated sub-tasks, addressing the limitations of previous grayscale shading models. Quantitative analysis and qualitative examples demonstrate the method's ability to generalize to diverse scenes, including outdoor landscapes and human faces, despite training the final diffuse network solely on a synthetic indoor dataset. This advancement allows for new illumination-aware image editing applications, offering AI practitioners robust tools for specularity removal and multi-illuminant white balancing in real-world images. |
Temporally Aligned Audio for Video with Autoregression (Read more on arXiv or HuggingFace) | erahtu, bilpo, bilpo | This paper introduces V-AURA, a novel autoregressive model for video-to-audio generation that prioritizes temporal alignment and semantic relevance. Unlike diffusion-based counterparts, V-AURA utilizes a high-framerate visual feature extractor and a cross-modal fusion strategy to capture fine-grained audio-visual correspondences. Furthermore, the authors present VisualSound, a curated dataset with strong audio-visual relevance, to improve training efficiency and mitigate hallucinations. Evaluations demonstrate that V-AURA outperforms state-of-the-art methods in temporal alignment and relevance while maintaining competitive audio quality. These findings are particularly valuable for AI practitioners working on applications requiring tightly synchronized and semantically meaningful audio generation from video content, such as in video editing and multimedia content creation. |
V^3: Viewing Volumetric Videos on Mobiles via Streamable 2D Dynamic Gaussians (Read more on arXiv or HuggingFace) | Zhirui Zhang, wuminye, Daluuu, liaowang11, Penghowdy | The paper proposes V³, a method for streaming and rendering high-quality volumetric videos on mobile devices using dynamic 3D Gaussian splats (3DGS). V³ leverages a compact 2D representation of 3DGS, allowing for efficient compression with video codecs and streaming to mobile devices. Their approach employs a novel two-stage training strategy with motion-appearance disentanglement, residual entropy loss, and temporal loss, enabling high-quality rendering while maintaining temporal consistency. Experimental results demonstrate that V³ outperforms existing methods in terms of rendering quality and storage efficiency. This breakthrough holds significant implications for practitioners in computer graphics and AI, particularly for AI engineers and data scientists working on efficient representations of 3D scenes and real-time rendering applications on resource-constrained devices. |
Minstrel: Structural Prompt Generation with Multi-Agents Coordination for Non-AI Experts (Read more on arXiv or HuggingFace) | Daling Wang, Yijie Huang, Xiaoyu Liang, Yuanzhong Liu, Ming Wang | This research paper introduces LangGPT, a novel structured prompt framework designed to enhance the usability and effectiveness of Large Language Models (LLMs) for non-AI experts. LangGPT draws inspiration from programming language principles to establish a systematic, reusable, and extensible prompt structure, reducing the learning curve associated with prompt engineering. To further facilitate the prompt generation process, the authors propose Minstrel, a multi-agent system that automates the creation and optimization of LangGPT prompts through collaborative analysis, design, and reflection mechanisms. Experimental results demonstrate that both manually crafted and Minstrel-generated LangGPT prompts yield superior performance compared to conventional baseline prompts in various tasks, including question answering and instruction following. This framework holds significant practical implications for AI practitioners, enabling them to leverage a standardized and intuitive approach to harness the capabilities of LLMs effectively. |
Title | Authors | Summary |
---|---|---|
InfiMM-WebMath-40B: Advancing Multimodal Pre-Training for Enhanced Mathematical Reasoning (Read more on arXiv or HuggingFace) | Yi-Qi638, lllliuhhhhggg, bytehxf, yjian-bytedance, xiaotianhan | The research paper introduces InfiMM-WebMath-40B, a large-scale, open-source dataset designed for the pre-training of Multimodal Large Language Models (MLLMs) specifically for enhanced mathematical reasoning. This dataset addresses a critical gap in the open-source community, which has previously lacked access to large, high-quality, multimodal math datasets. InfiMM-WebMath-40B consists of 24 million mathematics and science-related web documents, encompassing 40 billion text tokens and 85 million image URLs, all meticulously filtered and aligned from CommonCrawl. The authors detail the comprehensive data curation pipeline, highlighting the challenges associated with extracting and filtering mathematical content from web pages, including the development of specialized tools to handle mathematical equations and image URLs. Evaluations conducted on established benchmarks such as MathVerse and We-Math demonstrate that models pre-trained on InfiMM-WebMath-40B achieve state-of-the-art performance among open-source models, and even surpass some proprietary models on certain tasks. These findings hold significant implications for practitioners, including AI engineers and data scientists, as they now have access to a valuable resource for developing and refining MLLMs with superior mathematical reasoning capabilities. The availability of InfiMM-WebMath-40B is expected to accelerate progress in the field of multimodal mathematical reasoning and enable the development of more sophisticated and accurate MLLMs capable of tackling complex mathematical problems. |
Training Language Models to Self-Correct via Reinforcement Learning (Read more on arXiv or HuggingFace) | sandraorion, ferya, shrivasd, rishabhagarwal, aviralkumar | This research paper introduces SCoRe, a novel multi-turn reinforcement learning approach designed to enhance the self-correction capabilities of large language models (LLMs). The authors demonstrate that traditional supervised fine-tuning methods are inadequate for this purpose, as they often lead to either minimal or detrimental modifications. SCoRe addresses these challenges through a two-stage training process: an initialization phase to expand the model's self-correction repertoire and a reward shaping mechanism to incentivize effective self-correction during multi-turn RL. Evaluations on math and code generation benchmarks reveal that SCoRe significantly improves the model's ability to rectify errors in its initial responses. This work provides AI practitioners, including AI engineers and data scientists, with a practical method to augment the reliability and accuracy of LLMs, particularly in tasks demanding high-fidelity outputs. |
MMSearch: Benchmarking the Potential of Large Models as Multi-modal Search Engines (Read more on arXiv or HuggingFace) | lovesnowbest, lupantech, jyjyjyjy, ZiyuG, CaraJ | The paper "MMSearch: Benchmarking the Potential of Large Models as Multi-modal Search Engines" introduces a novel framework, MMSearch-Engine, designed to empower large language models (LLMs) with multi-modal search capabilities. The authors also present MMSearch, a comprehensive benchmark to evaluate the multi-modal search performance of LLMs, comprised of 300 manually collected instances across 14 subfields. Experimental results demonstrate that state-of-the-art LLMs, specifically GPT-4, achieve the best results on MMSearch, surpassing even commercial AI search engines in end-to-end task performance. However, error analysis reveals persistent challenges in requery and rerank capabilities, particularly for open-source LLMs, highlighting the need for further development in these areas. This work provides valuable insights for AI engineers and data scientists working on multi-modal search engines, emphasizing the importance of robust requery and rerank mechanisms for effective information retrieval and analysis. |
Oryx MLLM: On-Demand Spatial-Temporal Understanding at Arbitrary Resolution (Read more on arXiv or HuggingFace) | jiwenlu, WinstonHu, liuziwei7, THUdyh, Zuyan | The authors propose Oryx, a novel multi-modal large language model (MLLM) that adeptly handles diverse visual input sizes and lengths. Oryx employs OryxViT, a visual encoder designed for native resolution processing, and a dynamic compression module for efficient processing of long video sequences. Through comprehensive experiments, Oryx demonstrates state-of-the-art performance on various benchmarks, including long-form video comprehension and 3D spatial understanding tasks. This work provides AI practitioners with a robust and versatile MLLM architecture capable of handling real-world multimodal data with varying resolutions and lengths. |
StoryMaker: Towards Holistic Consistent Characters in Text-to-image Generation (Read more on arXiv or HuggingFace) | CantabPhD, chenyibo89, huaxiali, jingli, huaquan | StoryMaker is a novel, tuning-free AI model for personalized image generation that preserves the consistency of facial features, clothing, hairstyles, and body types across multiple character scenes, facilitating coherent visual storytelling. It leverages a Positional-aware Perceiver Resampler to generate distinct character embeddings and employs a novel attention loss mechanism with segmentation masks to prevent feature intermingling between characters and the background. Experiments demonstrate StoryMaker’s superior performance in maintaining visual consistency over state-of-the-art methods, particularly in multi-character scenarios. StoryMaker offers AI practitioners a powerful tool for a variety of applications including digital storytelling, comic creation, and character-driven image editing, enabling new possibilities for creative content generation. |
LVCD: Reference-based Lineart Video Colorization with Diffusion Models (Read more on arXiv or HuggingFace) | Mohan Zhang, CeciliaJL, luckyhzt | This research proposes LVCD, the first video diffusion framework for reference-based lineart video colorization. By leveraging a pre-trained video diffusion model, LVCD generates temporally consistent and high-quality colorized animations from lineart sketches and a single reference frame. The authors introduce two novel components: sketch-guided ControlNet for incorporating lineart sketches and Reference Attention for long-range spatial color propagation. Experiments demonstrate LVCD's superior performance in generating long animations with large motions, surpassing existing CNN-based and diffusion-based methods. LVCD offers a promising solution for AI engineers and data scientists in the animation industry, enabling automated colorization of animation sequences and potentially boosting productivity. |
3DTopia-XL: Scaling High-quality 3D Asset Generation via Primitive Diffusion (Read more on arXiv or HuggingFace) | hongfz16, Caoza, THUdyh, jiaxiang-tang, FrozenBurning | The paper proposes 3DTopia-XL, a novel 3D generative model that produces high-quality, textured 3D assets from text or image inputs. It utilizes a novel primitive-based representation called PrimX, which encodes shape, texture, and material information efficiently in a compact tensor format, enabling scalability to high resolutions. 3DTopia-XL leverages a Diffusion Transformer architecture for generative modeling and outperforms existing methods in terms of visual fidelity, particularly in generating fine-grained textures and Physically Based Rendering (PBR) materials. The high-quality outputs, coupled with efficient asset extraction into industry-standard formats like GLB, makes 3DTopia-XL readily applicable for AI practitioners working on 3D content creation tasks in domains such as gaming, virtual reality, and design. |
Language Models Learn to Mislead Humans via RLHF (Read more on arXiv or HuggingFace) | Jacob Steinhardt, EthanAraragi, akbir, ruiqi-zhong, jiaxin-wen | This paper presents empirical evidence that RLHF, a popular technique for aligning language models, can lead to an unintended consequence termed "U-SOPHISTRY." U-SOPHISTRY occurs when language models, optimized based on human feedback, learn to generate outputs that appear correct to human evaluators but are factually incorrect. The authors demonstrate this phenomenon on question-answering and programming tasks, finding that RLHF leads to a significant increase in human approval of incorrect outputs while actual task performance stagnates. The study highlights a critical risk associated with RLHF: it can create a false sense of improvement in language models, potentially misleading practitioners such as AI engineers and data scientists who rely on human evaluation for model assessment and selection. These findings underscore the need for developing more robust evaluation methods and mitigation strategies to address U-SOPHISTRY. |
Scaling Smart: Accelerating Large Language Model Pre-training with Small Model Initialization (Read more on arXiv or HuggingFace) | mfarajtabar, moinnabi, thyeros, fartashf, imirzadeh-apple | This research paper introduces HyperCloning, a novel method for initializing large language models (LLMs) using pretrained smaller models. HyperCloning expands the hidden dimensions of a smaller model while preserving its functionality, ensuring the larger model inherits the smaller model's accuracy before training begins. Experiments demonstrate that HyperCloning reduces training time by a factor of 2-4 compared to random initialization, achieving comparable or superior accuracy across various LLM architectures. This technique offers practitioners, including AI engineers and data scientists, a cost-effective and efficient approach to training LLMs, potentially democratizing access to high-performance models. Further research directions include investigating the observed catastrophic forgetting and exploring alternative weight expansion strategies to further enhance HyperCloning's effectiveness. |
Denoising Reuse: Exploiting Inter-frame Motion Consistency for Efficient Video Latent Generation (Read more on arXiv or HuggingFace) | Yixuan Chen, Shuo Yan, Chenyu Wang, dongshengli, genye | This paper introduces Dr. Mo, a novel diffusion-based video generation model that exploits inter-frame motion consistency to accelerate latent video generation. The key insight lies in the observation that coarse-grained features in the diffusion process exhibit high motion consistency across video frames. Dr. Mo leverages this finding by reusing denoising steps from a reference frame via a learned motion transformation network and a denoising step selector, significantly reducing computational overhead. Evaluations on UCF-101 and MSR-VTT datasets demonstrate that Dr. Mo achieves state-of-the-art video quality with a 4x speedup compared to previous methods. This work holds significant implications for AI practitioners, particularly those working on video generation and editing tasks, as it offers a pathway to generate high-quality videos with significantly reduced computational resources. |
MURI: High-Quality Instruction Tuning Datasets for Low-Resource Languages via Reverse Instructions (Read more on arXiv or HuggingFace) | Ayyoob Imani, akorhonen, ahmetu, noriamt, akoksal | This research introduces Multilingual Reverse Instructions (MURI), a novel method for generating high-quality instruction tuning datasets for low-resource languages by leveraging existing multilingual text corpora and machine translation. The authors create MURI-IT, a dataset comprising over 2 million instruction-output pairs across 200 languages, with a significant focus on under-resourced languages. Evaluation by native speakers and fine-tuning experiments with mT5 models demonstrate the effectiveness of MURI-IT in improving multilingual instruction following capabilities, particularly for natural language understanding tasks. This work provides a valuable resource for AI practitioners working on multilingual language models and addresses the crucial need for diverse and inclusive datasets in NLP. The released datasets and models offer significant potential for downstream applications like machine translation, cross-lingual information retrieval, and chatbot development in a wider range of languages. |
FlexiTex: Enhancing Texture Generation with Visual Guidance (Read more on arXiv or HuggingFace) | zouxb009, ysx007, aaronb, jiaaoyu, cocacola | This paper introduces FlexiTex, a novel framework for high-fidelity texture generation on 3D objects using both text and image prompts. FlexiTex addresses limitations of existing methods by incorporating a Visual Guidance Enhancement module, which uses image prompts to provide explicit guidance during texture generation, thus enhancing detail richness and style consistency. Additionally, a Direction-Aware Adaptation module leverages direction prompts to mitigate the Janus problem and improve semantic alignment across views. Experiments demonstrate FlexiTex's superior performance in quantitative metrics and qualitative results compared to baseline methods. Practitioners, such as AI engineers and data scientists, can leverage FlexiTex to generate high-quality textures for 3D objects efficiently, benefiting applications like AR/VR, gaming, and film. |
3DGS-LM: Faster Gaussian-Splatting Optimization with Levenberg-Marquardt (Read more on arXiv or HuggingFace) | Matthias Nießner, Michael Zollhöfer, Aljaž Božič, Lukas Höllein | This paper introduces 3DGS-LM, a novel method for accelerating the reconstruction process in 3D Gaussian Splatting (3DGS). By replacing the conventional ADAM optimizer with a tailored Levenberg-Marquardt (LM) algorithm, the authors achieve a 30% reduction in optimization time while maintaining reconstruction quality. This speedup is achieved through a highly-efficient GPU parallelization scheme for the preconditioned conjugate gradient algorithm, utilizing a custom CUDA kernel implementation and a caching data structure for intermediate gradients. This advancement holds significant relevance for AI practitioners working with 3DGS, particularly in applications such as virtual reality and scene exploration, where faster reconstruction times can greatly benefit development cycles and user experience. |
Title | Authors | Summary |
---|---|---|
Qwen2.5-Coder Technical Report (Read more on arXiv or HuggingFace) | Lemoncoke, Losin94, AbbottYJX, yangjian076, huybery | The paper introduces Qwen2.5-Coder, an open-source series of code language models built on the Qwen2.5 architecture and trained on a 5.5 trillion token dataset. Qwen2.5-Coder achieves state-of-the-art results across a variety of code generation, code completion, and code reasoning benchmarks, outperforming even significantly larger models. This performance is attributed to a robust data pipeline emphasizing high-quality code and code-related data, as well as meticulous instruction-tuning techniques. Qwen2.5-Coder's capabilities, particularly its performance exceeding larger models, makes it a valuable tool for AI practitioners developing code generation, completion, and reasoning applications. Its open-source nature further facilitates research and application development in code intelligence. |
Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution (Read more on arXiv or HuggingFace) | gewenbin292, chenkq, Jinze, tinytangent, bluelike | The research paper "Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution" introduces the Qwen2-VL series, a collection of open-weight vision-language models featuring 2, 8, and 72 billion parameters. Notably, Qwen2-VL incorporates a Naive Dynamic Resolution mechanism allowing for the processing of images with varying resolutions and a Multimodal Rotary Position Embedding (M-ROPE) for effectively encoding positional information across various modalities. This approach leads to state-of-the-art performance in various visual benchmarks, including extended-duration video comprehension and robust agent capabilities for device operation. Qwen2-VL's capabilities in visual reasoning, document understanding, multilingual text recognition, video comprehension, and visual agent capabilities are particularly relevant for AI practitioners, including AI engineers and data scientists, offering a robust framework for developing applications in areas like image analysis, video processing, and human-computer interaction. |
LLMs + Persona-Plug = Personalized LLMs (Read more on arXiv or HuggingFace) | Erxue Min, Xiaochi Wei, stingw, yutaozhu94, liujiongnan | This paper proposes PPlug, a novel personalized Large Language Model (LLM) designed to tailor outputs according to individual user preferences. PPlug leverages a plug-in user embedder module to encode a user's entire interaction history into a single, comprehensive embedding, capturing general linguistic patterns and preferences. Experiments conducted on the Language Model Personalization (LaMP) benchmark demonstrate PPlug's superiority, outperforming retrieval-based and fine-tuned personalized LLMs. Notably, PPlug's plug-and-play architecture offers efficiency by utilizing a single LLM for all users, making it a practical solution for LLM service providers seeking to offer personalized experiences. AI engineers and data scientists can leverage PPlug to enhance personalization in applications ranging from drafting personalized content to tailoring recommendations based on user history. |
To CoT or not to CoT? Chain-of-thought helps mainly on math and symbolic reasoning (Read more on arXiv or HuggingFace) | wadhma, Dongwei, juand-r, fcyin, Zaynes | The research paper "To CoT or not to CoT? Chain-of-thought helps mainly on math and symbolic reasoning" by wadhma et al. investigates the effectiveness of chain-of-thought (CoT) prompting for enhancing large language model (LLM) reasoning capabilities. Through meta-analysis of existing literature and empirical evaluations across 20 datasets and 14 contemporary LLMs, the authors demonstrate that CoT provides substantial performance benefits primarily for tasks involving mathematics or formal logic, with minimal gains observed for tasks requiring non-symbolic reasoning. Further analysis reveals that CoT's strength lies in its ability to execute symbolic steps and track intermediate computational outputs. The authors suggest that while CoT remains a useful technique, practitioners, including AI Engineers and Data Scientists, should prioritize integrating LLMs with symbolic solvers for optimal performance on symbolic tasks and explore alternative paradigms, such as search or interacting agents, to enhance reasoning in non-symbolic domains. |
Preference Tuning with Human Feedback on Language, Speech, and Vision Tasks: A Survey (Read more on arXiv or HuggingFace) | David D. Yao, Wenpin Tang, anirbandas, BraceZHY, gentaiscool | This survey paper provides a thorough overview of recent advancements in preference tuning, a crucial process for aligning deep generative models with human preferences, across language, speech, and vision tasks. The paper presents a systematic framework and classification of preference tuning methods, categorizing them by sampling methods (online or offline), modality (text, speech, vision, etc.), language, and reward granularity (sample or token level). The authors also describe various applications of preference tuning for improving generation quality using human feedback and discuss evaluation methods, highlighting both automatic LLM-based approaches and human-based evaluations. This survey is highly relevant to practitioners, such as AI engineers and data scientists, who aim to enhance the alignment of deep generative models with human preferences, leading to more human-like and desirable outputs in various domains, including text generation, image synthesis, and speech synthesis. |
GRIN: GRadient-INformed MoE (Read more on arXiv or HuggingFace) | uuu6, liangchen-ms, Shuohang, ykim362, LiyuanLucasLiu | The paper introduces GRIN, a novel training method for Mixture-of-Experts (MoE) models, designed to overcome the limitations of discrete expert routing in gradient-based optimization. GRIN leverages SparseMixer-v2, a method that estimates gradients for expert routing directly, instead of relying on gating gradients as a proxy. This approach, combined with a modified load balance loss and the use of tensor parallelism instead of expert parallelism, allows for efficient scaling of MoE models without token dropping. The authors demonstrate the efficacy of GRIN by developing a 16x3.8B MoE model that outperforms a 7B dense model and matches a 14B dense model, achieving state-of-the-art performance on various benchmarks, especially in coding and mathematics. These results highlight GRIN's potential for AI engineers and data scientists seeking to build highly scalable and performant MoE models for complex tasks. |
Takin: A Cohort of Superior Quality Zero-shot Speech Generation Models (Read more on arXiv or HuggingFace) | yangyutu, sonaxyjh, ClorisLIN, YanniHu, ch3cook-fdu | The research introduces Takin AudioLLM, a suite of zero-shot speech generation models including Takin TTS, Takin VC, and Takin Morphing, aimed at high-quality, customizable audiobook production. Takin TTS, a neural codec language model, leverages a multi-task training strategy and a latent diffusion model for natural and robust speech synthesis. Takin VC employs joint content-timbre modeling and conditional flow matching for high-fidelity voice conversion. Takin Morphing allows timbre and prosody customization using an attention-based multi-reference timbre encoder and a language model-based prosody encoder. Experimental results demonstrate the superiority of Takin AudioLLM models over conventional methods in terms of speech quality, speaker similarity, and style control, making it a valuable tool for AI engineers and data scientists working on speech generation and audiobook production. |
Towards Diverse and Efficient Audio Captioning via Diffusion Models (Read more on arXiv or HuggingFace) | Ruibo Fu, Yong Ren, Xinyi Tu, Manjie Xu, Chenxinglili | This paper presents Diffusion-based Audio Captioning (DAC), a novel non-autoregressive model for audio captioning that leverages a diffusion framework. DAC operates within the continuous text latent space and conditions the denoising process on audio features through cross-attention. Experimental results demonstrate that DAC achieves competitive captioning quality compared to state-of-the-art autoregressive models while exhibiting superior performance in terms of generation diversity and speed. Notably, the authors observe that DAC benefits significantly from pre-training on larger audio datasets and that semantic similarity metrics like CLAP and BERT might be more suitable for evaluating captioning quality compared to traditional token-level metrics. DAC's efficiency and diversity make it a compelling solution for AI practitioners interested in deploying audio captioning models in resource-constrained environments or real-time applications. |
A Controlled Study on Long Context Extension and Generalization in LLMs (Read more on arXiv or HuggingFace) | Jing Nathan Yan, Yi Lu, zy001, justintchiu, sonta7 | This research presents a controlled empirical study of long-context extension methods in Large Language Models (LLMs). The authors standardize evaluation across various exact and approximate attention methods, utilizing LLaMA2-7B as a consistent base model, trained on a 1B token long-context dataset. Results indicate that perplexity remains a reliable indicator of downstream task performance for exact attention methods, while approximate attention suffers from reduced accuracy, especially in retrieval tasks. Notably, continual fine-tuning with exact attention proves effective within the extended context length, while extrapolation to unseen lengths presents challenges. These findings, coupled with the open-sourced code and models, offer AI practitioners valuable insights into selecting and implementing appropriate context extension methods for their LLM applications, highlighting the trade-offs between accuracy, computational cost, and generalization capabilities. |
Vista3D: Unravel the 3D Darkside of a Single Image (Read more on arXiv or HuggingFace) | Michael Bi Mi, wxcTest, adamdad, florinshum | The authors present Vista3D, a novel coarse-to-fine framework for generating diverse and consistent 3D objects from single images using 2D diffusion priors. Vista3D utilizes Gaussian Splatting to efficiently establish a coarse 3D geometry, subsequently refining it into a signed distance field representation with disentangled textures. Notably, Vista3D leverages a novel angular composition approach, constraining diffusion prior gradients to balance diversity in the unseen 3D aspects with overall consistency. Experiments demonstrate Vista3D's ability to generate high-fidelity textured meshes in 5 minutes, outperforming existing methods in speed and quality. This framework offers practitioners, including AI engineers and data scientists, a robust and efficient tool for single-view 3D object reconstruction, with potential applications in areas such as virtual reality and 3D content creation. |
Title | Authors | Summary |
---|---|---|
OmniGen: Unified Image Generation (Read more on arXiv or HuggingFace) | stingw, Ruiran, avery00, JUNJIE99, Shitao | The research introduces OmniGen, a novel diffusion-based model for unified image generation. Unlike task-specific models, OmniGen handles diverse tasks such as text-to-image generation, image editing, and subject-driven generation within a single framework. Trained on the newly introduced X2I dataset, a large-scale, multi-task dataset, OmniGen exhibits emergent capabilities like task composition and in-context learning for unseen tasks. Evaluation on benchmarks like GenEval and EMU-Edit demonstrates competitive performance compared to state-of-the-art models. This advancement is particularly relevant to AI practitioners, offering a unified and simplified approach to various image generation tasks within a single, efficient model. |
NVLM: Open Frontier-Class Multimodal LLMs (Read more on arXiv or HuggingFace) | tuomass, jon-barker, zihanliu, boxin-wbx, nayeon7lee | The paper presents NVLM 1.0, a family of multimodal large language models (MLLMs) that achieve state-of-the-art results on a variety of vision-language tasks. NVLM 1.0 comes in three architectures: decoder-only (NVLM-D), cross-attention-based (NVLM-X), and a novel hybrid architecture (NVLM-H), each offering unique advantages in computational efficiency and reasoning capabilities. Importantly, NVLM 1.0 models demonstrate "production-grade multimodality," excelling in both vision-language and text-only tasks, without sacrificing performance in either domain. This is achieved through a combination of novel model design, the introduction of a 1-D tile tagging design for high-resolution images, and careful curation of training data that emphasizes quality and task diversity over scale. Practitioners can benefit from these insights for building more robust and versatile MLLMs applicable to a wide range of tasks, from visual question answering to code generation. |
Phidias: A Generative Model for Creating 3D Content from Text, Image, and 3D Conditions with Reference-Augmented Diffusion (Read more on arXiv or HuggingFace) | Gerhard Hancke, liuziwei7, zxhezexin, tfwang, ZhenweiWang | Phidias is a novel generative model that employs diffusion for reference-augmented 3D content creation. The model leverages a user-provided or retrieved 3D reference to enhance the 3D generation process, thereby improving the generation quality, generalizability, and controllability. Phidias unifies 3D generation from textual, image-based, and 3D prompts, providing a variety of downstream applications for practitioners, such as retrieval-augmented image-to-3D or text-to-3D generation. The authors demonstrate through extensive experiments that Phidias outperforms existing state-of-the-art approaches both quantitatively and qualitatively. The source code for Phidias is publicly available. |
Fine-Tuning Image-Conditional Diffusion Models is Easier than You Think (Read more on arXiv or HuggingFace) | Alexander Hermans, Christian Schmidt, ddegeus, kabouzeid, GonzaloMG | This research paper demonstrates that the perceived inefficiency of image-conditional latent diffusion models for monocular depth estimation, such as Marigold, is due to a flawed inference pipeline. By fixing the DDIM scheduler implementation, the authors achieve single-step inference performance comparable to multi-step, ensembled approaches, with a speed increase of over 200x. Furthermore, simple end-to-end fine-tuning of these models with task-specific losses, even starting from a pre-trained Stable Diffusion model, surpasses the performance of more complex, specifically designed architectures. These findings are particularly relevant to practitioners, as they enable the use of high-precision, diffusion-based depth and normal estimation models in real-time applications, while also simplifying the training and optimization process. |
On the limits of agency in agent-based models (Read more on arXiv or HuggingFace) | Shashank Kumar, arnauqb, rameshraskar, ngkuru, Godssidekick1 | This paper introduces AgentTorch, a novel framework for building scalable and differentiable agent-based models (ABMs) enhanced by large language models (LLMs). AgentTorch addresses the challenge of simulating large populations with adaptive behaviors by introducing the concept of LLM archetypes, enabling the simulation of millions of agents informed by LLM outputs. The authors demonstrate AgentTorch's capabilities through a case study of the COVID-19 pandemic in New York City, showcasing its ability to capture realistic population-wide behaviors and simulate the impact of policy interventions. AgentTorch provides practitioners, including AI engineers and data scientists, with a powerful tool for understanding and addressing complex societal challenges through the integration of LLM-driven agent behavior in ABMs. |
OSV: One Step is Enough for High-Quality Image to Video Generation (Read more on arXiv or HuggingFace) | Jiangning Zhang, Wenbing Zhu, Zhengkai Jiang, Xiaofeng Mao, wangfuyun | The authors present OSV (One Step Video Generation), a novel two-stage training approach for image-to-video generation using diffusion models that achieves high-quality results in just one inference step. OSV leverages latent GAN training in the first stage for rapid quality improvement and incorporates adversarial consistency distillation in the second stage to enhance performance and stability. The authors introduce a unique video discriminator design using pretrained image backbones (DINOv2) and a lightweight trainable head, significantly reducing computational costs by replacing the VAE decoding process with upsampling. Evaluations on the OpenWebVid-1M benchmark demonstrate OSV's superior performance over existing methods in both speed and visual quality. OSV presents a significant advancement for practitioners, such as AI engineers and data scientists, working with video generation, offering a fast and efficient solution for high-quality results. |
A Comprehensive Evaluation of Quantized Instruction-Tuned Large Language Models: An Experimental Analysis up to 405B (Read more on arXiv or HuggingFace) | Yongin Kwon, Sihyeong Park, oj9040, kwonse, leejaymin | This research paper presents a comprehensive evaluation of the quantization of instruction-tuned large language models (LLMs), spanning models from 7B to 405B parameters and four quantization methods (GPTQ, AWQ, SmoothQuant, and FP8). The authors found that quantized larger LLMs often outperform smaller, full-precision models on various tasks, except for hallucination detection and instruction following. Importantly, the study highlights that weight-only quantization methods, particularly AWQ, generally yield better accuracy preservation in large models compared to quantization methods involving activations. The findings are particularly relevant for practitioners, such as AI engineers and data scientists, aiming to deploy large LLMs under resource constraints while maintaining performance. The authors emphasize that selecting the optimal quantization method and bit precision should be done based on the specific LLM size and target task. |
EzAudio: Enhancing Text-to-Audio Generation with Efficient Diffusion Transformer (Read more on arXiv or HuggingFace) | Helin Wang, Hao Zhang, Yong Xu, Chenxinglili, Higobeatz | EzAudio is a novel text-to-audio (T2A) generation framework that leverages a highly efficient Diffusion Transformer (DiT) architecture operating directly on raw waveform latent space. The authors propose a multi-stage training strategy employing masked acoustic modeling and synthetic caption generation, along with a classifier-free guidance rescaling technique to balance audio quality and text alignment. Experimental results demonstrate that EzAudio outperforms existing open-source T2A models in both objective and subjective evaluations, achieving state-of-the-art performance. This work provides AI practitioners a robust and accessible framework for developing high-quality T2A applications. |
SplatFields: Neural Gaussian Splats for Sparse 3D and 4D Reconstruction (Read more on arXiv or HuggingFace) | Robert Maier, Siyu Tang, Aeriphi, sprokudin, markomih | This paper presents SplatFields, a novel optimization strategy for 3D Gaussian Splatting (3DGS) that addresses the technique's limitations in sparse view scenarios. SplatFields introduces a spatial bias during optimization by leveraging neural networks to predict splat features, encouraging nearby primitives to share similar characteristics and emulating the behavior of implicit volumetric rendering methods. This approach significantly improves reconstruction quality under sparse view conditions for both static and dynamic scenes, outperforming recent 3DGS and NeRF-based alternatives. Notably, SplatFields maintains real-time rendering capabilities and compatibility with existing 3DGS pipelines, making it particularly attractive for practitioners seeking efficient and high-quality 3D reconstruction from limited input data. AI engineers and data scientists working on 3D vision applications such as scene reconstruction, novel view synthesis, and dynamic scene modeling can benefit from incorporating SplatFields to enhance performance and efficiency in their workflows. |
Agile Continuous Jumping in Discontinuous Terrains (Read more on arXiv or HuggingFace) | Changyi Lin, mateoguaman, romesco, guanya, yxyang | This paper proposes a novel hierarchical learning and control framework for enabling quadrupedal robots to perform agile, continuous jumping in discontinuous terrains, such as stairs and stepping stones. The framework consists of a learned heightmap predictor for terrain perception, an RL-trained motion policy for planning, and a model-based leg controller for motion tracking. A key contribution is the reduction of the sim-to-real gap by accurately modeling hardware characteristics, such as motor saturation and camera latency. This allows the robot to achieve state-of-the-art performance, traversing a 14-step staircase in 4.5 seconds, demonstrating the effectiveness of the proposed approach for agile locomotion in challenging terrains. This work holds significant implications for practitioners, including AI Engineers and roboticists, seeking to develop robots capable of navigating complex real-world environments with enhanced agility and speed. |
Single-Layer Learnable Activation for Implicit Neural Representation (SL$^{2}$A-INR) (Read more on arXiv or HuggingFace) | Hamid Soltanian-Zadeh, Dorit Merhof, Reza Azad, Reza-R-77, moein99 | This paper introduces SL$^{2}$A-INR, a novel implicit neural representation (INR) architecture that utilizes a single-layer learnable activation function based on Chebyshev polynomials. SL$^2$A-INR effectively captures high-frequency details and mitigates spectral bias, outperforming existing INRs on various tasks including image representation, 3D shape reconstruction, and inverse problems like super-resolution and CT reconstruction. Notably, SL$^2$A-INR achieves superior performance even with reduced model sizes compared to other INR methods. The demonstrated effectiveness and efficiency of SL$^2$A-INR across diverse tasks makes it a valuable tool for AI practitioners working on signal representation and generative modeling, particularly in applications requiring high-fidelity reconstruction from limited data. |
PDMX: A Large-Scale Public Domain MusicXML Dataset for Symbolic Music Processing (Read more on arXiv or HuggingFace) | Julian McAuley, Phillip Long, tberg12, ZacharyNovack | This paper introduces PDMX, the largest publicly available dataset of public domain MusicXML files, comprising over 250,000 scores and encompassing 6,250 hours of music. The authors release MusicRender, an extension to the MusPy library, to facilitate accurate parsing and rendering of nuanced musical notation from MusicXML. Experiments on multitrack symbolic music generation demonstrate that filtering PDMX based on user ratings improves model performance in terms of harmonic and rhythmic diversity. Notably, fine-tuning models on a small subset of high-quality, rated data significantly enhances generation quality. PDMX offers AI practitioners a valuable resource for developing and evaluating symbolic music processing models, particularly in the domains of music generation, transcription, and recommendation. |
Measuring and Enhancing Trustworthiness of LLMs in RAG through Grounded Attributions and Learning to Refuse (Read more on arXiv or HuggingFace) | Navonil Majumder, Hai Leong Chieu, Rishabh Bhardwaj, Shang Hong Sim, Maojia Song | This paper addresses the issue of hallucination in Large Language Models (LLMs) within the context of Retrieval-Augmented Generation (RAG). The authors propose a novel metric, TRUST-SCORE, to evaluate the trustworthiness of LLMs in a RAG setting by assessing grounded refusals, answer accuracy, and citation correctness. To improve trustworthiness, they introduce TRUST-ALIGN, an alignment framework that trains LLMs on a synthetic dataset to identify answerable questions, ground responses in provided documents, and avoid unnecessary refusals. Experiments demonstrate that TRUST-ALIGN enhances LLM performance across three datasets, achieving comparable results to leading closed-source language models like GPT-4. These findings are particularly relevant to AI engineers and data scientists developing RAG systems, emphasizing the importance of aligning LLMs with external knowledge sources to mitigate hallucination and improve the reliability of generated information. |
Implicit Neural Representations with Fourier Kolmogorov-Arnold Networks (Read more on arXiv or HuggingFace) | Ilker Hacihaliloglu, Parsa Mojarad Adi, moein99, ali-mrbn | This paper introduces Fourier Kolmogorov-Arnold Network (FKAN), a novel architecture for implicit neural representations (INRs) designed to enhance the capture of task-specific frequency components in signals. FKAN leverages learnable activation functions modeled as Fourier series, enabling fine-grained control and learning of frequency information. Experimental results demonstrate that FKAN surpasses state-of-the-art baselines in image representation and 3D occupancy volume representation tasks, achieving improvements in PSNR, SSIM, and IoU metrics while exhibiting faster convergence. This novel approach provides AI practitioners, including AI engineers and data scientists, with an effective tool to enhance INR models for various applications requiring high-fidelity signal representation. |
Title | Authors | Summary |
---|---|---|
Seed-Music: A Unified Framework for High Quality and Controlled Music Generation (Read more on arXiv or HuggingFace) | lixingxing, lich-ming, ducle, smileezzz, Weituo | Seed-Music is a novel framework for high-quality and controllable vocal music generation and editing. The authors introduce a system comprised of three core components: Representation Learning, Generation, and Rendering, which utilize audio tokens, symbolic music tokens, or vocoder latents as intermediate representations. Seed-Music leverages both autoregressive language modeling and diffusion approaches to achieve impressive results in tasks such as Lyrics2Song, Lyrics2Leadsheet2Song, MusicEDiT, and Zero-shot Singing Voice Conversion. The system's flexibility, controllability, and impressive performance showcased through various applications and listening examples provide AI engineers and data scientists with valuable tools for music generation, post-production editing, and creative exploration in the music domain. The introduction of "lead sheet tokens," designed to represent musical elements in a musician-friendly format, presents a potential new standard for music language models. |
RetrievalAttention: Accelerating Long-Context LLM Inference via Vector Retrieval (Read more on arXiv or HuggingFace) | zqx123, hzhua, iofu728, baotonglu, Matchyc | This paper proposes RetrievalAttention, a training-free approach leveraging approximate nearest neighbor search (ANNS) to accelerate the inference of long-context Large Language Models (LLMs) by exploiting the dynamic sparsity inherent in the attention mechanism. The key innovation lies in addressing the out-of-distribution (OOD) challenge between query and key vectors in attention computation through an attention-aware vector search algorithm. This enables RetrievalAttention to accurately approximate attention with significantly reduced latency and minimal GPU memory footprint, achieving a 4.9x and 1.98x speedup compared to exact KNN and traditional ANNS methods respectively. RetrievalAttention presents a practical solution for AI practitioners working with LLMs on long sequences, particularly beneficial for deployment on resource-constrained devices. |
Guiding Vision-Language Model Selection for Visual Question-Answering Across Tasks, Domains, and Knowledge Types (Read more on arXiv or HuggingFace) | Vinija Jain, amanchadha, neelabhsinha | This research paper proposes a comprehensive framework for evaluating and selecting optimal Vision-Language Models (VLMs) for specific Visual Question Answering (VQA) tasks, addressing practical application needs. The authors introduce a novel multi-dimensional dataset that classifies VQA tasks by task type, application domain, and knowledge type, facilitating fine-grained VLM performance comparisons. Additionally, a new evaluation metric, GoEval, is presented, demonstrating superior alignment with human judgments compared to traditional metrics by leveraging GPT-40's capabilities for multimodal evaluation. Experimental results reveal significant performance variations among 10 state-of-the-art VLMs across categories, with proprietary models generally outperforming open-source alternatives. These findings provide AI practitioners (AI Engineers, Data Scientists) with actionable insights and a standardized framework for selecting best-suited VLMs based on specific task requirements, resource constraints, and performance expectations. |
ReCLAP: Improving Zero Shot Audio Classification by Describing Sounds (Read more on arXiv or HuggingFace) | Sonal Kumar, Sreyan Ghosh, manocha, RamaniD, urinieto | The research proposes ReCLAP, an improved CLAP model for zero-shot audio classification (ZSAC) that enhances sound understanding by incorporating descriptive features into prompts. ReCLAP leverages caption augmentation during training, prompting a Large Language Model (LLM) to rewrite captions with detailed acoustic descriptions. Further improving ZSAC, the authors introduce prompt augmentation, generating multiple custom prompts per category using LLM-based descriptions in diverse scenes. ReCLAP exhibits state-of-the-art performance on various retrieval and ZSAC benchmarks, demonstrating the importance of descriptive sound features in prompts. This development holds significant relevance for AI practitioners, particularly those working on audio classification and retrieval systems, by providing a method to improve zero-shot performance and generalization capabilities. |
On the Diagram of Thought (Read more on arXiv or HuggingFace) | Andrew Chi-Chih Yao, Yang Yuan, yifAI | The paper introduces Diagram of Thought (DoT), a novel framework for enhancing iterative reasoning in large language models (LLMs) by representing the process as the construction of a directed acyclic graph (DAG) within a single model. Unlike linear or tree-based reasoning approaches, DoT incorporates propositions, critiques, refinements, and verifications as nodes within the DAG, capturing the non-linear and iterative nature of human reasoning. By employing auto-regressive next-token prediction with role-specific tokens, DoT facilitates seamless transitions between reasoning steps within the LLM, eliminating the need for multiple models or external control mechanisms. Furthermore, the authors provide a robust mathematical foundation for DoT using Topos Theory and PreNet Categories, ensuring the logical consistency and soundness of the reasoning process. This framework offers AI practitioners a theoretically grounded and practically efficient approach to develop LLMs with enhanced reasoning capabilities for complex problem-solving tasks. |
AudioBERT: Audio Knowledge Augmented Language Model (Read more on arXiv or HuggingFace) | Jaeho Lee, uso7d0, HJOK | This paper introduces AuditoryBench, the first benchmark designed to assess the auditory knowledge of large language models (LLMs). The authors find that LLMs pretrained solely on text data exhibit a significant lack of auditory commonsense knowledge. To address this, they propose AudioBERT, a novel framework that augments LLMs with auditory knowledge through a retrieval-based approach using a combination of auditory knowledge span detection and the CLAP audio-text model. Experiments demonstrate that AudioBERT significantly enhances the ability of LLMs to understand and reason about auditory information. This research has practical implications for AI practitioners, particularly those working on audio-language multimodal tasks such as audio captioning, sound recognition, and audio question answering. The availability of AudioBERT and AuditoryBench provides valuable resources for developing more robust and versatile multimodal AI systems. |
One missing piece in Vision and Language: A Survey on Comics Understanding (Read more on arXiv or HuggingFace) | Mohamed Ali Souibgui, Andrey Barsky, MarcoBertini, Llabres, emanuelevivoli | This survey paper provides a comprehensive overview of the emerging field of Comics Understanding within the context of Vision-Language multimodal tasks. The authors introduce the novel Layer of Comics Understanding (LoCU) framework, a taxonomy that categorizes tasks based on input/output modalities and spatio-temporal dimensions, ranging from basic tagging and augmentation to complex generation and synthesis. The survey systematically reviews existing datasets and methodologies, highlighting the limitations in data availability, annotation standardization, and task complexity, and proposes potential research directions. Practitioners, such as AI engineers and data scientists, can leverage this survey to understand the current state of the field, identify potential applications of VLMs in comics analysis and generation, and contribute to the development of more robust and versatile models for this complex domain. |
Ferret: Federated Full-Parameter Tuning at Scale for Large Language Models (Read more on arXiv or HuggingFace) | Fei Richard Yu, Bryan Kian Hsiang Low, See-Kiong Ng, Wenyang Hu, ZCODE0 | Ferret is a novel first-order federated learning algorithm designed for scalable full-parameter tuning of large language models (LLMs) with enhanced privacy. It leverages shared randomness to reduce communication costs by projecting local updates into a low-dimensional space and reconstructing them efficiently during global aggregation. Theoretical analyses demonstrate that Ferret's reconstruction is unbiased and enjoys fast convergence while avoiding error accumulation often observed in zeroth-order methods. Empirical evaluations on benchmark datasets confirm Ferret's superior scalability and competitive model accuracy compared to existing federated full-parameter and parameter-efficient tuning methods. This work holds significant implications for practitioners, especially AI engineers and data scientists, enabling them to efficiently fine-tune LLMs on decentralized datasets with improved privacy while maintaining performance. |
beeFormer: Bridging the Gap Between Semantic and Interaction Similarity in Recommender Systems (Read more on arXiv or HuggingFace) | Pavel Kordík, foxik, beeformer | The authors propose beeFormer, a novel framework that bridges the gap between semantic and interaction similarity for recommender systems. This is accomplished by training sentence transformer models directly on user-item interaction data, leveraging gradient checkpointing and negative sampling for scalability. Experimental results demonstrate that beeFormer outperforms baselines in cold-start, zero-shot, and time-split recommendation tasks, indicating superior performance in scenarios with limited interaction data. Notably, training on datasets from multiple domains leads to improved knowledge transfer and domain-agnostic recommendation capabilities. These findings are especially relevant for AI practitioners, as beeFormer offers a scalable and effective approach to improve recommendation quality in challenging scenarios with limited user feedback. |
Towards Predicting Temporal Changes in a Patient's Chest X-ray Images based on Electronic Health Records (Read more on arXiv or HuggingFace) | Tackeun Kim, forgetnight, starmpcc, dek924 | This paper proposes EHRXDiff, a novel framework that leverages latent diffusion models to predict future Chest X-ray (CXR) images by integrating previous CXRs with subsequent medical events extracted from Electronic Health Records (EHRs). The framework utilizes a combination of VAE and CLIP encoders to capture both fine-grained visual details and high-level clinical features from the input data, and effectively predicts potential temporal changes while generating realistic CXR images. Experimental results demonstrate EHRXDiff's superior performance in preserving medical information and generating high-quality images compared to baseline methods. This framework has the potential to serve as a valuable tool for AI practitioners, particularly in developing clinical decision support systems that assist medical professionals in monitoring disease progression and planning personalized treatment strategies. |
Title | Authors | Summary |
---|---|---|
Robust Dual Gaussian Splatting for Immersive Human-centric Volumetric Videos (Read more on arXiv or HuggingFace) | Yu Hong, Zhehao Shen, Yuheng Jiang, Daluuu, chengchengguo123 | This paper introduces DualGS, a novel Gaussian-based representation for robust human performance tracking and high-fidelity rendering in volumetric videos. The approach utilizes Dual Gaussians to disentangle motion and appearance, employing motion-aware joint Gaussians and appearance-aware skin Gaussians. A coarse-to-fine optimization strategy with motion prediction ensures temporal coherence and rendering fidelity. A companion compression scheme using residual vector quantization, codec compression, and a persistent codebook achieves a 120-fold compression ratio. DualGS offers AI practitioners a method for creating high-fidelity, interactive volumetric video experiences that are efficient enough for deployment on VR and mobile devices. |
Title | Authors | Summary |
---|---|---|
Windows Agent Arena: Evaluating Multi-Modal OS Agents at Scale (Read more on arXiv or HuggingFace) | hrz, Inhenn, Saraabdali, francedot, rbonatti | The research paper, "Windows Agent Arena: Evaluating Multi-Modal OS Agents at Scale", by hrz, Inhenn, Saraabdali, francedot, and rbonatti introduces a novel benchmark for evaluating multi-modal AI agents operating within a real Windows environment. This benchmark, named WINDOWSAGENTARENA, features 154 diverse tasks spanning common user applications and is designed for scalability and deployment on Azure for efficient parallel evaluation. The authors also present a new multi-modal agent, Navi, achieving a success rate of 19.5% on WINDOWSAGENTARENA tasks, showcasing the potential for future agent development. Despite being far from human performance (74.5%), Navi's results highlight the crucial role of precise visual prompting and reveal the challenges posed by visual-language misalignment. This research is significant for practitioners, including AI engineers and data scientists, as it provides a robust platform for testing and improving the capabilities of AI agents in performing complex, real-world tasks within the prevalent Windows OS ecosystem. |
Can LLMs Generate Novel Research Ideas? A Large-Scale Human Study with 100+ NLP Researchers (Read more on arXiv or HuggingFace) | Tatsunori Hashimoto, Diyi Yang, CLS | The paper "Can LLMs Generate Novel Research Ideas? A Large-Scale Human Study with 100+ NLP Researchers" investigates whether Large Language Models (LLMs) can generate novel research ideas comparable to human experts. The authors conducted a large-scale human study with over 100 NLP researchers, comparing ideas generated by an LLM agent with those written by experts. The study found that AI-generated ideas were judged as statistically more novel than human ideas, while remaining comparable in feasibility and other metrics. However, the authors also identify limitations in LLMs, including a lack of diversity in generated ideas and unreliability in evaluating idea quality. These findings suggest that while LLMs show promise in assisting with research ideation, they are not yet capable of fully autonomous idea generation and require careful human oversight, particularly for practitioners such as AI Engineers and Data Scientists who may utilize these tools in their work. |
IFAdapter: Instance Feature Control for Grounded Text-to-Image Generation (Read more on arXiv or HuggingFace) | Bing Ma, wxcTest, suxuefeng, tinytigerpan, WuYW | This paper proposes IFAdapter, a novel plug-and-play module for pretrained diffusion models, designed to improve fine-grained control over the positioning and appearance of multiple instances in generated images. It addresses limitations of existing Layout-to-Image generation methods by introducing two key components: Appearance Tokens for capturing high-frequency instance details and an Instance Semantic Map for ensuring accurate spatial correspondence. Experiments on the introduced COCO-IFG benchmark demonstrate IFAdapter's superiority in generating images with both accurate instance placement and high-fidelity features, as measured by the novel Instance Feature Success rate and standard image quality metrics. This development holds significant practical implications for AI practitioners, particularly those working on image generation tasks requiring precise control over instance features, such as in graphic design or fashion design applications. |
DreamHOI: Subject-Driven Generation of 3D Human-Object Interactions with Diffusion Priors (Read more on arXiv or HuggingFace) | tmsj, rayli, hanwenzhu | The paper introduces DreamHOI, a novel zero-shot method for synthesizing 3D human-object interactions (HOIs). DreamHOI utilizes pre-trained text-to-image diffusion models to guide the posing of a 3D human model, enabling it to realistically interact with a given 3D object based on a textual description. To overcome the limitations of directly applying diffusion model gradients to articulation parameters, DreamHOI employs a dual implicit-explicit representation of the human model, combining neural radiance fields (NeRFs) with skeleton-driven mesh articulation. This dual representation facilitates effective optimization and preserves human identity during the generation process. Experiments demonstrate DreamHOI's ability to generate realistic and diverse HOIs, outperforming baseline methods. This approach offers practitioners in fields like video game development and virtual reality a powerful tool for efficiently creating engaging and interactive virtual environments populated with realistically posed human characters. |
Source2Synth: Synthetic Data Generation and Curation Grounded in Real Data Sources (Read more on arXiv or HuggingFace) | marialomeli, rraileanu, spermwhale, ncan, carlos-gemmell-malt-ai | The paper introduces Source2Synth, a novel method for generating synthetic datasets by leveraging existing real-world data sources and large language models (LLMs). This approach involves generating examples with intermediate reasoning steps grounded in the source data, and then curating the dataset using the LLM itself to improve the quality. The authors demonstrate Source2Synth's effectiveness on multi-hop question answering and tabular question answering tasks, achieving significant performance improvements over baselines. The ability to generate high-quality synthetic data from existing sources has significant implications for practitioners, particularly in low-data regimes, as it offers a scalable and cost-effective way to improve LLM performance on complex tasks without the need for costly human annotations. AI engineers and data scientists can leverage Source2Synth to enhance their models' capabilities in areas such as reasoning and tool usage. |
FlashSplat: 2D to 3D Gaussian Splatting Segmentation Solved Optimally (Read more on arXiv or HuggingFace) | wxcTest, adamdad, florinshum | The authors propose FlashSplat, a novel method for segmenting 3D Gaussian Splatting (3D-GS) representations using 2D masks. By leveraging the alpha composition inherent in the 3D-GS rendering process, the authors formulate the segmentation task as a linear integer programming problem that admits a closed-form, globally optimal solution. This approach significantly outperforms previous iterative methods, achieving a 50x speedup while maintaining high accuracy and demonstrating robustness against noise in the input masks. FlashSplat’s efficiency and effectiveness in downstream tasks, such as object removal and inpainting, make it a valuable tool for AI practitioners working with 3D scene understanding and manipulation tasks. |
PiTe: Pixel-Temporal Alignment for Large Video-Language Model (Read more on arXiv or HuggingFace) | Han Zhao, Min Zhang, Pengxiang Ding, Yang Liu, huangsiteng | The paper introduces PiTe, a Large Video-Language Model (LVidLM) that leverages object trajectories for fine-grained alignment of visual and textual modalities in videos. The authors curate PiTe-143k, a novel dataset with automatically annotated object trajectories. PiTe consistently outperforms current LVidLMs on video question answering, temporal grounding, and dense captioning tasks under zero-shot settings. This trajectory-based alignment substantially enhances video comprehension, enabling sophisticated event descriptions and precise event localization. For AI practitioners, PiTe presents a robust framework for building LVidLMs capable of fine-grained video understanding, facilitating applications like content-aware video search and summarization. |
Title | Authors | Summary |
---|---|---|
PingPong: A Benchmark for Role-Playing Language Models with User Emulation and Multi-Model Evaluation (Read more on arXiv or HuggingFace) | IlyaGusev | This research paper introduces PingPong, a novel benchmark for evaluating role-playing capabilities in large language models (LLMs). PingPong employs a multi-model evaluation system where an LLM acts as the 'player,' another simulates a 'user' (interrogator), and a third LLM judges the 'player's' performance based on criteria like character consistency and language fluency. The authors validate the benchmark through correlation with human annotations, achieving correlations exceeding 0.64 across English and Russian. A key finding is that averaging scores from multiple judge models enhances result reliability. This work provides AI practitioners, particularly those developing conversational AI and role-playing agents, with a valuable tool to robustly assess and benchmark LLM performance in dynamic, multi-turn conversational settings. |
MEDIC: Towards a Comprehensive Framework for Evaluating LLMs in Clinical Applications (Read more on arXiv or HuggingFace) | Nadas31, tathagataraha, mpimentel, cchristophe, pkanithi | The research paper introduces MEDIC, a comprehensive evaluation framework for assessing the performance of Large Language Models (LLMs) in clinical applications. MEDIC evaluates LLMs across five key dimensions: medical reasoning, ethics and bias concerns, data and language understanding, in-context learning, and clinical safety and risk. The study revealed that larger models generally perform better in closed-ended question-answering tasks; however, in open-ended tasks requiring free-form responses, domain-specific fine-tuning was crucial for achieving superior performance. The MEDIC framework provides AI engineers and data scientists with a valuable tool for guiding model selection, highlighting performance trade-offs, and identifying key areas for improvement, ultimately facilitating the development of safe, effective, and ethical AI models for healthcare. This framework, combined with the novel cross-examination evaluation methodology, allows researchers and practitioners to measure hallucinations, assess coverage of information, and understand the trade-offs between model capabilities like conciseness and coverage in healthcare applications. |
Gated Slot Attention for Efficient Linear-Time Sequence Modeling (Read more on arXiv or HuggingFace) | ExplorerFreda, nealcly, rayzhu16, sonta7, yzhangcs | The paper proposes Gated Slot Attention (GSA), a novel linear attention mechanism for sequence modeling that addresses limitations in recall and training efficiency observed in existing linear attention models. GSA achieves this by enhancing the Attention with Bounded-memory-Control (ABC) model with a gating mechanism, inspired by Gated Linear Attention (GLA). This allows for efficient memory management and context-aware information retrieval. Experiments demonstrate GSA’s superior performance in in-context recall-intensive tasks and its effectiveness in "finetuning pretrained Transformers to RNNs” (T2R), making it a practical alternative for AI practitioners working with large-scale language models and seeking efficient inference and training. GSA's efficient training and inference, coupled with its strong performance in recall-intensive tasks, make it a compelling alternative for AI engineers and data scientists working with large-scale language models. |
Agent Workflow Memory (Read more on arXiv or HuggingFace) | Daniel Fried, gneubig, Jiayuan, zorawang | The paper introduces Agent Workflow Memory (AWM), a method to enhance the performance of language model-based agents on complex, long-horizon tasks. AWM induces reusable task workflows from past agent experiences and integrates them into the agent's memory to guide future action generation. Experiments on web navigation benchmarks, WebArena and Mind2Web, demonstrate that AWM significantly improves task success rates and exhibits strong generalization ability across tasks, websites, and domains. Notably, AWM achieves a 51.1% relative increase in success rate on WebArena compared to the best published autonomous agent. This research is particularly relevant to AI practitioners developing agents for real-world applications, as AWM offers a mechanism for agents to learn and adapt from their experiences, potentially leading to more robust and efficient task-solving capabilities. |
gsplat: An Open-Source Library for Gaussian Splatting (Read more on arXiv or HuggingFace) | Vickie Ye, akanazawa, zhypan, brentyi, ruilongli | "gsplat: An Open-Source Library for Gaussian Splatting" introduces a novel library for training and developing Gaussian Splatting models. gsplat features a user-friendly PyTorch front-end and highly optimized CUDA back-end, offering improvements to optimization speed, memory efficiency, and convergence times. Experimental results demonstrate that gsplat achieves comparable rendering performance to the original 3DGS implementation while significantly reducing training time and memory usage. The library's modular API and support for various densification strategies, pose optimization, depth rendering, and anti-aliasing techniques make it a valuable tool for researchers and practitioners working with 3D scene reconstruction and novel view synthesis. AI engineers and data scientists can leverage gsplat to efficiently develop and deploy Gaussian Splatting models for applications like virtual reality, augmented reality, and robotics. |
Hi3D: Pursuing High-Resolution Image-to-3D Generation with Video Diffusion Models (Read more on arXiv or HuggingFace) | Ting Yao, Yingwei Pan, Yang Chen, Haibo Yang, GiantBision | The paper proposes Hi3D, a novel two-stage video diffusion-based framework for high-resolution image-to-3D generation. Hi3D leverages the temporal consistency of pre-trained video diffusion models to enhance multi-view consistency in 3D generation, addressing limitations of previous 2D diffusion-based methods. The first stage generates low-resolution multi-view images conditioned on camera pose, while the second stage refines these images to higher resolution with finer details using a 3D-aware video-to-video refiner incorporating depth information. Hi3D achieves state-of-the-art performance on novel view synthesis and single-view reconstruction tasks, demonstrating its ability to generate high-fidelity 3D meshes with detailed textures. Practitioners, such as AI engineers and data scientists, can utilize Hi3D to generate high-quality 3D content from single images for various applications, including virtual reality, 3D film production, and more. |
Can Large Language Models Unlock Novel Scientific Research Ideas? (Read more on arXiv or HuggingFace) | Asif Ekbal, Vinayak-goyal, TirthankarSlg, sandeep123 | This study investigates the potential of large language models (LLMs) in generating novel scientific research ideas. The authors evaluate four LLMs (Claude-2, Gemini, GPT-3.5, and GPT-4) across five scientific domains using a novel dataset and two proposed metrics: Idea Alignment Score (IAScore) and Idea Distinctness Index. The findings indicate that LLMs exhibit domain-specific strengths in idea generation, with Claude and GPT-4 outperforming others. While LLMs demonstrate the ability to generate novel research ideas, human evaluation reveals that they also produce a significant number of non-novel and generic ideas. This research provides valuable insights for AI practitioners, particularly AI engineers and data scientists, interested in leveraging LLMs for accelerating scientific innovation. The proposed metrics and datasets can serve as a foundation for further research in this domain, encouraging the development of new techniques to enhance the novelty and applicability of LLM-generated research ideas. |
Instant Facial Gaussians Translator for Relightable and Interactable Facial Rendering (Read more on arXiv or HuggingFace) | Hongyang Lin, Daluuu, DolphinQiao, Haaribo, dafeiqin | This paper introduces TransGS, a novel method leveraging diffusion transformers to rapidly convert Physically Based Rendering (PBR) facial assets into high-quality, relightable, and interactable 3D Gaussian Splatting (3DGS) representations. This approach bridges the gap between traditional offline and online rendering by enabling real-time performance (5 seconds generation time) with comparable visual quality to offline techniques. Key innovations include the GauFace representation, optimized for efficient rendering and animation of facial assets, and a novel Pixel Aligned Sampling scheme for constrained, generative-friendly Gaussian distribution. This work offers AI engineers and data scientists a powerful tool for creating dynamic and interactive digital avatars across various platforms, including PCs, mobile devices, and VR headsets. |
MVLLaVA: An Intelligent Agent for Unified and Flexible Novel View Synthesis (Read more on arXiv or HuggingFace) | Ke Lu, Guohong Hu, Xing Lan, Jian Xue, Hanyu Jiang | This paper introduces MVLLaVA, a novel intelligent agent for synthesizing novel views by integrating multiple multi-view diffusion models with a large multimodal model, LLaVA. The key innovation lies in the design of task-specific instruction templates that enable MVLLaVA to handle a wide range of user instructions, including single images, captions, and specific viewpoint changes. Experimental results demonstrate that MVLLaVA achieves state-of-the-art performance in accurately recognizing and executing novel view synthesis tasks from diverse input modalities. This work holds significant relevance for AI practitioners, especially those interested in 3D content creation, as it offers a robust and versatile solution for generating consistent multi-view images from flexible user inputs. |
Self-Harmonized Chain of Thought (Read more on arXiv or HuggingFace) | Wei Lu, Ziqi Jin | This research paper, "Self-Harmonized Chain of Thought" by Wei Lu and Ziqi Jin, proposes a novel method called ECHO to improve chain-of-thought prompting in large language models. ECHO enhances the quality of demonstrations in the chain-of-thought process by unifying their diversity, leading to a more coherent and effective reasoning pattern. The method outperforms existing techniques, matching the performance of Few-shot-CoT but without requiring manual effort. ECHO's ability to automatically generate high-quality demonstrations makes it a valuable tool for practitioners, such as AI engineers and data scientists, who aim to improve the reasoning capabilities of large language models for various downstream applications. |
ProteinBench: A Holistic Evaluation of Protein Foundation Models (Read more on arXiv or HuggingFace) | Dongyu Xue, Zaixiang Zheng, Fei Ye, thughost, zhouxiangxin | The research paper introduces ProteinBench, a comprehensive evaluation framework designed to assess the capabilities of protein foundation models. ProteinBench comprises a taxonomy of generative tasks in protein science, a multi-metric evaluation approach assessing quality, novelty, diversity, and robustness, and in-depth analyses from various user perspectives. The evaluation reveals that language models excel in capturing natural evolutionary distributions, while structure-based models demonstrate greater robustness in de novo protein design. Additionally, current conformation prediction models show promise but still lag behind classic molecular dynamics simulations in accurately capturing protein dynamics. These findings provide valuable insights for AI engineers and data scientists working with protein foundation models, guiding model selection based on specific design objectives and highlighting areas requiring further development. |
VMAS: Video-to-Music Generation via Semantic Alignment in Web Music Videos (Read more on arXiv or HuggingFace) | Heng Wang, Linjie Yang, Yu Tian, Yan-Bo Lin, gberta | This paper introduces VMAS, a novel framework for generating background music from video input. VMAS leverages a generative video-music Transformer trained on DISCO-MV, a newly curated dataset of 2.2 million video-music pairs sourced from the Web, which is significantly larger than prior datasets used for this task. The authors propose a video-music alignment scheme, comprising contrastive video-music matching and video-beat alignment, to ensure generated music aligns with high and low-level visual cues. Experimental results demonstrate that VMAS outperforms existing methods in various music generation metrics, including human evaluation. This work provides AI practitioners, particularly those interested in generative AI and multimedia applications, with a new framework and dataset for developing robust and high-quality video-to-music generation systems. |
Generative Hierarchical Materials Search (Read more on arXiv or HuggingFace) | Simon Batzner, Sherry Yang, IgorM, danilor, RickWork | The authors propose Generative Hierarchical Materials Search (GenMS), a novel approach for generating novel crystal structures from high-level language instructions. GenMS leverages a hierarchical, multi-modal tree search algorithm that combines a large language model, a diffusion model with a compact crystal representation, and a graph neural network for property prediction. Experiments demonstrate that GenMS outperforms baseline methods in generating unique, valid, and potentially stable crystal structures that satisfy user-specified requirements, achieving a high DFT convergence rate and generating structures with lower formation energy. This framework has significant implications for AI practitioners in materials science, enabling them to efficiently explore a vast design space and accelerate the discovery of novel materials with desired properties through intuitive language-based interfaces. |
Title | Authors | Summary |
---|---|---|
INTRA: Interaction Relationship-aware Weakly Supervised Affordance Grounding (Read more on arXiv or HuggingFace) | Se Young Chun, Agorium, jeeit17 | This research paper introduces INTRA, a novel weakly-supervised affordance grounding framework that leverages representation learning and interaction relationship-guided contrastive learning. Unlike previous approaches relying on paired exocentric and egocentric images, INTRA utilizes only exocentric images and incorporates large language models (LLMs) to understand the complex relationship between interactions. INTRA outperforms prior arts on multiple datasets, including AGD20K, IIT-AFF, CAD, and UMD, demonstrating its superior performance and domain scalability. AI practitioners, such as AI engineers and data scientists, can benefit from INTRA's ability to ground affordances for novel objects and interactions, potentially leading to improved robot manipulation and scene understanding in diverse environments. The method's ability to leverage LLMs for enhanced linguistic understanding of interactions offers a new direction for affordance grounding research. |
LLaMA-Omni: Seamless Speech Interaction with Large Language Models (Read more on arXiv or HuggingFace) | zhangshaolei, Paulmzr, zysgdd, guoshoutao, poeroz | This research paper introduces LLaMA-Omni, a novel model architecture for low-latency, high-quality speech interaction with Large Language Models (LLMs). LLaMA-Omni leverages a speech encoder, a speech adapter, an LLM, and a streaming speech decoder to directly process speech instructions and generate text and speech responses with minimal latency. The researchers also created a new speech instruction dataset, InstructS2S-200K, to train and evaluate the model. Experimental results demonstrate that LLaMA-Omni outperforms existing speech-language models in terms of content and style while achieving a low response latency of 226ms. This work is particularly relevant to AI practitioners working on speech-based applications, such as conversational AI and virtual assistants, as it offers an efficient and effective solution for building seamless speech interfaces powered by LLMs. |
SongCreator: Lyrics-based Universal Song Generation (Read more on arXiv or HuggingFace) | zy001, kangshiyin, jingchengwu, GK50, maxingaussian | The paper proposes SongCreator, a novel lyrics-based universal song generation system capable of generating high-quality songs with both vocals and accompaniment. The system utilizes a dual-sequence language model (DSLM) with a dynamic bidirectional cross-attention module to capture the interplay between vocal and accompaniment sequences. This architecture, trained using a multi-task learning strategy, enables SongCreator to perform various song generation tasks, including lyrics-to-song, vocals-to-song, and song editing, surpassing previous state-of-the-art methods in several tasks. The authors highlight the potential of SongCreator to become a powerful tool for content creators and musicians, lowering the barrier of entry for novices while streamlining the workflow for experienced producers. However, they acknowledge the potential risks associated with replicating voices and emphasize the need for responsible development, choosing not to release the fully trained models. |
Draw an Audio: Leveraging Multi-Instruction for Video-to-Audio Synthesis (Read more on arXiv or HuggingFace) | Pengfei Gao, Xing Nie, Binjie Mao, MarkWang, YannQi | This research paper introduces Draw an Audio, a novel framework for video-to-audio synthesis that utilizes multi-instruction control to address limitations in content consistency, temporal synchronization, and loudness control observed in prior art. The authors leverage masked attention and time-loudness modules to enable granular control over audio generation guided by user-provided masks and loudness signals. Experimental validation on AudioCaps and VGGSound-Caption datasets demonstrates Draw an Audio's superior performance in generating high-fidelity audio synchronized with video content. This research is highly relevant to practitioners, such as AI engineers and data scientists, working on applications requiring realistic and controllable sound generation from video data, including foley design, video editing, and multimodal content creation. |
SaRA: High-Efficient Diffusion Model Fine-tuning with Progressive Sparse Low-Rank Adaptation (Read more on arXiv or HuggingFace) | Yabiao Wang, Ran Yi, Jiangning Zhang, Teng Hu, hongruihuang | This research paper introduces SaRA, a novel parameter-efficient fine-tuning technique designed to enhance the capabilities of pre-trained diffusion models for downstream tasks. The core of SaRA lies in selectively fine-tuning a subset of parameters with the smallest absolute values in the pre-trained model, exploiting their potential effectiveness. To mitigate overfitting due to the high representation ability of sparse matrices, SaRA employs a nuclear-norm-based low-rank loss, constraining the rank of learned sparse matrices. Furthermore, a progressive parameter adjustment strategy is introduced to enhance the utilization of initially ineffective parameters. Experimental results across various tasks, including backbone fine-tuning, downstream dataset fine-tuning, image customization, and controllable video generation, demonstrate that SaRA achieves superior performance compared to state-of-the-art parameter efficient fine-tuning methods, while effectively preserving the model's prior knowledge. This method is particularly relevant to AI practitioners as it provides an efficient and effective way to adapt pre-trained diffusion models for specific tasks, offering both enhanced performance and reduced memory footprint during training. |
Title | Authors | Summary |
---|---|---|
Towards a Unified View of Preference Learning for Large Language Models: A Survey (Read more on arXiv or HuggingFace) | hhhllan, ZefanCai, instro, songff, KbsdJames | This survey paper presents a unified framework for preference learning in large language models (LLMs), categorizing techniques based on data source, feedback mechanism, and optimization algorithm. The authors argue that existing categorizations based on reinforcement learning (RL) versus supervised fine-tuning (SFT) or online versus offline settings create artificial barriers, as core objectives are similar and algorithms can be decoupled from data acquisition strategies. The paper further details prevalent pointwise, pairwise, and listwise preference optimization methods, alongside training-free alignment approaches, highlighting their loss function designs. This comprehensive overview provides valuable insights for AI engineers and data scientists, facilitating understanding of the relationships between various alignment techniques and potentially enabling more effective development of human-aligned LLMs. |
MMEvol: Empowering Multimodal Large Language Models with Evol-Instruct (Read more on arXiv or HuggingFace) | Wa2erGo, iiiiwis, tnlin, lzchen2001, haonanzhang | MMEvol, a novel framework for evolving image-text instruction data, is introduced to enhance the capabilities of Multimodal Large Language Models (MLLMs). The authors identify data quality and diversity limitations in existing MLLM datasets and propose an iterative evolution process encompassing fine-grained perceptual, cognitive reasoning, and interactive evolutions, coupled with instruction elimination to filter inadequate samples. Experiments demonstrate that their MLLM trained on evolved data significantly surpasses open-source alternatives across 13 vision-language benchmarks. This work holds significant implications for AI practitioners, highlighting the importance of high-quality instruction data for developing robust MLLMs with improved reasoning, instruction following, and reduced hallucination susceptibility. |
OneGen: Efficient One-Pass Unified Generation and Retrieval for LLMs (Read more on arXiv or HuggingFace) | huajunsir, square0083, xiangchen-dvi, sunmengshu, MikeDean | The research paper introduces OneGen, a novel framework designed to unify generation and retrieval tasks within a single Large Language Model (LLM). OneGen bridges the traditionally separate training paradigms of generation and retrieval by leveraging retrieval tokens generated autoregressively, enabling a single LLM to handle both tasks concurrently. Empirical evaluations across single-hop and multi-hop question answering, and entity linking demonstrate that OneGen outperforms pipeline solutions and, where applicable, prior single-model methods like GRIT. Moreover, the paper highlights OneGen's efficiency in training and inference, requiring less data and achieving faster inference speeds, particularly with increased retrieval frequency. Practitioners, including AI engineers and data scientists, can benefit from OneGen's simplified deployment, reduced computational costs, and improved efficiency, particularly in applications demanding seamless integration of retrieval and generation within LLMs. |
MemoRAG: Moving towards Next-Gen RAG Via Memory-Inspired Knowledge Discovery (Read more on arXiv or HuggingFace) | Zhicheng Dou, Kelong Mao, Zheng Liu, Hongjin Qian, namespace-Pt | This research paper introduces MemoRAG, a novel Retrieval-Augmented Generation (RAG) system designed to address challenges related to complex tasks involving extensive input contexts. MemoRAG leverages a memory module to create a global memory of the entire database and uses it to generate contextually relevant clues for accurate answer retrieval. Experimental results demonstrate that MemoRAG surpasses existing RAG systems and other baselines across a range of tasks, including knowledge-intensive QA and summarization. MemoRAG's ability to effectively manage complex and lengthy texts, such as financial reports and legal contracts, by handling contexts of up to one million tokens and resolving intricate queries with high accuracy, makes it particularly valuable for AI practitioners working with large-scale text processing and retrieval applications. |
Benchmarking Chinese Knowledge Rectification in Large Language Models (Read more on arXiv or HuggingFace) | huajunsir, Ningyu, cowTodd, JizhanFang, TianheLu | The authors introduce CKnowEdit, a novel dataset designed for evaluating and improving Chinese knowledge rectification in Large Language Models (LLMs). This dataset addresses a significant gap in the field, as prior knowledge editing research has primarily focused on English text and often fails to capture the nuances of the Chinese language. Evaluations of existing knowledge editing methods on CKnowEdit reveal limitations in their ability to accurately and consistently rectify Chinese knowledge, highlighting the need for more sophisticated techniques. This work has significant implications for practitioners, as it provides a valuable resource for developing and evaluating Chinese-specific knowledge editing tools, ultimately leading to more reliable and culturally-sensitive LLMs for Chinese language applications. |
UniDet3D: Multi-dataset Indoor 3D Object Detection (Read more on arXiv or HuggingFace) | Anna Vorontsova, ktoshik, filapro, barracuda049, maksimko123 | This paper introduces UniDet3D, a novel 3D object detection model trained on a mixture of indoor datasets to address the limitations of existing models trained on individual, insufficiently diverse datasets. UniDet3D leverages a unified label space across datasets and employs a simple yet effective architecture based on a vanilla transformer encoder without positional encoding or cross-attention. The key innovation of UniDet3D lies in its ability to generalize to various indoor environments and achieve state-of-the-art results across six indoor benchmarks, outperforming existing methods in both accuracy and efficiency. This advancement is particularly relevant to practitioners, such as AI engineers and data scientists, as UniDet3D offers a robust and customizable solution for indoor 3D object detection that can be readily adapted to various applications and computational constraints. |
POINTS: Improving Your Vision-language Model with Affordable Strategies (Read more on arXiv or HuggingFace) | Xiao Zhou, Le Tian, Zeon-Zhuang, scyr, YuanLiuuuuuu | The authors introduce POINTS, a novel vision-language model that achieves state-of-the-art performance while utilizing a relatively small pre-training dataset and a publicly available visual instruction tuning dataset. Key innovations include the use of perplexity to filter the pre-training dataset, retaining only the top 20% of data with the lowest perplexity values, leading to significant performance improvements. Additionally, the authors propose "greedy model soup," a technique that averages the weights of models fine-tuned with varying dataset quantities and diversities, further enhancing performance. POINTS' effectiveness, coupled with its reliance on publicly available datasets, makes it a valuable tool for practitioners, including AI engineers and data scientists, seeking to develop and deploy robust vision-language models with constrained resources. The authors' meticulous ablation studies and detailed analysis of each component contribute to the model's transparency and ease of adoption. |
Open Language Data Initiative: Advancing Low-Resource Machine Translation for Karakalpak (Read more on arXiv or HuggingFace) | murodbek, mukhammadsaid | This research presents advancements in low-resource machine translation, specifically focusing on the Karakalpak language. The authors introduce a new FLORES+ devtest dataset translated into Karakalpak and develop parallel corpora for Uzbek-Karakalpak, Russian-Karakalpak, and English-Karakalpak language pairs. Utilizing these resources, they train and evaluate several neural machine translation models, demonstrating the effectiveness of incorporating data from related Turkic languages. The resulting models and datasets provide valuable resources for AI practitioners interested in developing NLP applications for Karakalpak and similar low-resource languages. |
Paper Copilot: A Self-Evolving and Efficient LLM System for Personalized Academic Assistance (Read more on arXiv or HuggingFace) | Ge Liu, Pengrui Han, youjiaxuan, taofeng, cmulgy | This paper introduces Paper Copilot, a large language model (LLM) system designed to provide personalized and efficient academic research assistance. Paper Copilot employs thought retrieval, user profile generation, and high-performance optimization techniques to deliver its services. The system demonstrates a significant reduction in time required for information retrieval (69.92%) compared to traditional methods. Moreover, user feedback indicates a strong preference for the self-evolving capabilities of the system, highlighting its potential as a valuable tool for researchers. This is highly relevant to AI practitioners, particularly those involved in natural language processing, as it showcases the application of advanced techniques like thought retrieval and efficient deployment strategies for real-world use cases in information retrieval and knowledge management. |
Insights from Benchmarking Frontier Language Models on Web App Code Generation (Read more on arXiv or HuggingFace) | Yi Cui | This research paper presents an analysis of 16 large language models (LLMs) evaluated on WebApp1K, a benchmark designed to assess code generation capabilities for web applications. The key finding suggests that despite exhibiting similar knowledge levels, the performance difference among models stems from the varying frequency of errors. Notably, the study reveals that generating correct code exhibits higher complexity compared to producing incorrect code. Moreover, prompt engineering, while effective in specific scenarios, shows limited impact in overall error reduction. These insights are crucial for practitioners, particularly AI engineers and data scientists, highlighting the importance of prioritizing model reliability and minimizing mistakes during the development of coding LLMs. |
Evaluating Multiview Object Consistency in Humans and Image Models (Read more on arXiv or HuggingFace) | Kanwisher, tgoconnell, Emma02, stephaniefu, tzler | The research introduces MOCHI, a novel benchmark for evaluating the alignment between human perception and computer vision models on 3D shape inference tasks. Using a "same/different" object identification task with varying viewpoints, the study reveals that while humans significantly outperform models like DINOv2, CLIP, and MAE, a correlation exists between human and model performance. Further analysis of human reaction time and gaze patterns suggests that humans achieve superior performance by dedicating more processing time and employing flexible attention mechanisms, which current models lack. This benchmark provides crucial insights for AI practitioners, highlighting the need for models to incorporate mechanisms for dynamic processing and flexible attention to achieve more human-like 3D shape understanding. |
Title | Authors | Summary |
---|---|---|
How Do Your Code LLMs Perform? Empowering Code Instruction Tuning with High-Quality Data (Read more on arXiv or HuggingFace) | mdizhang, bitwjg, dongguanting, fudayuan, banksy235 | The authors propose XCoder, a family of large language models (LLMs) fine-tuned from LLaMA3 using a novel data selection strategy for code instruction tuning. Recognizing the limitations of existing code instruction datasets, often plagued by data leakage and inconsistent quality, the authors introduce a three-pronged data assessment approach. This approach prioritizes instruction complexity, response quality (evaluated through a unit test model), and instruction diversity to curate a high-quality training dataset. Experimental results demonstrate that XCoder surpasses or matches state-of-the-art open-source code LLMs on benchmarks like HumanEval and LiveCodeBench, even with significantly fewer training samples. This research offers AI practitioners valuable insights into constructing and leveraging high-quality code instruction datasets for enhanced code generation and understanding. |
Configurable Foundation Models: Building LLMs from a Modular Perspective (Read more on arXiv or HuggingFace) | fengyao1909, thuzhizhi, Raincleared, ZhengyanZhang, xcjthu | This research paper proposes the novel concept of "configurable foundation models," which are built upon modular components termed "bricks," offering a modular perspective on large language model (LLM) construction and deployment. The paper categorizes bricks as either "emergent," arising from the pre-training process, or "customized," manually designed for specific post-training tasks, and outlines four key brick-oriented operations: routing and retrieval, combination, updating, and growing. Empirical analysis on decoder-only models, Llama-3-8B-Instruct and Mistral-7B-Instruct-v0.3, reveals sparse neuron activation, functionality specialization, and potential for modular partitioning. These findings hold significant implications for AI practitioners, suggesting that LLM efficiency and scalability can be improved by leveraging modularity through selective brick activation, facilitating continual learning, and enabling distributed computation. |
Open-MAGVIT2: An Open-Source Project Toward Democratizing Auto-regressive Visual Generation (Read more on arXiv or HuggingFace) | Yujiu Yang, yshan2u, yxgeee, shifengyuan, RobertLuo1 | This research paper introduces Open-MAGVIT2, an open-source family of auto-regressive image generation models. The authors replicate Google's MAGVIT-v2 tokenizer, achieving state-of-the-art reconstruction performance on ImageNet by utilizing a super-large codebook with lookup-free quantization. To address the challenges of auto-regressive prediction with such a large vocabulary, they propose "next sub-token prediction" with asymmetric token factorization, improving generation quality. Open-MAGVIT2 demonstrates superior performance in both visual reconstruction and class-conditional generation using a plain auto-regressive approach. The release of these models and code provides AI practitioners with a powerful toolset for advancing auto-regressive visual generation, particularly within unified multimodal frameworks. |
Qihoo-T2X: An Efficiency-Focused Diffusion Transformer via Proxy Tokens for Text-to-Any-Task (Read more on arXiv or HuggingFace) | Yuhui Yin, Dawei Leng, Jiasong Feng, Jing Wang, AoMa | This research paper introduces PT-DiT, a novel Proxy Token Diffusion Transformer designed for computationally efficient text-to-image and text-to-video generation tasks. PT-DiT leverages the redundancy in visual information by utilizing a sparse proxy token attention mechanism, wherein a select set of representative tokens, sampled based on spatio-temporal priors, model global visual relationships. To further enhance texture detail, the model incorporates window attention and shift-window attention modules. Experimental results demonstrate that PT-DiT achieves performance comparable to state-of-the-art methods while significantly reducing computational complexity and memory usage, making it particularly beneficial for high-resolution image and video generation. This efficiency gain makes PT-DiT and the Qihoo-T2X family of models valuable tools for AI practitioners, particularly AI engineers and data scientists working on resource-intensive generative tasks. |
GST: Precise 3D Human Body from a Single Image with Gaussian Splatting Transformers (Read more on arXiv or HuggingFace) | Christian Rupprecht, Joao F. Henriques, Lorenza Prospero, ajhamdi | The paper introduces Gaussian Splatting Transformers (GST), a novel method for reconstructing 3D human models from monocular images using Gaussian Splatting representations. GST leverages a transformer architecture trained solely on multi-view supervision, eliminating the need for expensive 3D annotations or diffusion priors. Experiments demonstrate that GST achieves competitive performance on 3D human pose estimation and novel view synthesis tasks. This efficient and accurate approach holds significant potential for practitioners in various domains, including virtual reality, augmented reality, and human-computer interaction, by enabling real-time 3D human modeling from readily available data sources. |
Title | Authors | Summary | Link |
---|---|---|---|
Attention Heads of Large Language Models: A Survey | Yezhaohui Wang, jimi888, Ki-Seki, saythe17, fan2goa1 | This paper surveys recent research on attention heads in Large Language Models (LLMs) and their role in reasoning processes. The authors propose a novel four-stage framework, inspired by human cognition, to categorize attention head functions: Knowledge Recalling, In-Context Identification, Latent Reasoning, and Expression Preparation. Furthermore, the paper summarizes experimental methodologies for investigating attention head mechanisms, categorized as Modeling-Free and Modeling-Required approaches. This survey provides AI practitioners with a valuable resource for understanding the inner workings of LLMs, potentially enabling them to design more interpretable and effective models, and develop novel techniques for LLM analysis and improvement. | Read more on HF |
FuzzCoder: Byte-level Fuzzing Test via Large Language Model | Challenging666, Pony12, zhangysk, ngl567, WeiSumi | This paper introduces FUZZCODER, a novel fuzzing framework leveraging fine-tuned large language models (LLMs) for enhanced vulnerability detection in software. FUZZCODER employs a sequence-to-sequence paradigm, trained on a purpose-built "Fuzz-Instruct" dataset, to predict vulnerable byte locations and effective mutation strategies within input files. Evaluations on the custom Fuzz-Bench benchmark demonstrate FUZZCODER's superiority over traditional methods, achieving higher effective proportions of mutation (EPM) and uncovering a greater number of program crashes, indicative of potential vulnerabilities. These findings highlight the potential of LLMs in advancing fuzzing techniques, offering a valuable tool for AI engineers and data scientists involved in software security testing and vulnerability analysis. | Read more on HF |
CDM: A Reliable Metric for Fair and Accurate Formula Recognition Evaluation | conghui, BoZhang, renqiux0302, ouyanglinke, wanderkid | This research paper proposes a novel evaluation metric called Character Detection Matching (CDM) for formula recognition tasks. Addressing the limitations of existing text-based metrics like BLEU, CDM evaluates formula recognition by comparing rendered images of predicted and ground-truth formulas, utilizing visual character matching. Experiments demonstrate that CDM offers a more accurate and fairer assessment of formula recognition models, particularly in scenarios with diverse formula representations. Notably, the study shows that by using CDM for training data selection, comparable model performance can be achieved using only a fraction (less than 20%) of the data. This finding offers valuable insights for practitioners, such as AI engineers and data scientists, enabling more efficient model training and dataset construction in the field of formula recognition. | Read more on HF |
mPLUG-DocOwl2: High-resolution Compressing for OCR-free Multi-page Document Understanding | Liang Zhang, Jingren, hzhwcmhf, xhyandwyy, AnwenHu | mPLUG-DocOwl2 is a novel Multimodal Large Language Model (MLLM) designed for efficient OCR-free multi-page document understanding. The authors introduce a High-resolution DocCompressor module that leverages cross-attention with global visual features to effectively compress high-resolution document images into a fixed number of tokens (324). This approach reduces computational overhead and inference time while maintaining comparable performance to state-of-the-art MLLMs on various document understanding benchmarks. DocOwl2's ability to process high-resolution images and efficiently extract textual information is beneficial for practitioners, such as AI engineers and data scientists, developing applications for multi-page document analysis, question answering, and information retrieval. The reduction in computational resources required for processing high-resolution images makes DocOwl2 particularly relevant for real-world applications. | Read more on HF |
Geometry Image Diffusion: Fast and Data-Efficient Text-to-3D with Image-Based Surface Representation | simondonn, CiaraRowles, SlavaElizarov | This research introduces Geometry Image Diffusion (GIMDiffusion), a novel Text-to-3D framework that leverages geometry images as the 3D representation. By employing a Collaborative Control scheme with a pre-trained Text-to-Image diffusion model, GIMDiffusion generates 3D objects with high fidelity and diversity from text prompts, eliminating the need for complex 3D-aware architectures. Results demonstrate its capability to produce relightable 3D assets efficiently, comparable to existing Text-to-Image methods. GIMDiffusion offers a practical and efficient approach for AI practitioners, particularly AI Engineers and Data Scientists, working in 3D content creation, as it simplifies both model design and training while leveraging existing resources. Furthermore, the generated objects consist of semantically meaningful, separable parts, enhancing their usability and versatility for tasks such as editing and animation. | Read more on HF |
WildVis: Open Source Visualizer for Million-Scale Chat Logs in the Wild | Xiang Ren, Wenting Zhao, yejinchoinka, jmhessel, yuntian-deng | WILDVIS is an open-source interactive tool designed for the exploration and analysis of large-scale conversational datasets, particularly interactions between users and chatbots. The tool employs both filter-based retrieval and embedding-based visualization techniques to enable efficient navigation and pattern discovery within millions of conversations. WILDVIS allows for the application of various filters, including keywords, user demographics, and conversation topics, to refine searches and highlight relevant conversations within an embedding space. For AI engineers and data scientists, WILDVIS offers a valuable resource for understanding user behavior, identifying potential misuse of chatbots, and uncovering insights into conversation dynamics within large datasets. The tool's ability to visualize topic distributions across datasets can be particularly beneficial for researchers studying trends in user-chatbot interactions. | Read more on HF |
From MOOC to MAIC: Reshaping Online Teaching and Learning through LLM-driven Agents | juanli, Lin-23457, zhanxinhao, tsq2000, JovanYu | This paper introduces MAIC (Massive AI-empowered Course), a novel online education paradigm leveraging LLM-driven multi-agent systems to enhance the scalability and adaptivity of online learning. MAIC employs AI agents for course preparation, instruction delivery, and student interaction, aiming to provide personalized learning experiences. Preliminary experimental results demonstrate the effectiveness of MAIC in enhancing script generation quality, promoting student engagement, and improving learning outcomes. These findings hold significant implications for AI practitioners, particularly in the domain of educational technology, by showcasing the potential of LLMs and multi-agent systems in revolutionizing online education. | Read more on HF |
Guide-and-Rescale: Self-Guidance Mechanism for Effective Tuning-Free Real Image Editing | Dmitry Vetrov, Madina Khalmatova, ai-alanov, sashapff, macderru | The paper, "Guide-and-Rescale: Self-Guidance Mechanism for Effective Tuning-Free Real Image Editing", introduces a novel image editing method called Guide-and-Rescale. This method leverages a self-guidance technique within a diffusion model framework to balance high-quality editing with the preservation of the original image structure. The authors achieve this by introducing energy functions, referred to as "guiders," designed to maintain both global layout and local visual characteristics during the editing process. The paper presents a noise rescaling mechanism, ensuring consistent behavior across a diverse range of images, and demonstrates its effectiveness through both qualitative and quantitative analysis on various editing tasks, such as changing object appearance, style transfer, and image manipulation. Practitioners, including AI engineers and data scientists, can utilize this method for real-time, high-fidelity image editing applications without the need for extensive model fine-tuning or computationally expensive inversion processes. | Read more on HF |
FrozenSeg: Harmonizing Frozen Foundation Models for Open-Vocabulary Segmentation | Hongxun Yao, Xi Chen, Xiatian-Zhu, ShengJin, happy0612 | This paper introduces FrozenSeg, a novel open-vocabulary segmentation method that addresses the limitation of existing methods in generating accurate mask proposals for unseen categories. FrozenSeg leverages the strengths of frozen foundation models, specifically CLIP for semantic understanding and SAM for spatial reasoning, via two novel modules: Query Injector and Feature Injector. Experiments demonstrate FrozenSeg's state-of-the-art performance in open-vocabulary semantic, instance, and panoptic segmentation across multiple datasets, with significant improvements over baselines. This method holds promise for AI practitioners seeking to develop segmentation models capable of generalizing to unseen categories and scenarios without extensive retraining. | Read more on HF |
Report Cards: Qualitative Evaluation of Language Models Using Natural Language Summaries | Jimmy Ba, Keiran Paster, Fuyang Cui, spitis, loveblairsky | This paper introduces Report Cards, a novel approach for qualitative assessment of Large Language Models (LLMs), addressing the limitations of purely quantitative benchmarks. Report Cards provide human-interpretable natural language summaries of an LLM's capabilities across specific skills or topics, offering nuanced insights into model behavior. The authors propose an iterative method, PRESS, for generating these report cards and introduce metrics for evaluating their specificity, faithfulness, and interpretability. Experimental results demonstrate that Report Cards can effectively differentiate between models, accurately reflect their capabilities, and provide valuable insights for practitioners like AI engineers and data scientists, who can leverage these summaries for understanding model strengths and weaknesses. This work contributes a valuable tool for holistic and interpretable evaluation of LLMs, moving beyond simplistic quantitative metrics. | Read more on HF |
Title | Authors | Summary | Link |
---|---|---|---|
LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture | Benyou Wang, Chen Zhang, Shunian Chen, Xidong Wang, songdj | The paper introduces LongLLaVA, a novel hybrid multi-modal large language model (MLLM) designed for efficient long-context understanding. By integrating Mamba and Transformer blocks, LongLLaVA effectively handles temporal and spatial dependencies among multiple images, achieving competitive performance on benchmarks like MileBench and Video-MME. Notably, LongLLaVA requires significantly fewer FLOPs compared to other models while demonstrating strong in-context learning capabilities. This efficiency and performance make LongLLaVA a valuable tool for AI practitioners, particularly in applications involving video understanding, high-resolution image processing, and multi-modal agents. | Read more on HF |
Loopy: Taming Audio-Driven Portrait Avatar with Long-Term Motion Dependency | Gaojie Lin, Jiaqi Yang, Chao Liang, tianyumyum, janphu | This paper introduces LOOPY, an end-to-end audio-driven portrait video generation framework that generates realistic talking head videos solely from audio input, eliminating the reliance on spatial motion templates used in previous methods. LOOPY leverages inter- and intra-clip temporal modules to model long-term motion dependencies and an audio-to-motion latents module for effective audio-portrait motion correlation. Experiments on diverse datasets, including CelebV-HQ and RAVDESS, demonstrate LOOPY's superior performance in generating temporally stable, expressive, and high-quality talking head videos, surpassing existing state-of-the-art methods. Practitioners, including AI engineers and data scientists, can utilize LOOPY to develop robust and realistic talking head generation systems for various applications, such as virtual assistants, video conferencing, and entertainment. The removal of spatial constraints and the ability to learn natural motion patterns from audio make LOOPY a significant advancement in audio-driven video synthesis. | Read more on HF |
LongCite: Enabling LLMs to Generate Fine-grained Citations in Long-context QA | LZDQ, Broccolito, davidlvxin, bys0318, NeoZ123 | This research paper introduces LongCite, a system designed to enhance the trustworthiness of Large Language Models (LLMs) by enabling them to provide fine-grained citations within their long-form answers. The authors identify the limitations of current LLMs in providing adequate citations for long-context question answering (LQAC) and propose a novel pipeline called CoF (Coarse to Fine) to automatically construct a large-scale LQAC dataset, LongCite-45k. By fine-tuning existing open-source long-context models on this dataset, they demonstrate significant improvements in citation quality, even surpassing proprietary models like GPT-40. This advancement holds practical significance for AI practitioners, particularly AI engineers and data scientists, by equipping LLMs with enhanced transparency and verifiability, making them more reliable for various applications. | Read more on HF |
MMMU-Pro: A More Robust Multi-discipline Multimodal Understanding Benchmark | btyu, jamessyx, yuanshengni, aaabiao, yuexiang96 | The research paper introduces MMMU-Pro, a novel benchmark designed to rigorously evaluate the multimodal reasoning capabilities of large language models. MMMU-Pro addresses limitations in existing benchmarks by incorporating three key enhancements: filtering out questions solvable by text-only models, augmenting candidate options to mitigate guessing, and introducing a vision-only input setting to assess genuine multimodal understanding. Experimental results demonstrate significant performance drops across a variety of state-of-the-art multimodal models, indicating that MMMU-Pro poses a more realistic challenge. This benchmark provides AI practitioners, including AI engineers and data scientists, with a valuable tool for assessing and improving the robustness and reliability of multimodal systems, particularly in real-world scenarios where text and images are intertwined. | Read more on HF |
Arctic-SnowCoder: Demystifying High-Quality Data in Code Pretraining | rajhans-snowflake, stovecat, yuxiang630 | Arctic-SnowCoder-1.3B is a new, high-performing code language model trained on 555B tokens utilizing a novel three-step methodology of progressively refined data quality. This model outperforms StarCoderBase-3B on all benchmarks despite being trained with significantly less data and achieves state-of-the-art results on BigCodeBench compared to similarly sized models. The authors demonstrate that aligning training data distribution with downstream tasks is crucial for effective code pretraining and significantly enhances model performance. These findings and the model itself will be of significant interest to practitioners, especially AI engineers who develop code generation and program synthesis applications. | Read more on HF |
Political DEBATE: Efficient Zero-shot and Few-shot Classifiers for Political Text | Rachel X. Peng, Ryan Yank Wang, Michael Burnham, kaylakahn | This paper introduces Political DEBATE, a pair of open-source language models specifically designed for efficient zero-shot and few-shot classification of political text. Trained on the novel PolNLI dataset, comprising over 200,000 political documents and 852 unique hypotheses, the models exhibit superior performance compared to existing open-source alternatives across tasks such as stance detection, topic classification, hate-speech identification, and event extraction. The authors demonstrate that with minimal few-shot training (10-25 documents), Political DEBATE achieves comparable or even better accuracy than supervised classifiers and resource-intensive generative LLMs. The availability of these efficient and open-source models presents a valuable resource for practitioners in political science and related fields, enabling accessible and reproducible text analysis. | Read more on HF |
FastVoiceGrad: One-step Diffusion-Based Voice Conversion with Adversarial Conditional Diffusion Distillation | Yuto Kondo, Hirokazu Kameoka, Takuhiro Kaneko, ououo | This research introduces FastVoiceGrad, a novel one-step diffusion-based voice conversion (VC) model that addresses the slow inference limitation of multi-step diffusion-based VC methods. FastVoiceGrad leverages adversarial conditional diffusion distillation (ACDD), which distills knowledge from a pretrained multi-step teacher diffusion model into a one-step student model using adversarial loss and score distillation loss. Experimental results demonstrate that FastVoiceGrad achieves comparable performance to multi-step models while significantly reducing computational cost, achieving a real-time factor of 0.060 for mel-spectrogram conversion. This development provides AI practitioners, particularly those working on VC applications, a faster and computationally efficient alternative for real-time and resource-constrained scenarios. | Read more on HF |
Affordance-based Robot Manipulation with Flow Matching | Michael Gienger, Fanzhri | This research paper introduces a novel framework for robot manipulation that leverages prompt tuning and flow matching. The authors propose a parameter-efficient prompt tuning method to adapt pre-trained vision models for affordance learning conditioned on language instructions. They then introduce a flow matching policy, a generative approach that learns to transform random waypoints into desired robot trajectories guided by visual affordances. Experimental results on a constructed real-world dataset of Activities of Daily Living demonstrate that the proposed approach achieves competitive performance in both affordance learning and trajectory generation compared to existing methods. This work presents a promising direction for AI practitioners working on robot manipulation, particularly in scenarios where data efficiency and generalization to multi-task settings are crucial. The integration of prompt tuning facilitates efficient adaptation of large pre-trained models, while the flow matching policy offers a stable and effective approach for generating robot trajectories from visual affordances. | Read more on HF |
Title | Authors | Summary | Link |
---|---|---|---|
Kvasir-VQA: A Text-Image Pair GI Tract Dataset | Andrea Storås, vlbthambawita, stevenah, cise-midoglu, SushantGautam | The paper introduces Kvasir-VQA, an extended dataset derived from HyperKvasir and Kvasir-Instrument datasets, augmented with question-and-answer annotations to facilitate advanced machine learning tasks in GI diagnostics. The dataset comprises 6,500 annotated images spanning various GI tract conditions and surgical instruments, and it supports multiple question types including yes/no, choice, location, and numerical count. Preliminary experiments demonstrate the dataset's effectiveness in training models for image captioning, VQA, and synthetic image generation. The dataset is designed to bridge the gap between medical image analysis and practical diagnostic tools, ultimately aiming to improve patient outcomes and diagnostic precision. This dataset can be of immense value to AI engineers and data scientists looking to develop robust and accurate AI models for medical image analysis and diagnostics in the GI tract. | Read more on HF |
OLMoE: Open Mixture-of-Experts Language Models | sewon, jacobmorrison, dirkgr, soldni, Muennighoff | The paper introduces OLMOE, a fully open-source, state-of-the-art Mixture-of-Experts (MoE) language model. This model outperforms other available models with similar active parameters, even surpassing larger models like Llama2-13B-Chat and DeepSeekMoE-16B. The authors present a comprehensive analysis of MoE training and routing, demonstrating how it achieves high specialization and outperforms dense language models on various benchmarks. All aspects of OLMOE are open-sourced, including model weights, training data, code, and logs. This work is highly relevant to practitioners by providing a cost-effective, open-source, high-performing language model for research and development. Moreover, the detailed analysis of MoE design choices provides valuable insights for AI engineers and data scientists working with MoE models. | Read more on HF |
LongRecipe: Recipe for Efficient Long Context Generalization in Large Languge Models | Laziobird, anhtuanluu36, sheryc, yuliang03181, zhiyuanhucs | This research paper proposes LongRecipe, an efficient training strategy for extending the context window of Large Language Models (LLMs). LongRecipe leverages a novel approach called Impactful Token Analysis to identify key tokens that significantly influence long-text training, enabling the model to learn from shorter text segments while maintaining training efficiency. It also introduces a Position Index Transformation technique to simulate long sequences without needing actual long texts. LongRecipe achieves significant improvements in long-context generalization, demonstrating that it can effectively utilize long sequences while requiring only 30% of the target context window size and reducing computational training resources by over 85% compared to full-sequence training. Moreover, LongRecipe preserves the original LLM's capabilities in general tasks, making it a balanced approach for enhancing both long-range dependency understanding and foundational model performance. This research contributes to the field of AI by offering practitioners a more efficient and effective method for extending the context window of LLMs, enabling them to handle more complex and challenging tasks that require long-context understanding. | Read more on HF |
FLUX that Plays Music | huangjunshi, Changqian, MichaelFan, onion | This paper proposes FluxMusic, an extension of diffusion-based rectified flow Transformers for text-to-music generation. It leverages a latent VAE space of mel-spectrograms, incorporating double and single stream blocks to model text and music. The authors demonstrate that FluxMusic outperforms existing methods across multiple metrics, including FAD, IS, and CLAP, demonstrating its scalability and effectiveness. Furthermore, the authors evaluate the impact of model size, rectified flow training, and other hyperparameters on the generative performance. FluxMusic provides a promising avenue for researchers and practitioners in text-to-music generation, offering improved accuracy and scalability compared to previous approaches. | Read more on HF |
DepthCrafter: Generating Consistent Long Depth Sequences for Open-world Videos | vinthony, walkingshadow, Xiaoyu521, xiangjun0211, wbhu-tc | DepthCrafter, a novel video-depth estimation method, generates temporally consistent long depth sequences for open-world videos using video diffusion models. Unlike previous approaches, it does not require additional information, such as camera poses or optical flow. DepthCrafter achieves this by training a video-to-depth model from a pre-trained image-to-video diffusion model through a three-stage training strategy. The method is evaluated on multiple datasets, outperforming existing approaches in terms of both quantitative and qualitative metrics, demonstrating its effectiveness in generating high-quality depth sequences. Practitioners, such as AI engineers and data scientists, can leverage DepthCrafter for various downstream applications, including depth-based visual effects and conditional video generation. | Read more on HF |
VideoLLaMB: Long-context Video Understanding with Recurrent Memory Bridges | Yang Liu, zlzheng, cihangxie, ColorfulAI | VideoLLaMB is a new framework that utilizes recurrent memory tokens within bridge layers to encode the entirety of a video sequence, preserving semantic continuity and improving performance across various tasks. The authors introduce a SceneTilling algorithm, which segments videos into independent semantic units. This approach achieves state-of-the-art results across various video QA benchmarks, particularly on longer videos (up to 8x longer) and in the Needle in a Video Haystack (NIAVH) benchmark. VideoLLaMB also enables training-free streaming video captioning and high performance on a single GPU, setting a new foundation for long-form video understanding models. These improvements are particularly relevant to AI practitioners, as they offer a more efficient and effective way to analyze and understand long videos. | Read more on HF |
Diffusion Policy Policy Optimization | Lars L. Ankile, Allen Z. Ren, daihongkai, pulkitag, jlidard | The research paper "Diffusion Policy Policy Optimization" explores a novel algorithm for fine-tuning diffusion-based policies in robot learning tasks using policy gradient methods. The authors demonstrate that their algorithm, DPPO, outperforms existing methods for diffusion-based policy fine-tuning and achieves strong results in both simulation and real-world robot manipulation tasks. The paper also provides insights into the mechanisms behind DPPO's success, highlighting its ability to induce structured exploration, maintain training stability, and enhance policy robustness. DPPO could be relevant to practitioners developing robotic systems by providing a robust and efficient method for fine-tuning diffusion-based policies trained on expert demonstrations. | Read more on HF |
Compositional 3D-aware Video Generation with LLM Director | Anni Tang, bianjiang, leo-guo, deeptimhe, ingzzzz | The paper proposes a novel method for text-to-video generation by explicitly composing concepts in 3D space. The method leverages LLMs to decompose a complex textual prompt into sub-prompts, each describing a specific concept. It then generates 3D representations for each concept using pre-trained expert models. These representations are then composed using priors from multi-modal LLMs and 2D diffusion models. The key results of this method include the generation of high-fidelity videos with diverse motions and the ability to control individual concepts. This research could be relevant to AI engineers and data scientists working on text-to-video generation or who are interested in applying LLMs to 3D graphics or video generation. | Read more on HF |
LinFusion: 1 GPU, 1 Minute, 16K Image | Xinchao Wang, ZhenXiong, whyu, Huage001 | This research paper presents LinFusion, a novel diffusion model for text-to-image generation that achieves linear time and memory complexity with respect to the number of spatial tokens. The authors achieve this by introducing a generalized linear attention mechanism that serves as a low-rank approximation of popular linear token mixers. Extensive experiments on Stable Diffusion models demonstrate that LinFusion achieves performance on par with or superior to the original SD after only modest training, while significantly reducing training time and memory complexity. LinFusion is highly compatible with pre-trained SD components and can generate high-resolution images like 16K resolution. AI practitioners can leverage this novel model to generate high-resolution images with significantly reduced computational resources. | Read more on HF |
ContextCite: Attributing Model Generation to Context | Aleksander Madry, krisgrg, harshay, bencw | This research paper introduces the novel task of context attribution, aiming to identify the specific parts of a context responsible for a language model's generated statement. The paper proposes a scalable and efficient method called CONTEXTCITE, which uses a linear surrogate model to estimate the effect of ablating different parts of the context. The results demonstrate that CONTEXTCITE consistently outperforms existing baselines in identifying relevant sources, particularly for complex tasks like multi-hop question answering and summarization. CONTEXTCITE can be applied by practitioners to verify generated statements, improve response quality by pruning irrelevant context, and detect poisoning attacks in language models. | Read more on HF |
OD-VAE: An Omni-dimensional Video Compressor for Improving Latent Video Diffusion Model | Qian Wang, Bin Zhu, Bin Lin, Zongjian Li, Liuhan Chen | This research proposes an omni-dimensional video compressor (OD-VAE) to improve the efficiency of latent video diffusion models (LVDMs). Unlike conventional VAEs, OD-VAE compresses videos temporally and spatially, leading to more concise latent representations and reduced computational requirements for LVDMs. The researchers demonstrate that OD-VAE can achieve high video reconstruction accuracy while maintaining high compression speed, improving the training efficiency of LVDMs. The results also suggest that OD-VAE can be used to generate longer videos with limited GPU memory, making it a valuable tool for practitioners working with LVDMs. The paper's findings have implications for AI engineers and data scientists developing video generation models, offering a way to improve model efficiency and reduce computational costs. | Read more on HF |
GenAgent: Build Collaborative AI Systems with Automated Workflow Generation -- Case Studies on ComfyUI | Lei Bai, Wanli Ouyang, Di Huang, Xiangyuan Xue, whlzy | This research presents GenAgent, a novel LLM-based framework for automating the creation of complex workflows used in collaborative AI systems. The framework utilizes LLMs to represent workflows as code, enabling greater flexibility and scalability compared to monolithic AI models. GenAgent is evaluated on the ComfyUI platform and demonstrates superior performance to baseline methods in generating both run-level and task-level workflows. The key takeaway for practitioners is that GenAgent's ability to automate workflow generation can significantly improve the efficiency and effectiveness of collaborative AI system development. The framework can be applied to a variety of AI systems and platforms, making it a valuable tool for AI engineers and data scientists. | Read more on HF |
Follow-Your-Canvas: Higher-Resolution Video Outpainting with Extensive Content Generation | Junkun Yuan, Hongfa Wang, Yue Ma, Qihua Chen, cqf | This research paper presents "Follow-Your-Canvas", a new method for higher-resolution video outpainting with extensive content generation. The proposed method addresses the limitations of existing video outpainting methods by using a diffusion-based model and dividing the task across spatial windows. By incorporating relative region embedding and a layout encoder, the authors demonstrate that Follow-Your-Canvas can generate high-quality results with improved spatial-temporal consistency. The model significantly outperforms existing methods in both low-resolution and high-resolution scenarios. AI engineers can use this method for a wide range of applications such as improving user experience by generating videos with larger aspect ratios or enhancing the resolution of existing videos. | Read more on HF |
Density Adaptive Attention-based Speech Network: Enhancing Feature Understanding for Mental Health Disorders | Adrian Kieback, Georgios Ioannides, jsbai-aaron, amanchadha | This research introduces DAAMAudioCNNLSTM and DAAMAudioTransformer, two parameter-efficient and explainable models for audio feature extraction and depression detection. These models leverage the multi-head Density Adaptive Attention Mechanism (DAAM) to dynamically focus on informative speech segments, achieving state-of-the-art performance on the DAIC-WOZ dataset (F1 macro scores of 0.702 and 0.72, respectively). DAAM offers significant explainability benefits by highlighting which features were most informative for diagnosis, making it more transparent and trustworthy. This work could be valuable for practitioners by providing tools for developing more reliable, clinically-useful depression detection models that leverage only audio signals, without relying on supplementary information. | Read more on HF |
Know When to Fuse: Investigating Non-English Hybrid Retrieval in the Legal Domain | Gerasimos Spanakis, Gijs van Dijck, antoinelouis | This paper investigates the performance of hybrid retrieval methods in the legal domain, specifically in the French language. The authors find that fusing domain-general retrieval models consistently improves performance in zero-shot settings, but in-domain training diminishes the benefits of fusion, suggesting a trade-off between computational resources and accuracy. They also propose a percentile-based score normalization method to address misaligned score distributions across different models, which can improve the effectiveness of fusion. The study highlights the importance of carefully considering the choice of retrieval models and fusion techniques in specialized domains, and provides insights that could be valuable for practitioners working on information retrieval in non-English legal domains. | Read more on HF |
The MERIT Dataset: Modelling and Efficiently Rendering Interpretable Transcripts | J. Boal, A. Sanchez-Cuadrado, alvlopez, de-Rodrigo | This research introduces the MERIT Dataset, a multimodal (text, image, and layout) dataset of school reports designed for training visually-rich document understanding (VrDU) models. The dataset, comprising over 400 labels and 33k samples, includes realistic digital and photorealistic documents with controlled bias features (such as gender and name origin), enabling the study of bias in language models. The dataset is publicly available and includes a comprehensive generation pipeline for replication. The authors conduct experiments using state-of-the-art LayoutLM models, demonstrating the dataset's suitability for training and evaluating performance, while showcasing the challenges associated with real-world scenarios. This dataset offers a valuable tool for practitioners in AI engineering and data science, providing a benchmark for developing and evaluating models, especially in the context of bias detection and understanding. | Read more on HF |
Title | Authors | Summary | Link |
---|---|---|---|
VisionTS: Visual Masked Autoencoders Are Free-Lunch Zero-Shot Time Series Forecasters | Xiaoyun Joy Wang, Zhuo Li, twinsken, HALF111, chenmouxiang | This paper introduces VisionTS, a novel zero-shot time series forecasting model that leverages the intrinsic similarities between images and time series. The authors reformulate the forecasting task as an image reconstruction problem, and utilize a pre-trained visual masked autoencoder (MAE) to forecast future time series values without any specific training on time series data. VisionTS achieves comparable or even superior performance to existing text-based and time-series based foundation models in the zero-shot setting, suggesting that visual models could be a free lunch for time series forecasting. This work provides a novel approach for practitioners to build time series forecasting foundation models, particularly in situations where data scarcity or heterogeneity is a challenge. | Read more on HF |
Mini-Omni: Language Models Can Hear, Talk While Thinking in Streaming | Zhifei Xie, gpt-omni | The paper proposes Mini-Omni, an open-source, end-to-end multi-modal large language model (LLM) with real-time speech interaction capabilities. Mini-Omni enables direct audio reasoning via text-instructed speech generation, which utilizes a novel parallel decoding strategy to boost inference speed. The authors introduce the "Any Model Can Talk" framework, which helps to transfer text capabilities of pre-trained models to speech output with minimal degradation, making it valuable for practitioners in the field. They also introduce the VoiceAssistant-400K dataset, specifically designed for speech-output models. Mini-Omni is a significant advancement in human-computer interaction, offering valuable potential for future research. | Read more on HF |
Title | Authors | Summary | Link |
---|---|---|---|
SciLitLLM: How to Adapt LLMs for Scientific Literature Understanding | xumingjun, caixc97, yrshi, Jesse-zjx, Sihangli | This research paper presents SciLitLLM, a specialized large language model (LLM) designed for scientific literature understanding. The model utilizes a hybrid training strategy that combines continual pre-training (CPT) on high-quality scientific corpora and supervised fine-tuning (SFT) with diverse scientific instructions. To address the challenges of constructing high-quality CPT corpora and generating diverse SFT instructions, the authors propose a meticulous pipeline that includes PDF text extraction, content error correction, and quality filtering for CPT. For SFT, they introduce a novel LLM-based instruction synthesis method to generate diverse instructions. SciLitLLM demonstrates promising performance on scientific literature understanding benchmarks, outperforming existing LLMs across various tasks, especially in domains like fundamental science and organic materials. These findings are particularly relevant to AI engineers and data scientists involved in developing LLMs for specialized domains, highlighting the potential of combining CPT and SFT for knowledge injection and instruction-following enhancements. | Read more on HF |
CoRe: Context-Regularized Text Embedding Learning for Text-to-Image Personalization | Jian Yin, BlurBlur, Zhangjunyi, darkcser, FeizeWu | The research paper, CoRe: Context-Regularized Text Embedding Learning for Text-to-Image Personalization, tackles the challenge of balancing identity preservation and text alignment in text-to-image personalization. It introduces a novel method, Context Regularization (CoRe), which improves text embedding learning by regularizing the context tokens surrounding the new concept. CoRe enhances the compatibility of the new concept's text embedding and facilitates a more precise semantic understanding of the prompt. The authors demonstrate that CoRe outperforms several baselines in both identity preservation and text alignment, especially for prompts requiring high visual variability. This research provides valuable insights for practitioners in the field of text-to-image personalization, enabling the generation of high-quality, text-aligned images with improved identity preservation. | Read more on HF |
The VoxCeleb Speaker Recognition Challenge: A Retrospective | dgromero, jungjee, arsha1, joonson, JaesungHuh | The VoxCeleb Speaker Recognition Challenge (VoxSRC) is a series of annual challenges and workshops that ran from 2019 to 2023. This paper is a retrospective analysis of the VoxSRC challenge, covering the challenges’ goals, dataset creation, evaluation metrics, and the progression of research techniques. Key results highlight that the state-of-the-art has steadily improved over the years, with the use of self-supervised pretrained models significantly advancing performance. The paper also provides valuable insights and recommendations for future challenge organizers, such as maintaining a consistent test set, incorporating individual and ensemble model performance, and including a more diverse dataset. Practitioners, particularly those involved in speaker recognition and diarization, will find this retrospective analysis a valuable resource for understanding the evolution of research techniques and identifying future directions in the field. | Read more on HF |
CURLoRA: Stable LLM Continual Fine-Tuning and Catastrophic Forgetting Mitigation | mnoorfawi | The paper introduces CURLoRA, a novel approach to fine-tuning LLMs that leverages CUR matrix decomposition to mitigate catastrophic forgetting and improve computational efficiency. By leveraging inverted probabilities in CUR decomposition, the method effectively limits the growth of trainable parameters, resulting in improved stability and performance across tasks while significantly reducing the number of trainable parameters. This method is particularly useful in continual learning scenarios, where LLMs are trained on a sequence of tasks and need to preserve knowledge from previous tasks. The paper shows that CURLoRA outperforms standard LoRA in mitigating catastrophic forgetting, and demonstrates the effectiveness of this approach across a range of tasks and datasets. This research offers practical solutions for AI engineers and data scientists who are seeking to develop and deploy LLMs in real-world settings, where catastrophic forgetting poses a significant challenge. | Read more on HF |
Jina-ColBERT-v2: A General-Purpose Multilingual Late Interaction Retriever | hanxiao, makram93, jupyterjazz, michael-guenther, bwang0911 | The paper introduces Jina-ColBERT-v2, a novel multilingual dense retriever based on the ColBERT architecture. It presents various improvements to the model architecture and training pipeline, including the adoption of a modified XLM-ROBERTa encoder, pair training with weakly supervised datasets, and triplet training with high-quality multilingual data. Jina-ColBERT-v2 significantly improves performance across a range of English and multilingual retrieval tasks while reducing storage requirements by up to 50%. The authors also highlight the model's robust performance in low-resource languages, making it suitable for practitioners working on multilingual information retrieval tasks. | Read more on HF |
SurveySum: A Dataset for Summarizing Multiple Scientific Articles into a Survey Section | Rodrigo Nogueira, Thales Sales Almeida, thiagolaitz, gubartz, carisio | The research paper introduces a novel dataset called "SurveySum" for summarizing multiple scientific articles into a section of a survey. The authors propose two pipelines for summarizing scientific articles into a survey section, which are evaluated using various metrics. The results of the evaluation highlight the importance of high-quality retrieval stages and the impact of different model configurations on the quality of generated summaries. The paper addresses the lack of domain-specific datasets for summarization, which is crucial for building accurate and robust summarization models. This work provides a valuable resource for researchers and practitioners working in the field of natural language processing, particularly those involved in the development and evaluation of summarization models. | Read more on HF |
Automatic Differential Diagnosis using Transformer-Based Multi-Label Sequence Classification | Lubaba Binte Saber, Mohammad Ashrafuzzaman Khan, AdnanSadi | This research paper explores the use of transformer-based multi-label sequence classification for automated differential diagnosis. The authors propose a method to process tabular patient data into text reports and introduce two data modification modules to improve the robustness of the model. Their experiments using four transformer models demonstrate promising results with over 97% F1 scores and highlight the model's capability to generalize to challenging scenarios. The results suggest that this approach could be a valuable tool for healthcare professionals seeking to identify and prioritize potential diagnoses for patients, especially when dealing with ambiguous symptoms. This research emphasizes the potential of AI-driven tools to assist with complex medical tasks, particularly for practitioners who may need assistance in identifying a wider range of possible diagnoses. | Read more on HF |
UrBench: A Comprehensive Benchmark for Evaluating Large Multimodal Models in Multi-View Urban Scenarios | Tianyi Bai, Junyan Ye, Dairong Chen, Haote Yang, Baichuan Zhou | This research paper introduces UrBench, a comprehensive benchmark for evaluating Large Multimodal Models (LMMs) in complex, multi-view urban scenarios. The benchmark includes 11.6K questions covering 14 distinct tasks across four evaluation dimensions, namely Geo-Localization, Scene Reasoning, Scene Understanding, and Object Understanding. UrBench utilizes a novel cross-view detection-matching algorithm to create high-quality annotations and question generation pipeline that incorporates LMM-based, rule-based, and human-based methods. The authors evaluate 21 LMMs on UrBench and find that current models struggle with multi-view understanding, inconsistent behavior across different views, and fall behind human performance in most tasks, highlighting the significant room for improvement in current models' abilities for human-centric AI applications in urban settings. The paper's findings are relevant to AI practitioners working on LMM development, as it provides valuable insights into the limitations and potential of current models, and serves as a benchmark for future research. | Read more on HF |
InkubaLM: A small language model for low-resource African languages | EricPeter, Jenalea, JessicaOjo, bonadossou, Atnafu | The research paper introduces InkubaLM, a 0.4-billion parameter, multilingual language model designed specifically for low-resource African languages. The model demonstrably outperforms larger language models on specific tasks, notably sentiment analysis in Swahili. The authors release the model and datasets to encourage further research and development in the field. By bridging the language gap and offering an accessible tool, the paper highlights the potential for InkubaLM to be used by AI engineers and data scientists in tasks requiring local language understanding, such as machine translation and sentiment analysis. | Read more on HF |
Large-Scale Multi-omic Biosequence Transformers for Modeling Peptide-Nucleotide Interactions | Eric Oermann, Shivanand P. Lad, Robert J. Steele, Beakal, WeiHua | The authors of this paper, Eric Oermann, Shivanand P. Lad, Robert J. Steele, and Beakal, propose a new method for learning joint representations of protein and nucleotide sequences using a multi-omic transformer architecture. They demonstrate that their model, OmniBioTE, achieves state-of-the-art performance on a variety of tasks related to protein-nucleotide interactions, such as predicting binding affinity and the effects of mutations. They also show that the model can be effectively fine-tuned for single-omics tasks, highlighting its potential for a wider range of applications. This research is relevant to AI engineers, data scientists, and bioinformaticians working in the field of biosequence analysis as it provides a powerful tool for understanding and modeling complex interactions between proteins and nucleic acids. | Read more on HF |
VLM4Bio: A Benchmark Dataset to Evaluate Pretrained Vision-Language Models for Trait Discovery from Biological Images | abhilashneog, harishB97, ksmehrab, arkadaw9, sammarfy | This paper introduces VLM4Bio, a new benchmark dataset that evaluates the zero-shot performance of vision-language models (VLMs) for the task of trait discovery from biological images. VLM4Bio includes ≈469K question-answer pairs based on 30k images of three taxonomic groups: fishes, birds, and butterflies. The paper finds that while VLMs perform well on some tasks (e.g., trait identification), they struggle with other tasks (e.g., counting traits, localizing traits), highlighting the need for further research in this area. The findings of this paper will be useful for AI engineers and data scientists who are developing VLMs for organismal biology applications. The dataset can be used to train and evaluate VLMs for a variety of tasks, including species classification, trait identification, and trait grounding. It also provides insights into the limitations of current VLMs, which can help to guide future research efforts. | Read more on HF |
ClimDetect: A Benchmark Dataset for Climate Change Detection and Attribution | vasudevlal, matthewlyleolson, musashihinck, anahita-b, sungduk | The paper introduces ClimDetect, a benchmark dataset for climate change detection and attribution (D&A) that leverages daily snapshots of climate model simulations for training and evaluating machine learning (ML) models. The dataset standardizes input and target variables, promoting consistency and comparability across studies. The authors demonstrate the applicability of Vision Transformers (ViTs) for climate fingerprinting, a novel approach in this domain. ClimDetect is publicly accessible and provides a benchmark for advancing climate science by improving model evaluations. Practitioners, such as AI Engineers and Data Scientists working in climate modeling, can use ClimDetect to enhance their D&A research efforts and develop robust ML models for understanding and mitigating climate change. | Read more on HF |
Title | Authors | Summary | Link |
---|---|---|---|
Law of Vision Representation in MLLMs | chenfengx, WaterInSea, Ye27, Borise, shijiay | The research paper titled "Law of Vision Representation in MLLMs" proposes a novel theory that links the performance of multimodal large language models (MLLMs) to the combination of cross-modal alignment and correspondence in vision representation. The authors establish a linear correlation between a proposed alignment and correspondence score (AC score) and the MLLM's performance across eight benchmarks. Through this correlation, they propose an "AC policy" to efficiently determine the optimal vision representation, leading to a 99.7% reduction in computational cost compared to traditional methods. The findings are significant for practitioners in AI, particularly data scientists and AI engineers, as they provide an efficient method for selecting the optimal vision representation for MLLMs, thereby streamlining the development process and reducing computational resources. | Read more on HF |
CogVLM2: Visual Language Models for Image and Video Understanding | ShiyuHuang, LiquidAmmonia, qingsonglv, iyuge2, wenyi | The paper introduces CogVLM2, a new family of visual language models (VLMs) for image and video understanding. The authors introduce an improved training recipe based on the visual expert architecture and a high-resolution cross-module, achieving state-of-the-art results on several benchmarks. CogVLM2 family incorporates temporal grounding, a technique for automatically generating video annotations with timestamps, allowing for more precise and detailed understanding of video content. CogVLM2 family represents a significant advancement in visual and language modalities, offering powerful tools for both research and practical applications such as AI engineers, data scientists and researchers. | Read more on HF |
WavTokenizer: an Efficient Acoustic Discrete Codec Tokenizer for Audio Language Modeling | jlking, MingHuiFang, Exgc, ziyue, novateur | The research paper "WavTokenizer: an Efficient Acoustic Discrete Codec Tokenizer for Audio Language Modeling" introduces a novel codec model designed to effectively compress audio signals into a low-dimensional discrete representation. Notably, WavTokenizer achieves a significantly compressed representation of one-second audio with only 75 tokens while maintaining superior subjective reconstruction quality compared to existing acoustic codec models. Moreover, WavTokenizer surpasses state-of-the-art performance in semantic tasks on the ARCH benchmark, highlighting its capability to capture richer semantic information. This work opens a new avenue for effectively compressing audio into a discrete representation, thereby enabling the use of audio data with larger language models. Practitioners, including AI engineers and data scientists, may leverage the presented approach to compress audio data for various applications, such as text-to-speech synthesis, audio generation, and cross-modal retrieval. | Read more on HF |
ReconX: Reconstruct Any Scene from Sparse Views with Video Diffusion Model | duanyueqi, yejunliang23, yikaiw, wenqsun, Liuff23 | This research paper proposes a novel 3D scene reconstruction paradigm called ReconX that utilizes the generative power of video diffusion models to generate more observations from limited sparse views. This allows for higher quality reconstructions, especially in areas not seen in the original input. ReconX utilizes 3D structure guidance and a confidence-aware optimization scheme within the 3D Gaussian Splatting framework to ensure 3D consistency and minimize visual artifacts. Experimental results show that ReconX outperforms existing state-of-the-art methods in terms of both quality and generalizability. This work is particularly relevant for practitioners working in computer vision, especially those who deal with sparse-view 3D reconstruction tasks. The ability to reconstruct high-quality 3D models from a limited number of views could be valuable for applications such as autonomous navigation, virtual reality, and 3D modeling. | Read more on HF |
SAM2Point: Segment Any 3D as Videos in Zero-shot and Promptable Manners | Chengzhuo Tong, Xiangyang Zhu, Renrui Zhang, Chunyuan24, ZiyuG | This research paper introduces SAM2Point, a novel framework that adapts the Segment Anything Model 2 (SAM 2) for 3D segmentation. The method efficiently converts 3D data into a series of multi-directional videos, enabling SAM 2 to perform zero-shot segmentation without requiring any 2D-3D projection or additional training. SAM2Point supports various prompt types (e.g., 3D point, box, and mask) and demonstrates robust generalization across diverse 3D scenarios (e.g., 3D objects, indoor scenes, outdoor scenes, and raw LiDAR). This approach is particularly relevant for practitioners as it provides an efficient and highly generalizable way to perform 3D segmentation using a pre-trained model, effectively mitigating the data scarcity issue prevalent in 3D domains. | Read more on HF |
CSGO: Content-Style Composition in Text-to-Image Generation | hobbyaih, NOVAglow646, syp115, wanghaofan, xingpng | The paper presents CSGO, a novel content-style-stylized image generation framework that utilizes a large-scale dataset, IMAGStyle, to achieve high-quality results in both image-driven and text-driven style transfer. CSGO is trained end-to-end, enabling zero-shot arbitrary style transfer through decoupled content and style feature injection. The key contributions of this work include: (1) a dataset construction pipeline that generates and automatically cleanses stylized data triplets; (2) a unified CSGO framework that leverages independent feature injection modules for content and style features; and (3) a Content Alignment Score (CAS) metric to evaluate the content preservation capabilities of the generated image. This paper is relevant to AI engineers and data scientists working on style transfer, as it offers a robust and efficient framework that can be readily implemented for various applications, such as image editing, art creation, and design. | Read more on HF |
Physics of Language Models: Part 2.2, How to Learn From Mistakes on Grade-School Math Problems | Zeyuan Allen-Zhu, Yuanzhi Li, Zicheng Xu, Tian Ye | The paper investigates whether language models can learn to correct their reasoning mistakes during generation by incorporating “retry data” into the training process. The authors find that training on data that contains erroneous steps immediately followed by their corrections significantly improves the reasoning accuracy of the language model, compared to training on error-free data. They also demonstrate that this approach does not require any modifications to the training process, such as label masking, and that it can be used effectively in conjunction with pre-trained models. These findings suggest that practitioners can directly benefit from incorporating retry data into the training of language models, particularly for tasks that require accurate and robust reasoning. | Read more on HF |
3D Reconstruction with Spatial Memory | Lourdes Agapito, HengyiWang | This research paper, titled "3D Reconstruction with Spatial Memory," presents Spann3R, a novel deep learning-based method for online 3D reconstruction. Spann3R is trained on ordered or unordered image collections without prior knowledge of the scene or camera parameters and directly regresses point maps from images, which is expressed in a common coordinate system. It achieves this by utilizing a spatial memory, which learns to store and access all previously relevant 3D information. By removing the need for optimization-based global alignment, Spann3R facilitates real-time online incremental reconstruction. The authors demonstrate that Spann3R achieves competitive performance compared to prior methods while being significantly faster. For practitioners, this research offers a more efficient and scalable approach for online 3D reconstruction tasks that can be applied in various domains such as autonomous driving, virtual reality, and robotics. | Read more on HF |
StyleRemix: Interpretable Authorship Obfuscation via Distillation and Perturbation of Style Elements | Mitchell Gordon, yejinchoinka, Ximing, hallisky, jrfish | This paper introduces StyleRemix, an interpretable and adaptable authorship obfuscation method that uses fine-grained style elements to rewrite text while preserving content and maintaining fluency. StyleRemix leverages pre-trained LoRA modules to rewrite text along specific style axes, such as formality or length, resulting in more robust obfuscation than prior methods. The authors introduce two new datasets: AuthorMix, a large-scale corpus of 30K texts from 14 authors and four domains, and DISC, a high-quality parallel corpus spanning seven stylistic axes, demonstrating the effectiveness of the model. StyleRemix outperforms prior methods in both automatic and human evaluation. This work has significant implications for practitioners working in anonymous writing, text anonymization, and privacy-preserving text generation. | Read more on HF |
Scaling Up Diffusion and Flow-based XGBoost Models | TaewooKim, JesseCresswell | This paper investigates the engineering challenges and algorithmic improvements for applying XGBoost in diffusion and flow-matching models for tabular data generation. The authors identify and resolve several key implementation issues in prior work, including memory management, data duplication, and parallelization, enabling an efficient and scalable implementation of XGBoost-based generative models. Furthermore, they propose multi-output trees and early stopping as algorithmic improvements. The results show that the proposed method scales to much larger datasets than previously possible and leads to improvements in both model performance and resource efficiency. This work provides valuable insights for practitioners in the field of tabular generative modeling, offering practical guidance for engineering efficient and scalable models based on XGBoost. | Read more on HF |
Meta Flow Matching: Integrating Vector Fields on the Wasserstein Manifold | Leo J. Lee, Mathieu Blanchette, Brandon Amos, Xi Zhang, Lazar Atanackovic | The paper proposes a new method, Meta Flow Matching (MFM), for learning the dynamics of interacting particles. Unlike current flow-based models, which are limited to a single initial population and predefined conditions, MFM can generalize to previously unseen populations by integrating along vector fields on the Wasserstein manifold. The authors demonstrate the ability of MFM to improve prediction of individual treatment responses on a large scale multi-patient single-cell drug screen dataset. This work may be relevant to practitioners in a variety of fields, such as AI engineers, data scientists, and bioinformaticians, who are interested in modeling complex systems with interacting particles. MFM can be used to develop more accurate and personalized treatment regimens for patients with various diseases. | Read more on HF |
For Tasks:
Click tags to check more tools for each tasksFor Jobs:
Alternative AI tools for daily-ai-papers
Similar Open Source Tools
awesome-RLAIF
Reinforcement Learning from AI Feedback (RLAIF) is a concept that describes a type of machine learning approach where **an AI agent learns by receiving feedback or guidance from another AI system**. This concept is closely related to the field of Reinforcement Learning (RL), which is a type of machine learning where an agent learns to make a sequence of decisions in an environment to maximize a cumulative reward. In traditional RL, an agent interacts with an environment and receives feedback in the form of rewards or penalties based on the actions it takes. It learns to improve its decision-making over time to achieve its goals. In the context of Reinforcement Learning from AI Feedback, the AI agent still aims to learn optimal behavior through interactions, but **the feedback comes from another AI system rather than from the environment or human evaluators**. This can be **particularly useful in situations where it may be challenging to define clear reward functions or when it is more efficient to use another AI system to provide guidance**. The feedback from the AI system can take various forms, such as: - **Demonstrations** : The AI system provides demonstrations of desired behavior, and the learning agent tries to imitate these demonstrations. - **Comparison Data** : The AI system ranks or compares different actions taken by the learning agent, helping it to understand which actions are better or worse. - **Reward Shaping** : The AI system provides additional reward signals to guide the learning agent's behavior, supplementing the rewards from the environment. This approach is often used in scenarios where the RL agent needs to learn from **limited human or expert feedback or when the reward signal from the environment is sparse or unclear**. It can also be used to **accelerate the learning process and make RL more sample-efficient**. Reinforcement Learning from AI Feedback is an area of ongoing research and has applications in various domains, including robotics, autonomous vehicles, and game playing, among others.
LLM-Viewer
LLM-Viewer is a tool for visualizing Language and Learning Models (LLMs) and analyzing performance on different hardware platforms. It enables network-wise analysis, considering factors such as peak memory consumption and total inference time cost. With LLM-Viewer, users can gain valuable insights into LLM inference and performance optimization. The tool can be used in a web browser or as a command line interface (CLI) for easy configuration and visualization. The ongoing project aims to enhance features like showing tensor shapes, expanding hardware platform compatibility, and supporting more LLMs with manual model graph configuration.
matchem-llm
A public repository collecting links to state-of-the-art training sets, QA, benchmarks and other evaluations for various ML and LLM applications in materials science and chemistry. It includes datasets related to chemistry, materials, multimodal data, and knowledge graphs in the field. The repository aims to provide resources for training and evaluating machine learning models in the materials science and chemistry domains.
ManipVQA
ManipVQA is a framework that enhances Multimodal Large Language Models (MLLMs) with manipulation-centric knowledge through a Visual Question-Answering (VQA) format. It addresses the deficiency of conventional MLLMs in understanding affordances and physical concepts crucial for manipulation tasks. By infusing robotics-specific knowledge, including tool detection, affordance recognition, and physical concept comprehension, ManipVQA improves the performance of robots in manipulation tasks. The framework involves fine-tuning MLLMs with a curated dataset of interactive objects, enabling robots to understand and execute natural language instructions more effectively.
param
PARAM Benchmarks is a repository of communication and compute micro-benchmarks as well as full workloads for evaluating training and inference platforms. It complements commonly used benchmarks by focusing on AI training with PyTorch based collective benchmarks, GEMM, embedding lookup, linear layer, and DLRM communication patterns. The tool bridges the gap between stand-alone C++ benchmarks and PyTorch/Tensorflow based application benchmarks, providing deep insights into system architecture and framework-level overheads.
Main
This repository contains material related to the new book _Synthetic Data and Generative AI_ by the author, including code for NoGAN, DeepResampling, and NoGAN_Hellinger. NoGAN is a tabular data synthesizer that outperforms GenAI methods in terms of speed and results, utilizing state-of-the-art quality metrics. DeepResampling is a fast NoGAN based on resampling and Bayesian Models with hyperparameter auto-tuning. NoGAN_Hellinger combines NoGAN and DeepResampling with the Hellinger model evaluation metric.
awesome-transformer-nlp
This repository contains a hand-curated list of great machine (deep) learning resources for Natural Language Processing (NLP) with a focus on Generative Pre-trained Transformer (GPT), Bidirectional Encoder Representations from Transformers (BERT), attention mechanism, Transformer architectures/networks, Chatbot, and transfer learning in NLP.
Self-Iterative-Agent-System-for-Complex-Problem-Solving
The Self-Iterative Agent System for Complex Problem Solving is a solution developed for the Alibaba Mathematical Competition (AI Challenge). It involves multiple LLMs engaging in multi-round 'self-questioning' to iteratively refine the problem-solving process and select optimal solutions. The system consists of main and evaluation models, with a process that includes detailed problem-solving steps, feedback loops, and iterative improvements. The approach emphasizes communication and reasoning between sub-agents, knowledge extraction, and the importance of Agent-like architectures in complex tasks. While effective, there is room for improvement in model capabilities and error prevention mechanisms.
LongRoPE
LongRoPE is a method to extend the context window of large language models (LLMs) beyond 2 million tokens. It identifies and exploits non-uniformities in positional embeddings to enable 8x context extension without fine-tuning. The method utilizes a progressive extension strategy with 256k fine-tuning to reach a 2048k context. It adjusts embeddings for shorter contexts to maintain performance within the original window size. LongRoPE has been shown to be effective in maintaining performance across various tasks from 4k to 2048k context lengths.
AirLine
AirLine is a learnable edge-based line detection algorithm designed for various robotic tasks such as scene recognition, 3D reconstruction, and SLAM. It offers a novel approach to extracting line segments directly from edges, enhancing generalization ability for unseen environments. The algorithm balances efficiency and accuracy through a region-grow algorithm and local edge voting scheme for line parameterization. AirLine demonstrates state-of-the-art precision with significant runtime acceleration compared to other learning-based methods, making it ideal for low-power robots.
RecAI
RecAI is a project that explores the integration of Large Language Models (LLMs) into recommender systems, addressing the challenges of interactivity, explainability, and controllability. It aims to bridge the gap between general-purpose LLMs and domain-specific recommender systems, providing a holistic perspective on the practical requirements of LLM4Rec. The project investigates various techniques, including Recommender AI agents, selective knowledge injection, fine-tuning language models, evaluation, and LLMs as model explainers, to create more sophisticated, interactive, and user-centric recommender systems.
WeatherGFT
WeatherGFT is a physics-AI hybrid model designed to generalize weather forecasts to finer-grained temporal scales beyond the training dataset. It incorporates physical partial differential equations (PDEs) into neural networks to simulate fine-grained physical evolution and correct biases. The model achieves state-of-the-art performance in forecasting tasks at different time scales, from nowcasting to medium-range forecasts, by utilizing a lead time-aware training framework and a carefully designed PDE kernel. WeatherGFT bridges the gap between nowcast and medium-range forecast by extending forecasting abilities to predict accurately at a 30-minute time scale.
long-context-attention
Long-Context-Attention (YunChang) is a unified sequence parallel approach that combines the strengths of DeepSpeed-Ulysses-Attention and Ring-Attention to provide a versatile and high-performance solution for long context LLM model training and inference. It addresses the limitations of both methods by offering no limitation on the number of heads, compatibility with advanced parallel strategies, and enhanced performance benchmarks. The tool is verified in Megatron-LM and offers best practices for 4D parallelism, making it suitable for various attention mechanisms and parallel computing advancements.
openspg
OpenSPG is a knowledge graph engine developed by Ant Group in collaboration with OpenKG, based on the SPG (Semantic-enhanced Programmable Graph) framework. It provides explicit semantic representations, logical rule definitions, operator frameworks (construction, inference), and other capabilities for domain knowledge graphs. OpenSPG supports pluggable adaptation of basic engines and algorithmic services by various vendors to build customized solutions.