MOOSE
MOOSE (Multi-organ objective segmentation) is a data-centric AI solution that generates multilabel organ segmentations to facilitate systemic total-body (TB) whole-person research. The pipeline is based on nn-UNet and can segment 120 unique tissue classes from a whole-body 18F-FDG PET/CT image.
Stars: 182
MOOSE 2.0 is a leaner, meaner, and stronger tool for 3D medical image segmentation. It is built on the principles of data-centric AI and offers a wide range of segmentation models for both clinical and preclinical settings. MOOSE 2.0 is also versatile, allowing users to use it as a command-line tool for batch processing or as a library package for individual processing in Python projects. With its improved speed, accuracy, and flexibility, MOOSE 2.0 is the go-to tool for segmentation tasks.
README:
Welcome to the new and improved MOOSE (v3.0), where speed and efficiency aren't just buzzwords; they're a way of life.
3x Faster Than Before
Like a moose sprinting through the woods (okay, maybe not that fast), MOOSE 3.0 is built for speed. It's 3x faster than its older sibling, MOOSE 2.0, which was already no slouch. Blink and you'll miss it.
Memory: Light as a Feather, Strong as a Bull
Forget "Does it fit on my laptop?" The answer is YES. Thanks to Dask wizardry, all that data stays in memory. No disk writes, no fuss. Run total-body CT on that 'decent' laptop you bought three years ago and feel like you've upgraded.
Any OS, Anytime, Anywhere
Windows, Mac, Linux: we don't play favorites. Mac users, you're in luck: MOOSE runs natively on MPS, getting you GPU-like speeds without the NVIDIA guilt.
Trained to Perfection
This is our best model yet, trained on a whopping 1.7k datasets. More data, better results. Plus, you can run multiple models at the same time; you'll be slicing through images like a knife through warm butter. (Or tofu, if you prefer.)
The 'Herd' Mode
Got a powerhouse server just sitting around? Time to let the herd loose! Flip the Herd Mode switch and watch MOOSE multiply across your compute like... well, like a herd of moose! The more hardware you have, the faster your inference gets done. Scale up, speed up, and make every bit of your server earn its oats.
MOOSE 3.0 isn't just an upgrade; it's a lifestyle. A faster, leaner, and stronger lifestyle. Ready to join the herd?
MOOSE 3.0 offers a wide range of segmentation models catering to various clinical and preclinical needs. Here are the models currently available:
Clinical models:

| Model Name | Intensities and Regions |
|---|---|
| clin_ct_body | 1:Legs, 2:Body, 3:Head, 4:Arms |
| clin_ct_cardiac | 1:heart_myocardium, 2:heart_atrium_left, 3:heart_ventricle_left, 4:heart_atrium_right, 5:heart_ventricle_right, 6:pulmonary_artery, 7:iliac_artery_left, 8:iliac_artery_right, 9:iliac_vena_left, 10:iliac_vena_right |
| clin_ct_digestive | 1:esophagus, 2:trachea, 3:small_bowel, 4:duodenum, 5:colon, 6:urinary_bladder, 7:face |
| clin_ct_fat | 1:spinal_chord, 2:skeletal_muscle, 3:subcutaneous_fat, 4:visceral_fat, 5:thoracic_fat, 6:eyes, 7:testicles, 8:prostate |
| clin_ct_lungs | 1:lung_upper_lobe_left, 2:lung_lower_lobe_left, 3:lung_upper_lobe_right, 4:lung_middle_lobe_right, 5:lung_lower_lobe_right |
| clin_ct_muscles | 1:gluteus_maximus_left, 2:gluteus_maximus_right, 3:gluteus_medius_left, 4:gluteus_medius_right, 5:gluteus_minimus_left, 6:gluteus_minimus_right, 7:autochthon_left, 8:autochthon_right, 9:iliopsoas_left, 10:iliopsoas_right |
| clin_ct_organs | 1:spleen, 2:kidney_right, 3:kidney_left, 4:gallbladder, 5:liver, 6:stomach, 7:aorta, 8:inferior_vena_cava, 9:portal_vein_and_splenic_vein, 10:pancreas, 11:adrenal_gland_right, 12:adrenal_gland_left, 13:lung_upper_lobe_left, 14:lung_lower_lobe_left, 15:lung_upper_lobe_right, 16:lung_middle_lobe_right, 17:lung_lower_lobe_right |
| clin_ct_peripheral_bones | 1:carpal_left, 2:carpal_right, 3:clavicle_left, 4:clavicle_right, 5:femur_left, 6:femur_right, 7:fibula_left, 8:fibula_right, 9:humerus_left, 10:humerus_right, 11:metacarpal_left, 12:metacarpal_right, 13:metatarsal_left, 14:metatarsal_right, 15:patella_left, 16:patella_right, 17:fingers_left, 18:fingers_right, 19:radius_left, 20:radius_right, 21:scapula_left, 22:scapula_right, 23:skull, 24:tarsal_left, 25:tarsal_right, 26:tibia_left, 27:tibia_right, 28:toes_left, 29:toes_right, 30:ulna_left, 31:ulna_right, 32:thyroid_left, 33:thyroid_right, 34:bladder |
| clin_ct_ribs | 1:rib_left_1, 2:rib_left_2, 3:rib_left_3, 4:rib_left_4, 5:rib_left_5, 6:rib_left_6, 7:rib_left_7, 8:rib_left_8, 9:rib_left_9, 10:rib_left_10, 11:rib_left_11, 12:rib_left_12, 13:rib_right_1, 14:rib_right_2, 15:rib_right_3, 16:rib_right_4, 17:rib_right_5, 18:rib_right_6, 19:rib_right_7, 20:rib_right_8, 21:rib_right_9, 22:rib_right_10, 23:rib_right_11, 24:rib_right_12, 25:sternum |
| clin_ct_vertebrae | 1:vertebra_C1, 2:vertebra_C2, 3:vertebra_C3, 4:vertebra_C4, 5:vertebra_C5, 6:vertebra_C6, 7:vertebra_C7, 8:vertebra_T1, 9:vertebra_T2, 10:vertebra_T3, 11:vertebra_T4, 12:vertebra_T5, 13:vertebra_T6, 14:vertebra_T7, 15:vertebra_T8, 16:vertebra_T9, 17:vertebra_T10, 18:vertebra_T11, 19:vertebra_T12, 20:vertebra_L1, 21:vertebra_L2, 22:vertebra_L3, 23:vertebra_L4, 24:vertebra_L5, 25:vertebra_S1, 26:hip_left, 27:hip_right, 28:sacrum |

Preclinical models:

| Model Name | Intensities and Regions |
|---|---|
| preclin_ct_legs | 1:right_leg_muscle, 2:left_leg_muscle |
| preclin_mr_all | 1:Brain, 2:Liver, 3:Intestines, 4:Pancreas, 5:Thyroid, 6:Spleen, 7:Bladder, 8:OuterKidney, 9:InnerKidney, 10:HeartInside, 11:HeartOutside, 12:WAT Subcutaneous, 13:WAT Visceral, 14:BAT, 15:Muscle TF, 16:Muscle TB, 17:Muscle BB, 18:Muscle BF, 19:Aorta, 20:Lung, 21:Stomach |
Each model is designed to provide high-quality segmentation with MOOSE 3.0's optimized algorithms and data-centric AI principles.
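As a quick, hedged illustration of how these intensity tables are used downstream, the snippet below extracts a single-organ mask from a multilabel segmentation; the output file name is a placeholder, and the mapping (5 = liver) is taken from the clin_ct_organs table above:

import SimpleITK as sitk

# Load a multilabel segmentation produced by the clin_ct_organs model
# (the path is a placeholder; adjust it to your actual output file).
seg = sitk.GetArrayFromImage(sitk.ReadImage('/path/to/output/clin_ct_organs_segmentation.nii.gz'))

liver_mask = (seg == 5)  # intensity 5 maps to 'liver' in the clin_ct_organs table
print('liver voxel count:', int(liver_mask.sum()))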
- Shiyam Sundar, L. K., Yu, J., Muzik, O., Kulterer, O., Fueger, B. J., Kifjak, D., Nakuz, T., Shin, H. M., Sima, A. K., Kitzmantl, D., Badawi, R. D., Nardo, L., Cherry, S. R., Spencer, B. A., Hacker, M., & Beyer, T. (2022). Fully-automated, semantic segmentation of whole-body 18F-FDG PET/CT images based on data-centric artificial intelligence. Journal of Nuclear Medicine. https://doi.org/10.2967/jnumed.122.264063
- Isensee, F., Jaeger, P.F., Kohl, S.A.A. et al. nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nat Methods 18, 203-211 (2021). https://doi.org/10.1038/s41592-020-01008-z
Before you dive into the incredible world of MOOSE 3.0, here are a few things you need to ensure for an optimal experience:
- Operating System: We've got you covered whether you're on Windows, Mac, or Linux. MOOSE 3.0 has been tested across these platforms to ensure seamless operation.
- Memory: MOOSE 3.0 has quite an appetite! Make sure you have at least 16GB of RAM for the smooth running of all tasks.
- GPU: If speed is your game, an NVIDIA GPU is the name! MOOSE 3.0 leverages GPU acceleration to deliver results fast. Don't worry if you don't have one, though - it will still work, just at a slower pace.
- Python: Ensure that you have Python 3.10 installed on your system. MOOSE 3.0 likes to keep up with the latest, after all!
So, that's it! Make sure you're geared up with these specifications, and you're all set to explore everything MOOSE 3.0 has to offer.
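If you want to check the accelerator situation up front, a quick sketch like the following works with any recent PyTorch build (torch is assumed to be present, as moosez pulls it in as a dependency; the checks are standard PyTorch calls):

import sys
import torch

# MOOSE 3.0 expects Python 3.10
print('Python version:', sys.version.split()[0])

# Report which accelerator MOOSE would be able to use
if torch.cuda.is_available():
    print('CUDA GPU available:', torch.cuda.get_device_name(0))
elif torch.backends.mps.is_available():
    print('Apple MPS backend available')
else:
    print('No accelerator found; MOOSE will fall back to CPU (slower)')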
Available on Windows, Linux, and MacOS, the installation is as simple as it gets. Follow our step-by-step guide below and set sail on your journey with MOOSE 3.0.
For Linux:

- First, create a Python environment. You can name it to your liking; for example, 'moose-env'.

  python3.10 -m venv moose-env

- Activate your newly created environment.

  source moose-env/bin/activate

- Install MOOSE 3.0.

  pip install moosez
Voila! You're all set to explore with MOOSE 3.0.
For Mac (Apple Silicon):

- First, create a Python environment. You can name it to your liking; for example, 'moose-env'.

  python3.10 -m venv moose-env

- Activate your newly created environment.

  source moose-env/bin/activate

- Install MOOSE 3.0 along with the MPS-specific fork of PyTorch; this branch is required for MOOSE to work with MPS.

  pip install moosez
  pip install git+https://github.com/LalithShiyam/pytorch-mps.git

Now you are ready to use MOOSE on Apple Silicon.
For Windows:

- Create a Python environment. You could name it 'moose-env', or as you wish.

  python3.10 -m venv moose-env

- Activate your newly created environment.

  .\moose-env\Scripts\activate

- Go to the PyTorch website and install the appropriate PyTorch version for your system. Do not skip this step!

- Finally, install MOOSE 3.0.

  pip install moosez
There you have it! You're ready to venture into the world of 3D medical image segmentation with MOOSE 3.0.
Happy exploring!
Getting started with MOOSE 3.0 is as easy as slicing through butter. Use the command-line tool to process multiple segmentation models in sequence or in parallel, making your workflow a breeze.
You can now run single or several models in sequence with a single command. Just provide the path to your subject images and list the segmentation models you wish to apply:
# For single model inference
moosez -d <path_to_image_dir> -m <model_name>
# For multiple model inference
moosez -d <path_to_image_dir> \
-m <model_name1> \
<model_name2> \
<model_name3>
For instance, to run clinical CT organ segmentation on a directory of images, you can use the following command:
moosez -d <path_to_image_dir> -m clin_ct_organs
Likewise, to run multiple models, e.g. organs, ribs, and vertebrae, you can use the following command:
moosez -d <path_to_image_dir> \
-m clin_ct_organs \
clin_ct_ribs \
clin_ct_vertebrae
MOOSE 3.0 will handle each model one after the other; no fuss, no hassle.
Got a powerful server or HPC? Let the herd roam! Use Herd Mode to run multiple MOOSE instances in parallel. Just add the -herd flag with the number of instances you wish to run simultaneously:
moosez -d <path_to_image_dir> \
-m clin_ct_organs \
clin_ct_ribs \
clin_ct_vertebrae \
-herd 2
MOOSE will run two instances at the same time, utilizing your compute power like a true multitasking pro.
And that's it! MOOSE 3.0 lets you process with ease and speed.
Need assistance along the way? Don't worry, we've got you covered. Simply type:
moosez -h
This command will provide you with all the help you need, along with information about the available models and the regions they segment.
MOOSE 3.0 isn't just a command-line powerhouse; it's also a flexible library for Python projects. Here's how to make the most of it:
First, import the moose function from the moosez package in your Python script:

from moosez import moose
The moose function is versatile and accepts various input types. It takes four main arguments:

- input: The data to process, which can be:
  - A path to an input file or directory (NIfTI, either .nii or .nii.gz).
  - A tuple containing a NumPy array and its spacing, e.g. (numpy_array, (spacing_x, spacing_y, spacing_z)).
  - A SimpleITK image object.
- model_names: A single model name or a list of model names for segmentation.
- output_dir: The directory where the results will be saved.
- accelerator: The type of accelerator to use ("cpu", "cuda", or "mps" for Mac).
Here are some examples to illustrate different ways to use the moose function:

- Using a file path and multiple models:

  moose('/path/to/input/file', ['clin_ct_organs', 'clin_ct_ribs'], '/path/to/save/output', 'cuda')

- Using a NumPy array with spacing:

  moose((numpy_array, (1.5, 1.5, 1.5)), 'clin_ct_organs', '/path/to/save/output', 'cuda')

- Using a SimpleITK image:

  moose(simple_itk_image, 'clin_ct_organs', '/path/to/save/output', 'cuda')
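Putting it together, a minimal end-to-end script might look like the sketch below; all paths are placeholders, and the random array merely stands in for a real CT volume:

import numpy as np
import SimpleITK as sitk
from moosez import moose

# 1) Segment a NIfTI file with two models on an NVIDIA GPU
moose('/path/to/CT_S2.nii.gz', ['clin_ct_organs', 'clin_ct_ribs'], '/path/to/save/output', 'cuda')

# 2) Segment an in-memory volume: pass the array with its voxel spacing in mm
volume = np.random.rand(400, 512, 512).astype(np.float32)  # placeholder for a real CT array
moose((volume, (1.5, 1.5, 1.5)), 'clin_ct_organs', '/path/to/save/output', 'cpu')

# 3) Segment a SimpleITK image read from disk, using Apple's MPS backend
image = sitk.ReadImage('/path/to/CT_S3.nii')
moose(image, 'clin_ct_organs', '/path/to/save/output', 'mps')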
That's it! With these flexible inputs, you can use MOOSE 3.0 to fit your workflow perfectly, whether you're processing a single image, a stack of files, or leveraging different data formats.
Happy segmenting with MOOSE 3.0!
Using MOOSE 3.0 optimally requires your data to be structured according to specific conventions. MOOSE 3.0 supports both DICOM and NIfTI formats. For DICOM files, MOOSE infers the modality from the DICOM tags and checks whether that modality is suitable for the chosen segmentation model. For NIfTI files, however, users need to ensure that the files are named with the correct modality as a prefix.
Please structure your dataset as follows:
MOOSEv2_data/
├── S1
│   ├── AC-CT
│   │   ├── WBACCTiDose2_2001_CT001.dcm
│   │   ├── WBACCTiDose2_2001_CT002.dcm
│   │   ├── ...
│   │   └── WBACCTiDose2_2001_CT532.dcm
│   └── AC-PT
│       ├── DetailWB_CTACWBPT001_PT001.dcm
│       ├── DetailWB_CTACWBPT001_PT002.dcm
│       ├── ...
│       └── DetailWB_CTACWBPT001_PT532.dcm
├── S2
│   └── CT_S2.nii
├── S3
│   └── CT_S3.nii
├── S4
│   └── S4_ULD_FDG_60m_Dynamic_Patlak_HeadNeckThoAbd_20211025075852_2.nii
└── S5
    └── CT_S5.nii
Note: If the necessary naming conventions are not followed, MOOSE 3.0 will skip the subjects.
When using NIfTI files, you should name each file with the appropriate modality as a prefix. For instance, if you have chosen the model_name clin_ct_organs, the CT scan for subject 'S2' in NIfTI format should carry the modality tag 'CT_' at the start of the file name, e.g. CT_S2.nii. In the directory shown above, every subject will be processed by moosez except S4.
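If a batch of NIfTI files is missing the modality prefix, a small helper like this sketch can add it (the data root and the 'CT_' tag are placeholders; adjust both to your data):

import os

data_root = '/path/to/MOOSEv2_data'  # placeholder path
for subject in os.listdir(data_root):
    subject_dir = os.path.join(data_root, subject)
    if not os.path.isdir(subject_dir):
        continue
    for name in os.listdir(subject_dir):
        # Prepend the modality tag to NIfTI files that lack it
        if name.endswith(('.nii', '.nii.gz')) and not name.startswith('CT_'):
            os.rename(os.path.join(subject_dir, name),
                      os.path.join(subject_dir, 'CT_' + name))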
Remember: Adhering to these file naming and directory structure conventions ensures smooth and efficient processing with MOOSE 3.0. Happy segmenting!
Want to power up your medical image segmentation tasks? Join the MooseZ community and contribute your own nnUNetv2 models!
By adding your custom models to MooseZ, you can enjoy:
- Increased Speed - MooseZ is optimized for fast performance. Use it to get your results faster!
- Reduced Memory - MooseZ is designed to be efficient and lean, so it uses less memory!
So why wait? Make your models fly with MooseZ!
- Prepare Your Model: Train your model using nnUNetv2 and get it ready for the big leagues!
- Update the AVAILABLE_MODELS List: Add your model's unique identifier to the AVAILABLE_MODELS list in the resources.py file. The model name should follow a specific syntax: 'clin' or 'preclin' (indicating clinical or preclinical), the modality tag (like 'ct', 'pt', 'mr'), and then the tissue of interest.
- Update the MODELS Dictionary: Add a new entry to the MODELS dictionary in the resources.py file, filling in the corresponding details (like URL, filename, directory, trainer type, voxel spacing, and multilabel prefix).
- Update the expected_modality Function: Update the expected_modality function in the resources.py file to return the imaging technique, modality, and tissue of interest for your model.
- Update the map_model_name_to_task_number Function: Modify the map_model_name_to_task_number function in the resources.py file to return the task number associated with your model.
- Update ORGAN_INDICES in constants.py: Append your label-intensity-to-region mapping to the ORGAN_INDICES dictionary in constants.py. This is particularly important if you would like to get your stats from the PET images based on your CT masks.
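To make the first and last of these edits concrete, here is a hedged sketch; the model name clin_ct_myorgan, the URL, and every field value are hypothetical placeholders, so check the actual schema in the current resources.py and constants.py before copying anything:

# Hypothetical additions -- illustrative only, not the real moosez schema.

# resources.py
AVAILABLE_MODELS = [
    "clin_ct_organs",
    # ... existing models ...
    "clin_ct_myorgan",  # new model name: <clin|preclin>_<modality>_<tissue>
]

MODELS = {
    # ... existing entries ...
    "clin_ct_myorgan": {
        "url": "https://example.com/models/clin_ct_myorgan.zip",  # placeholder
        "filename": "Dataset333_myorgan.zip",        # placeholder
        "directory": "Dataset333_myorgan",           # placeholder
        "trainer": "nnUNetTrainer",                  # placeholder trainer type
        "voxel_spacing": [1.5, 1.5, 1.5],            # placeholder spacing in mm
        "multilabel_prefix": "CT_myorgan_",          # placeholder prefix
    },
}

# constants.py
ORGAN_INDICES = {
    # ... existing entries ...
    "clin_ct_myorgan": {
        1: "myorgan_left",   # placeholder label intensity -> region mapping
        2: "myorgan_right",
    },
}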
That's it! You've successfully contributed your own model to the MooseZ community!
With your contribution, MooseZ becomes a stronger and more robust tool for medical image segmentation!
All of our Python packages here at QIMP carry a special signature: a distinctive 'Z' at the end of their names. The 'Z' is more than just a letter to us; it's a symbol of our forward-thinking approach and commitment to continuous innovation.
Our MOOSE package, for example, is named 'moosez', pronounced "moose-see". So, why 'Z'?
Well, in the world of mathematics and science, 'Z' often represents the unknown, the variable that's yet to be discovered, or the final destination in a series. We at QIMP believe in always pushing boundaries, venturing into uncharted territories, and staying on the cutting edge of technology. The 'Z' embodies this philosophy. It represents our constant quest to uncover what lies beyond the known, to explore the undiscovered, and to bring you the future of medical imaging.
Each time you see a 'Z' in one of our package names, be reminded of the spirit of exploration and discovery that drives our work. With QIMP, you're not just installing a package; you're joining us on a journey to the frontiers of medical image processing. Here's to exploring the 'Z' dimension together!
MOOSE: A part of the enhance.pet community
Contributors:
- Lalith Kumar Shiyam Sundar
- Sebastian Gutschmayer
- n7k-dobri
- Manuel Pires
- Zach Chalampalakis
- David Haberl
- W7ebere
- Kazezaka
- Loic Tetrel @ Kitware
Alternative AI tools for MOOSE
Similar Open Source Tools
llm_aided_ocr
The LLM-Aided OCR Project is an advanced system that enhances Optical Character Recognition (OCR) output by leveraging natural language processing techniques and large language models. It offers features like PDF to image conversion, OCR using Tesseract, error correction using LLMs, smart text chunking, markdown formatting, duplicate content removal, quality assessment, support for local and cloud-based LLMs, asynchronous processing, detailed logging, and GPU acceleration. The project provides detailed technical overview, text processing pipeline, LLM integration, token management, quality assessment, logging, configuration, and customization. It requires Python 3.12+, Tesseract OCR engine, PDF2Image library, PyTesseract, and optional OpenAI or Anthropic API support for cloud-based LLMs. The installation process involves setting up the project, installing dependencies, and configuring environment variables. Users can place a PDF file in the project directory, update input file path, and run the script to generate post-processed text. The project optimizes processing with concurrent processing, context preservation, and adaptive token management. Configuration settings include choosing between local or API-based LLMs, selecting API provider, specifying models, and setting context size for local LLMs. Output files include raw OCR output and LLM-corrected text. Limitations include performance dependency on LLM quality and time-consuming processing for large documents.
OpenAdapt
OpenAdapt is an open-source software adapter between Large Multimodal Models (LMMs) and traditional desktop and web Graphical User Interfaces (GUIs). It aims to automate repetitive GUI workflows by leveraging the power of LMMs. OpenAdapt records user input and screenshots, converts them into tokenized format, and generates synthetic input via transformer model completions. It also analyzes recordings to generate task trees and replay synthetic input to complete tasks. OpenAdapt is model agnostic and generates prompts automatically by learning from human demonstration, ensuring that agents are grounded in existing processes and mitigating hallucinations. It works with all types of desktop GUIs, including virtualized and web, and is open source under the MIT license.
SUPIR
SUPIR is an AI-based image processing and upscaling tool that leverages cutting-edge technology to enhance image quality and resolution. The tool provides users with the ability to upscale images with high generalization and quality, as well as specific settings for light degradation scenarios. It offers a range of models and checkpoints for different use cases, along with detailed instructions for installation and usage. SUPIR also includes features for color fixing, linear CFG adjustments, and various prompts for image enhancement. The tool is designed for non-commercial use only and comes with a contact email for inquiries and permission requests for commercial use.
py-llm-core
PyLLMCore is a light-weighted interface with Large Language Models with native support for llama.cpp, OpenAI API, and Azure deployments. It offers a Pythonic API that is simple to use, with structures provided by the standard library dataclasses module. The high-level API includes the assistants module for easy swapping between models. PyLLMCore supports various models including those compatible with llama.cpp, OpenAI, and Azure APIs. It covers use cases such as parsing, summarizing, question answering, hallucinations reduction, context size management, and tokenizing. The tool allows users to interact with language models for tasks like parsing text, summarizing content, answering questions, reducing hallucinations, managing context size, and tokenizing text.
embodied-agents
Embodied Agents is a toolkit for integrating large multi-modal models into existing robot stacks with just a few lines of code. It provides consistency, reliability, scalability, and is configurable to any observation and action space. The toolkit is designed to reduce complexities involved in setting up inference endpoints, converting between different model formats, and collecting/storing datasets. It aims to facilitate data collection and sharing among roboticists by providing Python-first abstractions that are modular, extensible, and applicable to a wide range of tasks. The toolkit supports asynchronous and remote thread-safe agent execution for maximal responsiveness and scalability, and is compatible with various APIs like HuggingFace Spaces, Datasets, Gymnasium Spaces, Ollama, and OpenAI. It also offers automatic dataset recording and optional uploads to the HuggingFace hub.
code2prompt
Code2Prompt is a powerful command-line tool that generates comprehensive prompts from codebases, designed to streamline interactions between developers and Large Language Models (LLMs) for code analysis, documentation, and improvement tasks. It bridges the gap between codebases and LLMs by converting projects into AI-friendly prompts, enabling users to leverage AI for various software development tasks. The tool offers features like holistic codebase representation, intelligent source tree generation, customizable prompt templates, smart token management, Gitignore integration, flexible file handling, clipboard-ready output, multiple output options, and enhanced code readability.
BentoML
BentoML is an open-source model serving library for building performant and scalable AI applications with Python. It comes with everything you need for serving optimization, model packaging, and production deployment.
basiclingua-LLM-Based-NLP
BasicLingua is a Python library that provides functionalities for linguistic tasks such as tokenization, stemming, lemmatization, and many others. It is based on the Gemini Language Model, which has demonstrated promising results in dealing with text data. BasicLingua can be used as an API or through a web demo. It is available under the MIT license and can be used in various projects.
tts-generation-webui
TTS Generation WebUI is a comprehensive tool that provides a user-friendly interface for text-to-speech and voice cloning tasks. It integrates various AI models such as Bark, MusicGen, AudioGen, Tortoise, RVC, Vocos, Demucs, SeamlessM4T, and MAGNeT. The tool offers one-click installers, Google Colab demo, videos for guidance, and extra voices for Bark. Users can generate audio outputs, manage models, caches, and system space for AI projects. The project is open-source and emphasizes ethical and responsible use of AI technology.
Vitron
Vitron is a unified pixel-level vision LLM designed for comprehensive understanding, generating, segmenting, and editing static images and dynamic videos. It addresses challenges in existing vision LLMs such as superficial instance-level understanding, lack of unified support for images and videos, and insufficient coverage across various vision tasks. The tool requires Python >= 3.8, Pytorch == 2.1.0, and CUDA Version >= 11.8 for installation. Users can deploy Gradio demo locally and fine-tune their models for specific tasks.
llmgraph
llmgraph is a tool that enables users to create knowledge graphs in GraphML, GEXF, and HTML formats by extracting world knowledge from large language models (LLMs) like ChatGPT. It supports various entity types and relationships, offers cache support for efficient graph growth, and provides insights into LLM costs. Users can customize the model used and interact with different LLM providers. The tool allows users to generate interactive graphs based on a specified entity type and Wikipedia link, making it a valuable resource for knowledge graph creation and exploration.
KnowAgent
KnowAgent is a tool designed for Knowledge-Augmented Planning for LLM-Based Agents. It involves creating an action knowledge base, converting action knowledge into text for model understanding, and a knowledgeable self-learning phase to continually improve the model's planning abilities. The tool aims to enhance agents' potential for application in complex situations by leveraging external reservoirs of information and iterative processes.
ShortcutsBench
ShortcutsBench is a project focused on collecting and analyzing workflows created in the Shortcuts app, providing a dataset of shortcut metadata, source files, and API information. It aims to study the integration of large language models with Apple devices, particularly focusing on the role of shortcuts in enhancing user experience. The project offers insights for Shortcuts users, enthusiasts, and researchers to explore, customize workflows, and study automated workflows, low-code programming, and API-based agents.
WordLlama
WordLlama is a fast, lightweight NLP toolkit optimized for CPU hardware. It recycles components from large language models to create efficient word representations. It offers features like Matryoshka Representations, low resource requirements, binarization, and numpy-only inference. The tool is suitable for tasks like semantic matching, fuzzy deduplication, ranking, and clustering, making it a good option for NLP-lite tasks and exploratory analysis.
AIRAVAT
AIRAVAT is a multifunctional Android Remote Access Tool (RAT) with a GUI-based Web Panel that does not require port forwarding. It allows users to access various features on the victim's device, such as reading files, downloading media, retrieving system information, managing applications, SMS, call logs, contacts, notifications, keylogging, admin permissions, phishing, audio recording, music playback, device control (vibration, torch light, wallpaper), executing shell commands, clipboard text retrieval, URL launching, and background operation. The tool requires a Firebase account and tools like ApkEasy Tool or ApkTool M for building. Users can set up Firebase, host the web panel, modify Instagram.apk for RAT functionality, and connect the victim's device to the web panel. The tool is intended for educational purposes only, and users are solely responsible for its use.
For similar jobs
nitrain
Nitrain is a framework for medical imaging AI that provides tools for sampling and augmenting medical images, training models on medical imaging datasets, and visualizing model results in a medical imaging context. It supports using pytorch, keras, and tensorflow.