bumpgen
bumpgen is an AI agent that upgrades npm packages
Stars: 67
bumpgen is a tool designed to automatically upgrade TypeScript / TSX dependencies and make necessary code changes to handle any breaking issues that may arise. It uses an abstract syntax tree to analyze code relationships, type definitions for external methods, and a plan graph DAG to execute changes in the correct order. The tool is currently limited to TypeScript and TSX but plans to support other strongly typed languages in the future. It aims to simplify the process of upgrading dependencies and handling code changes caused by updates.
README:
bumpgen bumps your TypeScript / TSX dependencies and makes code changes for you if anything breaks.
Here's a common scenario:
you: "I should upgrade to the latest version of x, it has banging new features and impressive performance improvements"
you (5 minutes later): nevermind, that broke a bunch of stuff
Then use bumpgen!
How does it work?
- bumpgen builds your project to understand what broke when a dependency was bumped
- Then bumpgen uses ts-morph to create an abstract syntax tree from your code, to understand the relationships between statements (sketched below)
- It also uses the AST to get type definitions for external methods to understand how to use new package versions
- bumpgen then creates a plan graph DAG to execute things in the correct order to handle propagating changes (ref: arxiv 2309.12499)
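To make the ts-morph step concrete, here is a minimal sketch of the two ideas from the list above: collecting build errors as compiler diagnostics, and resolving identifiers through the AST to the type definitions they come from. This is illustrative, not bumpgen's actual source; the tsconfig and file paths are assumptions.

```typescript
import { Project, SyntaxKind } from "ts-morph";

// Load the project the way the compiler would (tsconfig path is an assumption).
const project = new Project({ tsConfigFilePath: "tsconfig.json" });

// 1. "Build" the project: pre-emit diagnostics are the type errors that
//    surface after a dependency bump.
for (const diagnostic of project.getPreEmitDiagnostics()) {
  console.log(
    diagnostic.getSourceFile()?.getFilePath(),
    diagnostic.getLineNumber(),
    diagnostic.getMessageText(),
  );
}

// 2. Relate statements through the AST: resolve each identifier to its
//    declaration. For an external method, this lands in the package's
//    .d.ts files, which is where the new type definitions live.
const file = project.getSourceFileOrThrow("src/index.ts"); // illustrative path
for (const identifier of file.getDescendantsOfKind(SyntaxKind.Identifier)) {
  const declaration = identifier.getSymbol()?.getDeclarations()[0];
  if (declaration) {
    console.log(
      identifier.getText(),
      "is declared in",
      declaration.getSourceFile().getFilePath(),
    );
  }
}
```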
> [!NOTE]
> bumpgen only supports TypeScript and TSX at the moment, but we're working on adding support for other strongly typed languages. Hit the emoji button on our open issues for Java, golang, C# and Python to request support.
To get started, you'll need an OpenAI API key. `gpt-4-turbo-preview` from OpenAI is the only supported model at this time, though we plan on supporting more soon.
Then, run bumpgen:

```
> export LLM_API_KEY="<openai-api-key>"
> cd ~/my-repository
> npm install -g bumpgen
> bumpgen @tanstack/react-query 5.28.14
```
where `@tanstack/react-query` is the package you want to bump and `5.28.14` is the version you want to bump to.
You can also run bumpgen without arguments and select which package to upgrade from the menu. Use `bumpgen --help` for a complete list of options.
> [!NOTE]
> If you'd like to be first in line to try the bumpgen GitHub App to replace your usage of dependabot + renovatebot, sign up here.
There are some limitations you should know about:

- bumpgen relies on build errors to determine what needs to be fixed. If an issue is caused by a behavioral change, bumpgen won't detect it.
- bumpgen can't handle multiple packages at the same time. It will fail to upgrade packages whose peer dependencies must be updated at the same time to work, such as `@octokit/core` and `@octokit/plugin-retry`.
- bumpgen is not good with very large frameworks like `vue`. These kinds of upgrades (and vue 2 -> 3 specifically) can be arduous even for a human.
```
> bumpgen @tanstack/react-query 5.28.14
│
┌┬─────▼──────────────────────────────────────────────────────────────────────┐
││ CLI │
└┴─────┬──▲───────────────────────────────────────────────────────────────────┘
│ │
┌┬─────▼──┴───────────────────────────────────────────────────────────────────┐
││ Core (Codeplan) │
││ │
││ ┌───────────────────────────────────┐ ┌──────────────────────────────────┐ │
││ │ Plan Graph │ │ Abstract Syntax Tree │ │
││ │ │ │ │ │
││ │ │ │ │ │
││ │ ┌─┐ │ │ ┌─┐ │ │
││ │ ┌──┴─┘ │ │ ┌──┴─┴──┐ │ │
││ │ │ │ │ │ │ │ │
││ │ ┌▼┐ ┌──┼─┼──┐ ┌▼┐ ┌▼┐ │ │
││ │ └─┴──┐ │ │ │ │ ┌──┴─┴──┐ └─┘ │ │
││ │ │ │ │ ▼ │ │ │ │
││ │ ┌▼┐ ▲ │ │ ┌▼┐ ┌▼┐ │ │
││ │ └─┴──┐ │ │ │ │ └─┘ ┌──┴─┴──┐ │ │
││ │ │ └──┼─┼──┘ │ │ │ │
││ │ ┌▼┐ │ │ ┌▼┐ ┌▼┐ │ │
││ │ └─┘ │ │ └─┘ └─┘ │ │
││ │ │ │ │ │
││ │ │ │ │ │
││ │ │ │ │ │
││ │ │ │ │ │
││ └───────────────────────────────────┘ └──────────────────────────────────┘ │
││ │
└┴─────┬──▲───────────────────────────────────────────────────────────────────┘
│ │
┌┬─────▼──┴───────────────────────────┐ ┌┬───────────────────────────────────┐
││ Prompt Context │ ││ LLM │
││ │ ││ │
││ - plan graph │ ││ GPT4-Turbo, Claude 3, BYOM │
││ - errors ├──►│ │
││ - code │ ││ │
││ ◄──┼│ │
││ │ ││ │
││ │ ││ │
││ │ ││ │
└┴────────────────────────────────────┘ └┴───────────────────────────────────┘
```

The AST is generated from ts-morph. This AST allows bumpgen to understand the relationship between nodes in a codebase.
The plan graph is a concept detailed in codeplan by Microsoft. The plan graph allows bumpgen not only to fix an issue at one point, but also to fix the second-order breaking changes caused by the fix itself. In short, it allows bumpgen to propagate a fix to the rest of the codebase.
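As a rough illustration of that propagation (a sketch of the codeplan idea, not bumpgen's actual data structures; every name below is invented), the plan graph can be modeled as a work queue over a dependency DAG, where fixing one node schedules the nodes that depend on it:

```typescript
// Hypothetical plan-graph sketch: nodes are code locations, edges point
// from a definition to the code that depends on it.
type NodeId = string;

interface PlanNode {
  id: NodeId;
  file: string;
  dependents: NodeId[]; // who may break if this node changes
}

function propagateFixes(
  graph: Map<NodeId, PlanNode>,
  brokenRoots: NodeId[],
  fixNode: (node: PlanNode) => boolean, // returns true if the fix changed code
): void {
  const queue = [...brokenRoots];
  const seen = new Set<NodeId>();

  while (queue.length > 0) {
    const id = queue.shift()!;
    if (seen.has(id)) continue;
    seen.add(id);

    const node = graph.get(id);
    if (!node) continue;

    // Repair this node; if its code changed, its dependents may now be
    // broken too, so schedule them next (the second-order changes).
    if (fixNode(node)) {
      queue.push(...node.dependents);
    }
  }
}
```

Processing in this order is what lets a fix at one site ripple out to its call sites, and from those to their own dependents, matching the codeplan framing of repair as graph traversal.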
We pass the plan graph, the error, and the actual file with the breaking change as context to the LLM to maximize its ability to fix the issue.
We only support `gpt-4-turbo-preview` at this time.
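Concretely, the handoff in the diagram could be sketched with the openai Node client as follows. This is a hedged illustration of the README's description, not bumpgen's actual prompt or code; `proposeFix` and the prompt layout are invented.

```typescript
import OpenAI from "openai";

// The README's LLM_API_KEY environment variable holds the OpenAI key.
const client = new OpenAI({ apiKey: process.env.LLM_API_KEY });

// Hypothetical prompt assembly: the plan graph, the build error, and the
// file with the breaking change -- the three context pieces listed above.
async function proposeFix(
  planGraphSummary: string,
  buildError: string,
  fileContents: string,
): Promise<string> {
  const response = await client.chat.completions.create({
    model: "gpt-4-turbo-preview",
    messages: [
      {
        role: "system",
        content: "You fix TypeScript build errors caused by a dependency upgrade.",
      },
      {
        role: "user",
        content: `Plan graph:\n${planGraphSummary}\n\nBuild error:\n${buildError}\n\nFile:\n${fileContents}`,
      },
    ],
  });
  return response.choices[0].message.content ?? "";
}
```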
bumpgen + GPT-4 Turbo ██████████░░░░░░░░░░░ 45% (67 tasks)
We benchmarked bumpgen with GPT-4 Turbo against a suite of version bumps with breaking changes. You can check out the evals here.
Contributions are welcome! To get set up for development, see Development.
- [x] codeplan
- [x] TypeScript/TSX support
- [ ] bumpgen GitHub app
- [ ] Embeddings for different package versions
- [ ] Use test runners as an oracle
- [ ] C# support
- [ ] Java support
- [ ] Go support
Join our Discord community to contribute, learn more, and ask questions!
Alternative AI tools for bumpgen
Similar Open Source Tools
anythingllm-docs
anythingllm-docs is a documentation repository for the AnythingLLM project. It contains detailed guides, setup instructions, and information on features and legal aspects of the project. The repository structure is organized into public, pages, components, and configuration files. Users can contribute by creating issues and pull requests following specific guidelines. The project is licensed under the MIT License and has been migrated to NextJS with the help of @ShadowArcanist.
shell_gpt
ShellGPT is a command-line productivity tool powered by AI large language models (LLMs). This command-line tool offers streamlined generation of shell commands, code snippets, and documentation, eliminating the need for external resources (like Google search). It supports Linux, macOS, and Windows, and is compatible with all major shells like PowerShell, CMD, Bash, Zsh, etc.
gpt-all-star
GPT-All-Star is an AI-powered code generation tool designed for scratch development of web applications with team collaboration of autonomous AI agents. The primary focus of this research project is to explore the potential of autonomous AI agents in software development. Users can organize their team, choose leaders for each step, create action plans, and work together to complete tasks. The tool supports various endpoints like OpenAI, Azure, and Anthropic, and provides functionalities for project management, code generation, and team collaboration.
dbgpts
The dbgpts repository contains data apps, AWEL operators, AWEL workflow templates, and agents that are built upon DB-GPT. Users can install and manage these components within their DB-GPT environment. The repository offers functionalities such as listing available flows, installing dbgpts from the official repository, viewing installed dbgpts, running flows, and managing repositories. Users can create new workflow templates and operators using the provided commands. The repository aims to enhance the capabilities of DB-GPT by providing a collection of useful tools and resources for data processing and workflow management.
LangGraph-Expense-Tracker
LangGraph Expense tracker is a small project that explores the possibilities of LangGraph. It allows users to send pictures of invoices, which are then structured and categorized into expenses and stored in a database. The project includes functionalities for invoice extraction, database setup, and API configuration. It consists of various modules for categorizing expenses, creating database tables, and running the API. The database schema includes tables for categories, payment methods, and expenses, each with specific columns to track transaction details. The API documentation is available for reference, and the project utilizes LangChain for processing expense data.
DeepDanbooru
DeepDanbooru is an anime-style girl image tag estimation system written in Python. It allows users to estimate images using a live demo site. The tool requires specific packages to be installed and provides a structured dataset for training projects. Users can create training projects, download tags, filter datasets, and start training to estimate tags for images. The tool uses a specific dataset structure and project structure to facilitate the training process.
tangent
Tangent is a canvas for exploring AI conversations, allowing users to resurrect and continue conversations, branch and explore different ideas, organize conversations by topics, and support archive data exports. It aims to provide a visual/textual/audio exploration experience with AI assistants, offering a 'thoughts workbench' for experimenting freely, reviving old threads, and diving into tangents. The project structure includes a modular backend with components for API routes, background task management, data processing, and more. Prerequisites for setup include Whisper.cpp, Ollama, and exported archive data from Claude or ChatGPT. Users can initialize the environment, install Python packages, set up Ollama, configure local models, and start the backend and frontend to interact with the tool.
Vitron
Vitron is a unified pixel-level vision LLM designed for comprehensive understanding, generating, segmenting, and editing static images and dynamic videos. It addresses challenges in existing vision LLMs such as superficial instance-level understanding, lack of unified support for images and videos, and insufficient coverage across various vision tasks. The tool requires Python >= 3.8, Pytorch == 2.1.0, and CUDA Version >= 11.8 for installation. Users can deploy Gradio demo locally and fine-tune their models for specific tasks.
doc-comments-ai
doc-comments-ai is a tool designed to automatically generate code documentation using language models. It allows users to easily create documentation comment blocks for methods in various programming languages such as Python, Typescript, Javascript, Java, Rust, and more. The tool supports both OpenAI and local LLMs, ensuring data privacy and security. Users can generate documentation comments for methods in files, inline comments in method bodies, and choose from different models like GPT-3.5-Turbo, GPT-4, and Azure OpenAI. Additionally, the tool provides support for Treesitter integration and offers guidance on selecting the appropriate model for comprehensive documentation needs.
GhidrOllama
GhidrOllama is a script that interacts with Ollama's API to perform various reverse engineering tasks within Ghidra. It supports both local and remote instances of Ollama, providing functionalities like explaining functions, suggesting names, rewriting functions, finding bugs, and automating analysis of specific functions in binaries. Users can ask questions about functions, find vulnerabilities, and receive explanations of assembly instructions. The script bridges the gap between Ghidra and Ollama models, enhancing reverse engineering capabilities.
letta
Letta is an open source framework for building stateful LLM applications. It allows users to build stateful agents with advanced reasoning capabilities and transparent long-term memory. The framework is white box and model-agnostic, enabling users to connect to various LLM API backends. Letta provides a graphical interface, the Letta ADE, for creating, deploying, interacting, and observing with agents. Users can access Letta via REST API, Python, Typescript SDKs, and the ADE. Letta supports persistence by storing agent data in a database, with PostgreSQL recommended for data migrations. Users can install Letta using Docker or pip, with Docker defaulting to PostgreSQL and pip defaulting to SQLite. Letta also offers a CLI tool for interacting with agents. The project is open source and welcomes contributions from the community.
Gemini-API
Gemini-API is a reverse-engineered asynchronous Python wrapper for Google Gemini web app (formerly Bard). It provides features like persistent cookies, ImageFx support, extension support, classified outputs, official flavor, and asynchronous operation. The tool allows users to generate contents from text or images, have conversations across multiple turns, retrieve images in response, generate images with ImageFx, save images to local files, use Gemini extensions, check and switch reply candidates, and control log level.
ChatDBG
ChatDBG is an AI-based debugging assistant for C/C++/Python/Rust code that integrates large language models into a standard debugger (`pdb`, `lldb`, `gdb`, and `windbg`) to help debug your code. With ChatDBG, you can engage in a dialog with your debugger, asking open-ended questions about your program, like `why is x null?`. ChatDBG will _take the wheel_ and steer the debugger to answer your queries. ChatDBG can provide error diagnoses and suggest fixes. As far as we are aware, ChatDBG is the _first_ debugger to automatically perform root cause analysis and to provide suggested fixes.
openai_trtllm
OpenAI-compatible API for TensorRT-LLM and NVIDIA Triton Inference Server, which allows you to integrate with langchain
gpt-translate
Markdown Translation BOT is a GitHub action that translates markdown files into multiple languages using various AI models. It supports markdown, markdown-jsx, and json files only. The action can be executed by individuals with write permissions to the repository, preventing API abuse by non-trusted parties. Users can set up the action by providing their API key and configuring the workflow settings. The tool allows users to create comments with specific commands to trigger translations and automatically generate pull requests or add translated files to existing pull requests. It supports multiple file translations and can interpret any language supported by GPT-4 or GPT-3.5.
For similar jobs
kaito
Kaito is an operator that automates the AI/ML inference model deployment in a Kubernetes cluster. It manages large model files using container images, avoids tuning deployment parameters to fit GPU hardware by providing preset configurations, auto-provisions GPU nodes based on model requirements, and hosts large model images in the public Microsoft Container Registry (MCR) if the license allows. Using Kaito, the workflow of onboarding large AI inference models in Kubernetes is largely simplified.
ai-on-gke
This repository contains assets related to AI/ML workloads on Google Kubernetes Engine (GKE). Run optimized AI/ML workloads with Google Kubernetes Engine (GKE) platform orchestration capabilities. A robust AI/ML platform considers the following layers: infrastructure orchestration that supports GPUs and TPUs for training and serving workloads at scale; flexible integration with distributed computing and data processing frameworks; and support for multiple teams on the same infrastructure to maximize utilization of resources.
tidb
TiDB is an open-source distributed SQL database that supports Hybrid Transactional and Analytical Processing (HTAP) workloads. It is MySQL compatible and features horizontal scalability, strong consistency, and high availability.
nvidia_gpu_exporter
Nvidia GPU exporter for prometheus, using `nvidia-smi` binary to gather metrics.
tracecat
Tracecat is an open-source automation platform for security teams. It's designed to be simple but powerful, with a focus on AI features and a practitioner-obsessed UI/UX. Tracecat can be used to automate a variety of tasks, including phishing email investigation, evidence collection, and remediation plan generation.
openinference
OpenInference is a set of conventions and plugins that complement OpenTelemetry to enable tracing of AI applications. It provides a way to capture and analyze the performance and behavior of AI models, including their interactions with other components of the application. OpenInference is designed to be language-agnostic and can be used with any OpenTelemetry-compatible backend. It includes a set of instrumentations for popular machine learning SDKs and frameworks, making it easy to add tracing to your AI applications.
BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: * Set LLM usage limits for users on different pricing tiers * Track LLM usage on a per user and per organization basis * Block or redact requests containing PIIs * Improve LLM reliability with failovers, retries and caching * Distribute API keys with rate limits and cost limits for internal development/production use cases * Distribute API keys with rate limits and cost limits for students
kong
Kong, or Kong API Gateway, is a cloud-native, platform-agnostic, scalable API Gateway distinguished for its high performance and extensibility via plugins. It also provides advanced AI capabilities with multi-LLM support. By providing functionality for proxying, routing, load balancing, health checking, authentication (and more), Kong serves as the central layer for orchestrating microservices or conventional API traffic with ease. Kong runs natively on Kubernetes thanks to its official Kubernetes Ingress Controller.