
honey
Bee is an AI-oriented, easy and high-efficiency ORM framework supporting JDBC, Cassandra, MongoDB, sharding, Android, and HarmonyOS. Honey is the implementation of Bee.
Stars: 125

Bee is an ORM framework that provides easy and high-efficiency database operations, allowing developers to focus on business logic development. It supports various databases and features like automatic filtering, partial field queries, pagination, and JSON format results. Bee also offers advanced functionalities like sharding, transactions, complex queries, and MongoDB ORM. The tool is designed for rapid application development in Java, offering faster development for Java Web and Spring Cloud microservices. The Enterprise Edition provides additional features like financial computing support, automatic value insertion, desensitization, dictionary value conversion, multi-tenancy, and more.
README:
Easy for Stronger.
Bee is an ORM framework.
Bee is an easy and high efficiency ORM framework.
Coding complexity is O(1): Bee writes the DAO for you.
You no longer need to write DAOs yourself, which helps you focus on developing business logic.
Good features: AI, Timesaving/Tasteful, Easy, Automatic (AiTeaSoft style)
Newest version: Bee V2.4.0 LTS
Sharding goal: sharding is largely transparent to business development and coding, requiring only a small amount of sharding configuration.
Bee:
https://github.com/automvc/bee
bee-ext:
https://github.com/automvc/bee-ext
Easy to use:
- 1.Simple interface, convenient to use. The Suid interface provides four object-oriented methods corresponding to SQL's select, update, insert, and delete operations.
- 2.With Bee you no longer need to write separate DAO code; you can call Bee's API directly to operate on the database.
- 3.Convention over configuration: Javabeans need no annotations and no XML.
- 4.Intelligent automatic filtering of null and empty-string properties in entities eliminates the need to write non-null checks.
- 5.Easily implement partial-field queries and native-statement pagination (see the paging/JSON sketch after this list).
- 6.Supports returning query results in JSON format; supports chaining.
- 7.Supports sharding: combined database and table sharding, database-only sharding, table-only sharding, and read/write separation. This is transparent to existing code and requires no additional coding.
- 8.Easily extendable with support for multiple databases (MySQL, MariaDB, Oracle, H2, SQLite, PostgreSQL, SQL Server, Access, Kingbase, Dameng, etc.); in principle, any database with a JDBC driver is supported. Android and HarmonyOS are also supported.
- 9.Additional database pagination support for MS Access, Cubrid, HSQL, Derby, Firebird, etc.
- 10.Multiple databases can be used simultaneously (e.g., MySQL, Oracle, and SQL Server).
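The paging and JSON features in items 5 and 6 can look like the following minimal sketch. It assumes the SuidRich interface referenced later in this README, that BeeFactoryHelper exposes getSuidRich(), and that SuidRich offers a paged select(entity, start, size) overload and selectJson(entity); check the API documentation of your version for the exact signatures.

import java.util.List;
import org.teasoft.bee.osql.SuidRich;
import org.teasoft.honey.osql.core.BeeFactoryHelper;
import org.teasoft.honey.osql.core.Logger;

public class PagingAndJsonSketch {
    public static void main(String[] args) {
        // SuidRich is the richer variant of Suid (assumed factory method)
        SuidRich suidRich = BeeFactoryHelper.getSuidRich();
        Orders orders = new Orders(); // the generated Javabean used elsewhere in this README (same package assumed)
        // paged query: start at row 0, return at most 10 rows (assumed select(entity, start, size) overload)
        List<Orders> page = suidRich.select(orders, 0, 10);
        // the same query returned as a JSON string (assumed selectJson method)
        String json = suidRich.selectJson(orders);
        Logger.info(page.size() + " rows; json: " + json);
    }
}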
Automatic, powerful:
- 11.Dynamic/arbitrary combination of query conditions without preparing DAO interfaces in advance; new query requirements can be met without modifying or adding interfaces (see the Condition sketch after this list).
- 12.Supports transactions, reusing the same connection across multiple ORM operations, FOR UPDATE, batch processing, executing native SQL statements, and stored procedures.
- 13.Supports object-oriented complex queries and multi-table queries (no N+1 problem), including one-to-one, one-to-many, many-to-one, and many-to-many relationships. The result structure can differ depending on whether the sub-table uses a List. Multi-table association update, insert, and delete (since 2.1.8).
- 14.MongoDB ORM, including support for MongoDB sharding.
- 15.Supports registry, interceptor, multi-tenancy, and custom TypeHandlers for handling ResultSet results in queries; SetParaTypeConvert converts PreparedStatement parameter types.
- 16.Custom dynamic SQL tags, such as @in, @toIsNULL1, and @toIsNULL2, allow dynamic SQL, for example converting a list into an in (1,2,3) clause without a foreach loop. Batch insertion also requires no foreach.
- 17.Complex queries can be parsed automatically between the frontend and backend.
- 18.L1 cache that is simple in concept and powerful in function; the L1 cache can also be fine-tuned like the JVM. Supports an updatable long-lived cache list and updating the configuration table without a restart. Inherently resistant to cache penetration. L2 cache extension support, including a Redis L2 cache.
- 19.No third-party plugin dependencies; can be used with zero configuration.
- 20.High performance, close to raw JDBC speed; small footprint: Bee V1.17 is only 502 KB and V2.1 only 827 KB.
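A minimal sketch of item 11, built from the commented-out Condition lines in the full example further below. BF.getCondition(), Condition.op, Op.ge, and Orders_F come from that example's comments; BF.getSuidRich(), the select(entity, condition) overload, and the Op.like constant name are assumptions to verify against the API docs.

import java.util.List;
import org.teasoft.bee.osql.Condition;
import org.teasoft.bee.osql.Op;
import org.teasoft.bee.osql.SuidRich;
import org.teasoft.honey.osql.core.BF;
import org.teasoft.honey.osql.core.Logger;

public class ConditionSketch {
    public static void main(String[] args) {
        SuidRich suidRich = BF.getSuidRich(); // assumed factory method on the BF helper
        Condition condition = BF.getCondition();
        condition.op(Orders_F.userid, Op.ge, 0);      // where userid >= 0
        condition.op(Orders_F.name, Op.like, "Bee%"); // and name like 'Bee%' (operator constant name assumed)
        // no DAO method needs to exist in advance for this particular filter combination
        List<Orders> list = suidRich.select(new Orders(), condition);
        Logger.info("matched: " + list.size());
    }
}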
Assist functions:
- 21.Provides a naturally simple solution for generating distributed primary keys: globally unique, monotonically increasing (within a worker ID) numeric IDs in a distributed environment.
- 22.Supports automatic generation of Javabeans from tables (with Swagger support), creating tables from Javabeans, and automatically generating backend Java web code from templates. Can print executable SQL statements without placeholders for easy debugging. Supports generating SQL scripts in JSON format.
- 23.Supports reading Excel files and importing the data into the database with simple operations; supports generating database tables from Excel configurations.
- 24.Stream tool class StreamUtil; DateUtil for date conversion, checking date formats, and calculating ages.
- 25.Rich annotation support: PrimaryKey, Column, Datetime, Createtime, Updatetime, JustFetch, ReplaceInto (MySQL), Dict, DictI18n, GridFs, etc. (a minimal annotation sketch follows this list).
- 26.Use the entity's _F class (automatically generated) to reference entity field names, e.g., Users_F.name, or the Users::getName format in the SuidRichExt interface.
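Item 25's annotations on a hypothetical Javabean, as a minimal sketch; the annotation names come from the list above, while the org.teasoft.bee.osql.annotation package path and the exact attribute form of Column are assumptions to check against the API docs.

import java.math.BigDecimal;
import org.teasoft.bee.osql.annotation.Column;
import org.teasoft.bee.osql.annotation.PrimaryKey;

public class OrderItem { // hypothetical entity, not part of the sample schema

    @PrimaryKey // marks the primary-key field
    private Long id;

    @Column("order_name") // maps the property to a differently named column (assumed attribute form)
    private String name;

    private BigDecimal total; // unannotated fields follow convention over configuration

    // getters and setters omitted for brevity
}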
Recent updates:
- MongoDB update, delete, and deleteById support sharding
- MongoDB modify-sharding cache enhancement
- MongoDB index support for sharding
- Add ShardingFullOpTemplate
- ObjSQLRich (SuidRich): add selectByTemplate for select
- GenFiles supports genFileViaStream
- GenBean: update genFieldFile and toString; add method setUpperFieldNameInFieldFile
- Update the DoNotSetTabShadngValue tip message (sharding insert needs the sharding value to be set)
- SuidRich selectById and deleteById support sharding
- Condition supports clone
- Fixed bugs: sharding select all (no paging); sharding modify cache
- Chain SQL programming supports placeholder precompilation to prevent injection attacks
- Do not cache if no table name is specified
- Add a default date-sharding implementation for Calculate, and add a custom sharding implementation example
- Support ElasticSearch (7.x) ORM queries
- PreparedSql supports setting the table name to enhance the related cache
- MongoDB Javabean generation supports generating comments
- The sharding template method class uses finally to handle context recycling
- MapSql (MapSuid) supports using Condition for more complex where conditions and updateSet values; MapSql adds methods: public void where(Condition condition); public void updateSet(Condition condition)
- Add ConditionExt to support referencing a property via entity::getName; with ConditionExt, Condition no longer needs hard-coded field names
- Add ChainSqlFactory
- Add a select Result Assembler
- MoreTable adds methods selectWithFun and count
- MoreTable adds method List<String[]> selectString(T entity, Condition condition)
- Enhance MoreTable update
- Support property-style sharding config
- MoreTable supports selectJson
- GenBean supports java.time.LocalDateTime
- Fixed bug: the GenConfig baseDir default value now supports Linux environments
- Suid supports the java.time.LocalDateTime type
- TO_DATE for Oracle filters records in the SQL where clause
- Fixed bugs: MoreTable single insertion sets automatic values before doBeforePasreEntity; InsertAndReturnId in sharding mode needs setInitIdByAuto before doBeforePasreEntity; InsertAndReturnId: the pkName passed in should be converted to a column name
- Enhancements: when inserting into multiple tables with no child tables, a plain insert is used for the main table; file generation adds a backup of existing files; TranHandler throws the received exception to the upper level
- Support PostgreSQL json/jsonb, but in the where clause you need to write PostgreSQL-specific SQL
- Improve the sharding function
Supported databases:
1.MySQL
2.Oracle
3.SQL Server
4.MariaDB
5.H2
6.SQLite
7.PostgreSQL
8.MS Access
9.Kingbase
10.DM
11.OceanBase
12.Cubrid, HSQL, Derby, Firebird
13.Other databases that support JDBC
NOSQL:
14.MongoDB
15.ElasticSearch
16.Cassandra
Mobile environment (database):
17.Android
18.HarmonyOS
Test environment: local Windows machine.
DB: MySQL (version 5.6.24).
Test points: batch insert; paging select; transaction (update and select).
Batch Insert (unit: ms)
|  | 5k | 10k | 20k | 50k | 100k |
|---|---|---|---|---|---|
| Bee | 529.00 | 458.33 | 550.00 | 1315.67 | 4056.67 |
| MyBatis | 1193 | 713 | 1292.67 | 1824.33 | Exception |

Paging Select (unit: ms)
|  | 20 | 50 | 100 | 200 | 500 |
|---|---|---|---|---|---|
| Bee | 17.33 | 58.67 | 52.33 | 38.33 | 57.33 |
| MyBatis | 314.33 | 446.00 | 1546.00 | 2294.33 | 6216.67 |

Transaction (update and select) (unit: ms)
|  | 20 | 50 | 100 | 200 | 500 |
|---|---|---|---|---|---|
| Bee | 1089.00 | 70.00 | 84.00 | 161.33 | 31509.33 |
| MyBatis | 1144 | 35 | 79.67 | 146.00 | 32155.33 |
Files required by Bee:
orm\compare\bee\service\BeeOrdersService.java
Files required by MyBatis:
orm\compare\mybatis\service\MybatisOrdersService.java
orm\compare\mybatis\dao\OrdersDao.java
orm\compare\mybatis\dao\OrdersMapper.java
orm\compare\mybatis\dao\impl\OrdersDaoImpl.java
Common Javabean and service interface:
Orders.java
OrdersService.java
Performance comparison of Bee in app development
Operating on 10,000 records, the elapsed times compare as follows.
Operate 10,000 records (unit: ms)
|  | insert | query | delete |
|---|---|---|---|
| greenDao (Android) | 104666 | 600 | 47 |
| Bee (Android 8.1) | 747 | 184 | 25 |
| Bee (HarmonyOS, P40 Pro simulator) | 339 | 143 | 2 |
<dependency>
<groupId>org.teasoft</groupId>
<artifactId>bee-all</artifactId>
<version>2.4.0</version>
</dependency>
<!-- MySQL driver. Change this to match your actual database. -->
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<version>5.1.47</version>
<scope>runtime</scope>
</dependency>
Gradle
implementation group: 'org.teasoft', name: 'bee-all', version: '2.4.0'
//Gradle(Short)
implementation 'org.teasoft:bee-all:2.4.0'
e.g.:
Create a database; the default name is bee.
Create the tables and initialize the data by running the init-data(user-orders)-mysql.sql file (a MySQL SQL script).
If there is no bee.properties file, create it yourself.
#bee.databaseName=MySQL
bee.db.dbName=MySQL
bee.db.driverName = com.mysql.jdbc.Driver
#bee.db.url =jdbc:mysql://localhost:3306/bee?characterEncoding=UTF-8
bee.db.url =jdbc:mysql://127.0.0.1:3306/bee?characterEncoding=UTF-8&useSSL=false
bee.db.username = root
bee.db.password =
#print log
bee.osql.showSQL=true
bee.osql.showSql.showType=true
bee.osql.showSql.showExecutableSql=true
# since 2.1.7, setting sqlFormat=true will format the executable SQL
bee.osql.showSql.sqlFormat=false
#log4j>slf4j>log4j2>androidLog>harmonyLog>systemLogger>fileLogger>noLogging>jdkLog>commonsLog
bee.osql.loggerType=systemLogger
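Other JDBC databases follow the same pattern; for example, a PostgreSQL variant of the connection settings might look like the snippet below. The bee.db.* keys come from the block above and org.postgresql.Driver is the standard PostgreSQL JDBC driver class, but the exact dbName value Bee expects for PostgreSQL is an assumption to verify.
# PostgreSQL example (change to match your environment)
bee.db.dbName=PostgreSQL
bee.db.driverName=org.postgresql.Driver
bee.db.url=jdbc:postgresql://127.0.0.1:5432/bee
bee.db.username=postgres
bee.db.password=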
Orders (Javabean)
The Orders Javabean can be auto-generated (see item 22 above).
import java.math.BigDecimal;
import java.util.List;
import org.teasoft.bee.osql.BeeException;
import org.teasoft.bee.osql.Suid;
import org.teasoft.honey.osql.core.BeeFactoryHelper;
import org.teasoft.honey.osql.core.Logger;
/**
* @author Kingstar
* @since 1.0
*/
public class SuidExamEN {
public static void main(String[] args) {
try {
Suid suid = BeeFactoryHelper.getSuid();
Orders orders1 = new Orders(); // need to generate the Orders Javabean first
orders1.setId(100001L);
orders1.setName("Bee(ORM Framework)");
List<Orders> list1 = suid.select(orders1); // 1. select
for (int i = 0; i < list1.size(); i++) {
Logger.info(list1.get(i).toString());
}
//Condition condition=BF.getCondition(); // The SuidRich interface has many methods with the Condition parameter
//condition.op(Orders_F.userid, Op.ge, 0); // userid>=0
//Op supports: =,>,<,>=,<=,!=, Like, in, not in, etc
orders1.setName("Bee(ORM Framework)");
int updateNum = suid.update(orders1); //2. update
Logger.info("update record:" + updateNum);
Orders orders2 = new Orders();
orders2.setUserid("bee");
orders2.setName("Bee(ORM Framework)");
orders2.setTotal(new BigDecimal("91.99"));
orders2.setRemark(""); // empty String test
int insertNum = suid.insert(orders2); // 3. insert
Logger.info("insert record:" + insertNum);
int deleteNum = suid.delete(orders2); // 4. delete
Logger.info("delete record:" + deleteNum);
} catch (BeeException e) {
Logger.error("In SuidExamEN (BeeException):" + e.getMessage());
//e.printStackTrace();
} catch (Exception e) {
Logger.error("In SuidExamEN (Exception):" + e.getMessage());
//e.printStackTrace();
}
}
}
// Note: this is just a simple sample. Bee supports transactions, paging, complex selects, selecting JSON, and so on.
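As the note above says, Bee also supports transactions. A minimal sketch of the begin/commit/rollback style follows, reusing the suid and entity objects from the example above; obtaining the transaction via SessionFactory.getTransaction() and the Transaction type's package are assumptions to verify against the API docs of your version.

// Transaction tran assumed from org.teasoft.bee.osql.transaction, SessionFactory from org.teasoft.honey.osql.core
Transaction tran = SessionFactory.getTransaction();
try {
    tran.begin();
    suid.update(orders1); // the operations between begin() and commit() share one connection
    suid.insert(orders2);
    tran.commit();
} catch (Exception e) {
    tran.rollback();
    Logger.error("transaction rolled back: " + e.getMessage());
}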
Android configuration (bee.properties):
bee.db.isAndroid=true
bee.db.androidDbName=account.db
bee.db.androidDbVersion=1
bee.osql.loggerType=androidLog
#turn on query result field type conversion, and more types will be supported
bee.osql.openFieldTypeHandler=true
# To allow deleting or updating a whole table, uncomment the following lines
#bee.osql.notDeleteWholeRecords=false
#bee.osql.notUpdateWholeRecords=false
public class YourAppCreateAndUpgrade implements CreateAndUpgrade{
@Override
public void onCreate() {
// You can create tables in an object-oriented way
Ddl.createTable(new Orders(), false);
Ddl.createTable(new TestUser(), false);
}
@Override
public void onUpgrade(int oldVersion, int newVersion) {
if(newVersion==2) {
Ddl.createTable(new LeafAlloc(), true);
Log.i("onUpgrade", "你在没有卸载的情况下,在线更新到版本:"+newVersion);
}
}
}
Configure android:name as BeeApplication in the AndroidManifest.xml file.
package com.aiteasoft.util;
import org.teasoft.bee.android.CreateAndUpgradeRegistry;
import org.teasoft.beex.android.ApplicationRegistry;
import android.app.Application;
import android.content.Context;

public class BeeApplication extends Application {
    private static Context context;
    @Override
    public void onCreate() {
        super.onCreate();
        context = getApplicationContext();
        ApplicationRegistry.register(this); // register the application context
        CreateAndUpgradeRegistry.register(YourAppCreateAndUpgrade.class);
    }
}
// And in AndroidManifest.xml, configure android:name as BeeApplication
<application
android:icon="@drawable/appicon"
android:label="@string/app_name"
android:name="com.aiteasoft.util.BeeApplication"
>
Suid suid=BF.getSuid();
List<Orders> list = suid.select(new Orders());
Make Java programming faster than PHP and Rails.
Faster development combination for Java Web:
Bee + Spring + SpringMVC
Faster development combination for Spring Cloud microservices:
Bee + Spring Boot
Rapid Application Code Generation Platform -- AiTea Soft, made in China!
...
API V1.17 (newest): the source code package contains the bee-1.17 CN & EN API and the bee-1.17 CN source code.
Author's email: [email protected]
For Tasks:
Click tags to check more tools for each task.
For Jobs:
Alternative AI tools for honey
Similar Open Source Tools

honey
Bee is an ORM framework that provides easy and high-efficiency database operations, allowing developers to focus on business logic development. It supports various databases and features like automatic filtering, partial field queries, pagination, and JSON format results. Bee also offers advanced functionalities like sharding, transactions, complex queries, and MongoDB ORM. The tool is designed for rapid application development in Java, offering faster development for Java Web and Spring Cloud microservices. The Enterprise Edition provides additional features like financial computing support, automatic value insertion, desensitization, dictionary value conversion, multi-tenancy, and more.

bee
Bee is an easy and high efficiency ORM framework that simplifies database operations by providing a simple interface and eliminating the need to write separate DAO code. It supports various features such as automatic filtering of properties, partial field queries, native statement pagination, JSON format results, sharding, multiple database support, and more. Bee also offers powerful functionalities like dynamic query conditions, transactions, complex queries, MongoDB ORM, cache management, and additional tools for generating distributed primary keys, reading Excel files, and more. The newest versions introduce enhancements like placeholder precompilation, default date sharding, ElasticSearch ORM support, and improved query capabilities.

inferable
Inferable is an open source platform that helps users build reliable LLM-powered agentic automations at scale. It offers a managed agent runtime, durable tool calling, zero network configuration, multiple language support, and is fully open source under the MIT license. Users can define functions, register them with Inferable, and create runs that utilize these functions to automate tasks. The platform supports Node.js/TypeScript, Go, .NET, and React, and provides SDKs, core services, and bootstrap templates for various languages.

CodeGeeX4
CodeGeeX4-ALL-9B is an open-source multilingual code generation model based on GLM-4-9B, offering enhanced code generation capabilities. It supports functions like code completion, code interpreter, web search, function call, and repository-level code Q&A. The model has competitive performance on benchmarks like BigCodeBench and NaturalCodeBench, outperforming larger models in terms of speed and performance.

bee-agent-framework
The Bee Agent Framework is an open-source tool for building, deploying, and serving powerful agentic workflows at scale. It provides AI agents, tools for creating workflows in Javascript/Python, a code interpreter, memory optimization strategies, serialization for pausing/resuming workflows, traceability features, production-level control, and upcoming features like model-agnostic support and a chat UI. The framework offers various modules for agents, llms, memory, tools, caching, errors, adapters, logging, serialization, and more, with a roadmap including MLFlow integration, JSON support, structured outputs, chat client, base agent improvements, guardrails, and evaluation.

semantic-kernel
Semantic Kernel is an SDK that integrates Large Language Models (LLMs) like OpenAI, Azure OpenAI, and Hugging Face with conventional programming languages like C#, Python, and Java. Semantic Kernel achieves this by allowing you to define plugins that can be chained together in just a few lines of code. What makes Semantic Kernel _special_ , however, is its ability to _automatically_ orchestrate plugins with AI. With Semantic Kernel planners, you can ask an LLM to generate a plan that achieves a user's unique goal. Afterwards, Semantic Kernel will execute the plan for the user.

lance
Lance is a modern columnar data format optimized for ML workflows and datasets. It offers high-performance random access, vector search, zero-copy automatic versioning, and ecosystem integrations with Apache Arrow, Pandas, Polars, and DuckDB. Lance is designed to address the challenges of the ML development cycle, providing a unified data format for collection, exploration, analytics, feature engineering, training, evaluation, deployment, and monitoring. It aims to reduce data silos and streamline the ML development process.

ExtractThinker
ExtractThinker is a library designed for extracting data from files and documents using Large Language Models (LLMs). It offers ORM-style interaction between files and LLMs, supporting multiple document loaders such as Tesseract OCR, Azure Form Recognizer, AWS TextExtract, and Google Document AI. Users can customize extraction using contract definitions, process documents asynchronously, handle various document formats efficiently, and split and process documents. The project is inspired by the LangChain ecosystem and focuses on Intelligent Document Processing (IDP) using LLMs to achieve high accuracy in document extraction tasks.

A-mem
A-MEM is a novel agentic memory system designed for Large Language Model (LLM) agents to dynamically organize memories in an agentic way. It introduces advanced memory organization capabilities, intelligent indexing, and linking of memories, comprehensive note generation, interconnected knowledge networks, continuous memory evolution, and agent-driven decision making for adaptive memory management. The system facilitates agent construction and enables dynamic memory operations and flexible agent-memory interactions.

AIOS
AIOS, a Large Language Model (LLM) Agent operating system, embeds large language model into Operating Systems (OS) as the brain of the OS, enabling an operating system "with soul" -- an important step towards AGI. AIOS is designed to optimize resource allocation, facilitate context switch across agents, enable concurrent execution of agents, provide tool service for agents, maintain access control for agents, and provide a rich set of toolkits for LLM Agent developers.

finite-monkey-engine
FiniteMonkey is an advanced vulnerability mining engine powered purely by GPT, requiring no prior knowledge base or fine-tuning. Its effectiveness significantly surpasses most current related research approaches. The tool is task-driven, prompt-driven, and focuses on prompt design, leveraging 'deception' and hallucination as key mechanics. It has helped identify vulnerabilities worth over $60,000 in bounties. The tool requires PostgreSQL database, OpenAI API access, and Python environment for setup. It supports various languages like Solidity, Rust, Python, Move, Cairo, Tact, Func, Java, and Fake Solidity for scanning. FiniteMonkey is best suited for logic vulnerability mining in real projects, not recommended for academic vulnerability testing. GPT-4-turbo is recommended for optimal results with an average scan time of 2-3 hours for medium projects. The tool provides detailed scanning results guide and implementation tips for users.

glossAPI
The glossAPI project aims to develop a Greek language model as open-source software, with code licensed under EUPL and data under Creative Commons BY-SA. The project focuses on collecting and evaluating open text sources in Greek, with efforts to prioritize and gather textual data sets. The project encourages contributions through the CONTRIBUTING.md file and provides resources in the wiki for viewing and modifying recorded sources. It also welcomes ideas and corrections through issue submissions. The project emphasizes the importance of open standards, ethically secured data, privacy protection, and addressing digital divides in the context of artificial intelligence and advanced language technologies.

spandrel
Spandrel is a library for loading and running pre-trained PyTorch models. It automatically detects the model architecture and hyperparameters from model files, and provides a unified interface for running models.

kornia
Kornia is a differentiable computer vision library for PyTorch. It consists of a set of routines and differentiable modules to solve generic computer vision problems. At its core, the package uses PyTorch as its main backend both for efficiency and to take advantage of the reverse-mode auto-differentiation to define and compute the gradient of complex functions.

KULLM
KULLM (구름) is a Korean Large Language Model developed by Korea University NLP & AI Lab and HIAI Research Institute. It is based on the upstage/SOLAR-10.7B-v1.0 model and has been fine-tuned for instruction. The model has been trained on 8×A100 GPUs and is capable of generating responses in Korean language. KULLM exhibits hallucination and repetition phenomena due to its decoding strategy. Users should be cautious as the model may produce inaccurate or harmful results. Performance may vary in benchmarks without a fixed system prompt.
For similar tasks

venice
Venice is a derived data storage platform, providing the following characteristics: 1. High throughput asynchronous ingestion from batch and streaming sources (e.g. Hadoop and Samza). 2. Low latency online reads via remote queries or in-process caching. 3. Active-active replication between regions with CRDT-based conflict resolution. 4. Multi-cluster support within each region with operator-driven cluster assignment. 5. Multi-tenancy, horizontal scalability and elasticity within each cluster. The above makes Venice particularly suitable as the stateful component backing a Feature Store, such as Feathr. AI applications feed the output of their ML training jobs into Venice and then query the data for use during online inference workloads.

pinecone-ts-client
The official Node.js client for Pinecone, written in TypeScript. This client library provides a high-level interface for interacting with the Pinecone vector database service. With this client, you can create and manage indexes, upsert and query vector data, and perform other operations related to vector search and retrieval. The client is designed to be easy to use and provides a consistent and idiomatic experience for Node.js developers. It supports all the features and functionality of the Pinecone API, making it a comprehensive solution for building vector-powered applications in Node.js.

honey
Bee is an ORM framework that provides easy and high-efficiency database operations, allowing developers to focus on business logic development. It supports various databases and features like automatic filtering, partial field queries, pagination, and JSON format results. Bee also offers advanced functionalities like sharding, transactions, complex queries, and MongoDB ORM. The tool is designed for rapid application development in Java, offering faster development for Java Web and Spring Cloud microservices. The Enterprise Edition provides additional features like financial computing support, automatic value insertion, desensitization, dictionary value conversion, multi-tenancy, and more.

llama_index
LlamaIndex is a data framework for building LLM applications. It provides tools for ingesting, structuring, and querying data, as well as integrating with LLMs and other tools. LlamaIndex is designed to be easy to use for both beginner and advanced users, and it provides a comprehensive set of features for building LLM applications.

kernel-memory
Kernel Memory (KM) is a multi-modal AI Service specialized in the efficient indexing of datasets through custom continuous data hybrid pipelines, with support for Retrieval Augmented Generation (RAG), synthetic memory, prompt engineering, and custom semantic memory processing. KM is available as a Web Service, as a Docker container, a Plugin for ChatGPT/Copilot/Semantic Kernel, and as a .NET library for embedded applications. Utilizing advanced embeddings and LLMs, the system enables Natural Language querying for obtaining answers from the indexed data, complete with citations and links to the original sources. Designed for seamless integration as a Plugin with Semantic Kernel, Microsoft Copilot and ChatGPT, Kernel Memory enhances data-driven features in applications built for most popular AI platforms.

deeplake
Deep Lake is a Database for AI powered by a storage format optimized for deep-learning applications. Deep Lake can be used for: 1. Storing data and vectors while building LLM applications 2. Managing datasets while training deep learning models Deep Lake simplifies the deployment of enterprise-grade LLM-based products by offering storage for all data types (embeddings, audio, text, videos, images, pdfs, annotations, etc.), querying and vector search, data streaming while training models at scale, data versioning and lineage, and integrations with popular tools such as LangChain, LlamaIndex, Weights & Biases, and many more. Deep Lake works with data of any size, it is serverless, and it enables you to store all of your data in your own cloud and in one place. Deep Lake is used by Intel, Bayer Radiology, Matterport, ZERO Systems, Red Cross, Yale, & Oxford.

databend
Databend is an open-source cloud data warehouse that serves as a cost-effective alternative to Snowflake. With its focus on fast query execution and data ingestion, it's designed for complex analysis of the world's largest datasets.

db-ally
db-ally is a library for creating natural language interfaces to data sources. It allows developers to outline specific use cases for a large language model (LLM) to handle, detailing the desired data format and the possible operations to fetch this data. db-ally effectively shields the complexity of the underlying data source from the model, presenting only the essential information needed for solving the specific use cases. Instead of generating arbitrary SQL, the model is asked to generate responses in a simplified query language.
For similar jobs

db2rest
DB2Rest is a modern low-code REST DATA API platform that simplifies the development of intelligent applications. It seamlessly integrates existing and new databases with language models (LMs/LLMs) and vector stores, enabling the rapid delivery of context-aware, reasoning applications without vendor lock-in.

kitops
KitOps is a packaging and versioning system for AI/ML projects that uses open standards so it works with the AI/ML, development, and DevOps tools you are already using. KitOps simplifies the handoffs between data scientists, application developers, and SREs working with LLMs and other AI/ML models. KitOps' ModelKits are a standards-based package for models, their dependencies, configurations, and codebases. ModelKits are portable, reproducible, and work with the tools you already use.

kweaver
KWeaver is an open-source cognitive intelligence development framework that provides data scientists, application developers, and domain experts with the ability for rapid development, comprehensive openness, and high-performance knowledge network generation and cognitive intelligence large model framework. It offers features such as automated and visual knowledge graph construction, visualization and analysis of knowledge graph data, knowledge graph integration, knowledge graph resource management, large model prompt engineering and debugging, and visual configuration for large model access.

honey
Bee is an ORM framework that provides easy and high-efficiency database operations, allowing developers to focus on business logic development. It supports various databases and features like automatic filtering, partial field queries, pagination, and JSON format results. Bee also offers advanced functionalities like sharding, transactions, complex queries, and MongoDB ORM. The tool is designed for rapid application development in Java, offering faster development for Java Web and Spring Cloud microservices. The Enterprise Edition provides additional features like financial computing support, automatic value insertion, desensitization, dictionary value conversion, multi-tenancy, and more.

moxin
Moxin is an AI LLM client written in Rust to demonstrate the functionality of the Robius framework for multi-platform application development. It is currently in early stages of development and not fully functional. The tool supports building and running on macOS and Linux systems, with packaging options available for distribution. Users can install the required WasmEdge WASM runtime and dependencies to build and run Moxin. Packaging for distribution includes generating `.deb` Debian packages, AppImage, and pacman installation packages for Linux, as well as `.app` bundles and `.dmg` disk images for macOS. The macOS app is not signed, leading to a warning on installation, which can be resolved by removing the quarantine attribute from the installed app.

choco-builder
ChocoBuilder (aka Chocolate Factory) is an open-source LLM application development framework designed to help you easily create powerful software development SDLC + LLM generation assistants. It provides modules for integration into JVM projects, usage with RAGScript, and local deployment examples. ChocoBuilder follows a Domain Driven Problem-Solving design philosophy with key concepts like ProblemClarifier, ProblemAnalyzer, SolutionDesigner, SolutionReviewer, and SolutionExecutor. It offers use cases for desktop/IDE, server, and Android applications, with examples for frontend design, semantic code search, testcase generation, and code interpretation.

aidldemo
This repository demonstrates how to achieve cross-process bidirectional communication and large file transfer using AIDL and anonymous shared memory. AIDL is a way to implement Inter-Process Communication in Android, based on Binder. To overcome the data size limit of Binder, anonymous shared memory is used for large file transfer. Shared memory allows processes to share memory by mapping a common memory area into their respective process spaces. While efficient for transferring large data between processes, shared memory lacks synchronization mechanisms, requiring additional mechanisms like semaphores. Android's anonymous shared memory (Ashmem) is based on Linux shared memory and facilitates shared memory transfer using Binder and FileDescriptor. The repository provides practical examples of bidirectional communication and large file transfer between client and server using AIDL interfaces and MemoryFile in Android.

cube
Cube is a semantic layer for building data applications, helping data engineers and application developers access data from modern data stores, organize it into consistent definitions, and deliver it to every application. It works with SQL-enabled data sources, providing sub-second latency and high concurrency for API requests. Cube addresses SQL code organization, performance, and access control issues in data applications, enabling efficient data modeling, access control, and performance optimizations for various tools like embedded analytics, dashboarding, reporting, and data notebooks.