Noema-Declarative-AI
A declarative way to control LLMs.
Noema is a framework that enables developers to control a language model and choose the path it will follow. It integrates Python with LLM generations, letting users treat the LLM as a thought interpreter rather than a source of truth. Noema is built on the shoulders of llama.cpp and Guidance. It applies the declarative programming paradigm to a language model, providing a way to represent functions, descriptions, and transformations. Users can create subjects, think about tasks, and generate content through generators, selectors, and code generators. Noema supports ReAct prompting, visualization, and semantic Python functionalities, offering a versatile tool for automating tasks and guiding language models.
README:
With Noema, you can control the model and choose the path it will follow.
This framework aims to enable developers to use the **LLM as a thought interpreter**, not as a source of truth.
pip install Noema
Then install llama-cpp-python with the backend that matches your hardware (CPU, CUDA, Metal, ...).
from Noema import *

# Create a subject (LLM)
Subject("../Models/EXAONE-3.5-2.4B-Instruct-Q4_K_M.gguf", verbose=True)  # llama.cpp model

@Noema
def think(task):
    """
    You are a simple thinker. You have a task to perform.
    Always looking for the best way to perform it.
    """
    povs = []
    task = Information(f"{task}")  # inject information into the LLM context
    for i in range(4):
        step_nb = i + 1
        reflection = Sentence("Providing a reflection about the task.", step_nb)
        consequence = Sentence("Providing the consequence of the reflection.", step_nb)
        evaluate = Sentence("Evaluating the consequence.", step_nb)
        point_of_view = Sentence(f"Providing a point of view about the task different than {povs}", step_nb)
        point_of_view_qualification = Word(f"Qualifying the point of view, must choose a word different from: {povs}", step_nb)
        povs.append(point_of_view_qualification.value)
        creativity_level = Float(f"How creative is this point of view: {povs[-1]}. (Between 0-10)", step_nb)
        if creativity_level.value < 8.0:
            important = Information("I need to be more creative!")
    conclusion = Paragraph("Providing a conclusion which is a synthesis of the previous steps.")
    return conclusion.value  # return the conclusion value

conclusion = think("How to write a good iOS application?")
print(conclusion)
LLM output:
[INST]
You are a simple thinker. You have a task to perform.
Always looking for the best way to perform it.
#TASK : f'{self.task}'
#REFLECTION : Providing a reflection about the task. (Response format: a sentence)
#CONSEQUENCE : Providing the consequence of the reflection. (Response format: a sentence)
#EVALUATE : Evaluating the consequence. (Response format: a sentence)
#POINT_OF_VIEW : f'Providing a point of view about the task different than {povs}' (Response format: a sentence)
#IMPORTANT : I need to be more creative!
#CONCLUSION : Providing a conclusion which is a synthesis of the previous steps. (Response format: a paragraph)
[/INST]
#TASK: How to write a good iOS application?
#REFLECTION_1: Writing an iOS app requires balancing user experience with technical feasibility, which can be daunting without a structured approach.
#CONSEQUENCE_1: Without a clear roadmap, developers might overlook critical aspects like usability or performance optimization, leading to a subpar product.
#EVALUATE_1: The consequence highlights the necessity for meticulous planning and iterative testing to ensure both functionality and user satisfaction.
#POINT_OF_VIEW_1: Instead of focusing solely on coding, consider adopting a design-first approach where user stories guide the development process, ensuring alignment between vision and execution.
#POINT_OF_VIEW_QUALIFICATION_1: Designer
#CREATIVITY_LEVEL_1: 7.5
#IMPORTANT: I need to be more creative!
#REFLECTION_2: The challenge lies in translating abstract design concepts into tangible iOS applications, necessitating a deep understanding of both user needs and platform capabilities.
#CONSEQUENCE_2: Ignoring user feedback during development phases could result in an application that fails to meet market expectations and user expectations, potentially causing significant financial losses and damage to brand reputation.
#EVALUATE_2: Ignoring user feedback during development phases could lead to a product that fails to resonate with its target audience, undermining both commercial success and user trust.
#POINT_OF_VIEW_2: From a developer's perspective, integrating innovative features while maintaining robustness requires a blend of creativity and technical expertise, ensuring seamless integration of cutting-edge functionalities without compromising stability.
#POINT_OF_VIEW_QUALIFICATION_2: Architect
#CREATIVITY_LEVEL_2: 8.2
#CONCLUSION: Crafting a successful iOS application necessitates a multifaceted approach that harmonizes creativity with rigorous planning and iterative refinement. By adopting a design-first methodology and integrating user feedback throughout development, developers can navigate the complexities of balancing innovation with practicality, ultimately delivering applications that not only meet but exceed user expectations, thereby fostering both user satisfaction and commercial success. Emphasizing creativity alongside meticulous planning ensures that each aspect of the development process contributes meaningfully to the final product's success.
Noema is an application of the declarative programming paradigm to a language model.
- Noesis: can be seen as the description of a function
- Noema: the step-by-step representation of this description
- Constitution: the process of transformation from Noesis to Noema
- Subject: the object producing the Noema via the constitution of the Noesis; here, the LLM
Noema/Noesis, Subject, and Constitution are a pedantic and naive application of concepts borrowed from Husserl's phenomenology.
We can use ReAct prompting with an LLM. ReAct prompting is a powerful way to guide the model:
Question: Here is the question
Reflection: Thinking about the question
Observation: Providing observation about the Reflection
Analysis: Formulating an analysis about your current reflection
Conclusion: Conclude by a synthesis of the reflection.
Question: {user_input}
Reflection:
In that case, the LLM will follow the provided steps: Reflection, Observation, Analysis, Conclusion.
"Thinking about the question" is the Noesis of Reflection; the content the LLM generates for Reflection is the Noema.
Noema lets you:
- build the ReAct prompt
- intercept (constrained) generations
- use them in standard Python code
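As a hedged sketch of this pattern (the model path is a placeholder, the prompts are illustrative, and the generators used here are introduced in the tables below):

```python
from Noema import *

Subject("path/to/your/model.gguf", verbose=True)  # placeholder model path

@Noema
def react(question):
    """
    You answer questions by reasoning step by step.
    """
    question = Information(f"{question}")  # inject the user question
    reflection = Sentence("Thinking about the question.")
    observation = Sentence("Providing an observation about the reflection.")
    analysis = Sentence("Formulating an analysis about the current reflection.")
    conclusion = Paragraph("Concluding with a synthesis of the reflection.")
    return conclusion.value

print(react("Are local LLMs better than online LLMs?"))
```

Each step is a constrained generation that plain Python can intercept; here the intermediate values are simply discarded and the conclusion returned.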
from Noema import *

Subject("path/to/your/model.gguf", verbose=True)  # fully compatible with llama.cpp
from Noema import *

Subject("../Models/EXAONE-3.5-2.4B-Instruct-Q4_K_M.gguf", verbose=True)  # llama.cpp model

@Noema
def comment_evaluation(comment):
    pass
from Noema import *

@Noema
def comment_evaluation(comment):
    """
    You are a specialist of comment analysis.
    You always produce a deep analysis of the comment.
    """
from Noema import *

@Noema
def comment_evaluation(comment):
    """
    You are a specialist of comment analysis.
    You always produce a deep analysis of the comment.
    """
    comment_to_analyse = Information(f"{comment}")
    specialists = ["Psychologist", "Product manager", "Satisfaction manager"]
    analyse_by_specialists = {}
    for specialist in specialists:
        analysis = Sentence(f"Analysing the comment as a {specialist}")
        analyse_by_specialists[specialist] = analysis.value
    synthesis = Paragraph("Providing a synthesis of the analysis.")
    return synthesis.value, analyse_by_specialists
Subject("../Models/EXAONE-3.5-2.4B-Instruct-Q4_K_M.gguf", verbose=True) # Llama cpp model
synthesis, analyses = comment_evaluation("This llm is very good!")
print(synthesis)
LLM output:
[INST]You are a specialist of comment analysis. You always produce a deep analysis of the comment.
#COMMENT_TO_ANALYSE : f'{comment}'
#ANALYSIS : f'Analysing the comment as a {specialist}' (Response format: a sentence)
#SYNTHESIS : Providing a synthesis of the analysis. (Response format: a paragraph)
[/INST]
#COMMENT_TO_ANALYSE: This llm is very good!
#ANALYSIS: The comment expresses a positive sentiment towards the LLM's capabilities, suggesting satisfaction with its performance and possibly indicating a belief in its psychological sophistication or understanding of human interaction nuances.
#ANALYSIS: As a product manager, this feedback highlights the importance of user satisfaction and perceived intelligence in LLM evaluations, indicating a focus on enhancing user experience through advanced functionalities and addressing potential psychological aspects beyond mere functionality.
#ANALYSIS: The comment reflects high user satisfaction with the LLM's performance, emphasizing its perceived intelligence and nuanced understanding, which are critical factors for product managers aiming to meet user expectations and foster trust through advanced technological capabilities.
#SYNTHESIS: The comment underscores a significant positive reception of the LLM, highlighting its perceived intelligence and nuanced understanding beyond basic functionality. This feedback is crucial for product managers as it underscores the importance of aligning technological advancements with user expectations for psychological satisfaction and trust-building. Addressing these aspects could enhance user engagement and satisfaction, positioning the LLM as a valuable asset in meeting evolving technological and psychological needs within its applications. Future iterations should focus on maintaining and potentially elevating these perceived qualities to further solidify its role as a sophisticated tool in diverse user contexts.
Generators are used to generate content from the Subject (LLM) through the Noesis (the task description).
Generated values always have three properties:
- var_name.value -> the generated value
- var_name.noesis -> the instruction
- var_name.noema -> the generated value as it appears in the Noema
Each generator produces the corresponding Python type.
| Noema Type | Python Type | Usage |
|---|---|---|
| Int | int | number = Int("Give me a number between 0 and 10") |
| Float | float | number = Float("Give me a number between 0.1 and 0.7") |
| Bool | bool | truth: Bool = Bool("Are local LLMs better than online LLMs?") |
| Word | str | better = Word("Which instruct LLM is the best?") |
| Sentence | str | explanation = Sentence("Explain why") |
| Paragraph | str | long_explanation = Paragraph("Give more details") |
| Free | str | unlimited = Free("Speak a lot without control...") |
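A minimal sketch of the typed results and the three properties (placeholder model path; assuming generator objects can be returned like any Python value, as in the examples above):

```python
from Noema import *

Subject("path/to/your/model.gguf", verbose=True)  # placeholder model path

@Noema
def quick_check(question):
    """
    You answer briefly and precisely.
    """
    question = Information(f"{question}")
    number = Int("Give me a number between 0 and 10")
    truth = Bool("Are local LLMs better than online LLMs?")
    explanation = Sentence("Explain why")
    return number, truth, explanation

number, truth, explanation = quick_check("Local LLMs vs online LLMs?")
print(type(number.value))   # <class 'int'>
print(type(truth.value))    # <class 'bool'>
print(explanation.noesis)   # the instruction
print(explanation.noema)    # the generated representation
```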
Lists of simple generators can be built.
| Noema Type | Generator Type | Usage |
|---|---|---|
| ListOf | [Int] | numbers = ListOf(Int, "Give me a list of numbers between 0 and 10") |
| ListOf | [Float] | numbers = ListOf(Float, "Give me a list of numbers between 0.1 and 0.7") |
| ListOf | [Bool] | truth_list = ListOf(Bool, "Are local LLMs better than online LLMs, and Mistral better than LLama?") |
| ListOf | [Word] | better = ListOf(Word, "List the best instruct LLMs") |
| ListOf | [Sentence] | explanation = ListOf(Sentence, "Explain step by step why") |
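Since ListOf yields a plain Python list, the result can be iterated directly. A hedged sketch (prompts illustrative):

```python
from Noema import *

@Noema
def pros_and_cons(topic):
    """
    You produce balanced, structured reviews.
    """
    topic = Information(f"{topic}")
    pros = ListOf(Sentence, "List the advantages.")
    cons = ListOf(Sentence, "List the drawbacks.")
    return pros.value, cons.value

pros, cons = pros_and_cons("Running LLMs locally")
for p in pros:
    print("+", p)
for c in cons:
    print("-", c)
```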
Select the appropriate value.
| Noema Type | Python Type | Usage |
|---|---|---|
| Select | str | qualify_synthesis = Select("Qualify the synthesis", options=["good", "bad", "neutral"]) |
| SelectOrNone | str | contains = SelectOrNone("Does it contain the following ideas", options=["Need to update the code", "Is totally secured"]) |
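Because Select constrains the output to one of the given options, it pairs naturally with ordinary Python branching. A hedged sketch (options and prompts are illustrative; SelectOrNone is assumed to yield nothing when no option matches):

```python
from Noema import *

@Noema
def triage(comment):
    """
    You classify user feedback.
    """
    comment = Information(f"{comment}")
    sentiment = Select("Qualify the comment", options=["good", "bad", "neutral"])
    topic = SelectOrNone("Does it mention one of these topics?",
                         options=["pricing", "performance", "documentation"])
    if sentiment.value == "bad":
        followup = Sentence("Suggesting a follow-up action for the support team.")
        return sentiment.value, topic.value, followup.value
    return sentiment.value, topic.value, None
```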
The LanguageName types provide a way to generate code in the given language (see the sketch after the language list below).

| Noema Type | Python Type | Usage |
|---|---|---|
| Python | str | interface = Python("With pyqt5, generate a window with a text field and an OK button.") |
Language List
- Python
- Java
- C
- Cpp
- CSharp
- JavaScript
- TypeScript
- HTML
- CSS
- SQL
- NoSQL
- GraphQL
- Rust
- Go
- Ruby
- PHP
- Shell
- Bash
- PowerShell
- Perl
- Lua
- R
- Scala
- Kotlin
- Dart
- Swift
- ObjectiveC
- Assembly
- VHDL
- Verilog
- SystemVerilog
- Julia
- MATLAB
- COBOL
- Fortran
- Ada
- Pascal
- Lisp
- Prolog
- Smalltalk
- APL
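As promised above, a hedged sketch of code generation (prompts illustrative; per the table, the generated source comes back as a str):

```python
from Noema import *

@Noema
def build_snippet(description):
    """
    You are a careful developer.
    You write minimal, working code.
    """
    description = Information(f"{description}")
    code = Python("Write the code implementing the description.")
    return code.value

snippet = build_snippet("Reverse a string without using slicing.")
print(snippet)  # inspect the generated code before running it
```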
The Information type is useful for inserting context into the LLM at the right time in the reflection process.
| Noema Type | Python Type | Usage |
|---|---|---|
| Information | str | tips = Information("Here you can inject some information in the LLM") |
Here we use a simple string, but the injected text can also come from a Python function call, a RAG lookup, or any other task.
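For example, a hedged sketch where the injected text comes from a hypothetical retrieval function (retrieve_context is a placeholder, not part of Noema):

```python
from Noema import *

def retrieve_context(query):
    # Hypothetical retrieval step: vector search, database lookup, API call...
    return "Relevant excerpt found for: " + query

@Noema
def answer_with_context(question):
    """
    You answer questions using only the provided context.
    """
    context = Information(retrieve_context(question))  # inject retrieved text
    question = Information(f"{question}")
    answer = Paragraph("Answering the question using the context.")
    return answer.value
```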
The SemPy type creates a Python function dynamically and executes it with your parameters.
| Noema Type | Python Type | Usage |
|---|---|---|
| SemPy | depends on the generated function | letter_place = SemPy("Find the place of a letter in a word.")("hello world", "o") |
from Noema import *

@Noema
def simple_task(task, parameters):
    """You are an incredible Python developer.
    Always looking for the best way to write code."""
    task_to_code = Information(f"I want to {task}")
    formulation = Sentence("Reformulate the task to be easily understood by a Python developer.")
    decomposition = ListOf(Sentence, "Decompose the task into smaller sub-tasks.")
    result = SemPy(formulation.value)(parameters)
    # Generated code:
    #
    # def function_name(word):
    #     letter_counts = {}
    #     for char in word:
    #         if char.isalpha():  # Ensure only letters are counted
    #             char = char.lower()  # Convert to lowercase for uniformity
    #             if char in letter_counts:
    #                 letter_counts[char] += 1
    #             else:
    #                 letter_counts[char] = 1
    #     return letter_counts
    #
    # def noema_func(word):
    #     return function_name(word)
    return result.value

Subject("../Models/EXAONE-3.5-7.8B-Instruct-Q4_K_M.gguf", verbose=True)
nb_letter = simple_task("Count the occurrence of letters in a word", "strawberry")
print(nb_letter)
# {'s': 1, 't': 1, 'r': 3, 'a': 1, 'w': 1, 'b': 1, 'e': 1, 'y': 1}
Enabling reflection visualization with write_graph=True in the Subject init creates a PlantUML diagram and a Mermaid diagram in diagram.puml and diagram.mmd respectively.
from Noema import *

# Create a new Subject
Subject("../Models/granite-3.1-3b-a800m-instruct-Q4_K_M.gguf", verbose=True, write_graph=True)

@Noema
def analysis_evaluation(analysis):
    """
    You are a specialist of analysis evaluation.
    You produce a numerical evaluation of the analysis, 0 is bad, 10 is good.
    Good means that the analysis is relevant and useful.
    Bad means that the analysis is not relevant and not useful.
    """
    analysis_to_evaluate = Information(f"{analysis}")
    evaluation = Float("Evaluation of the analysis, between 0 and 10")
    return evaluation.value

@Noema
def comment_note_evaluation(analysis):
    """
    You are a specialist of evaluation commenting.
    You always produce a deep analysis of the comment.
    """
    analysis_to_evaluate = Information(f"{analysis}")
    comment = Sentence("Commenting the analysis")
    return comment.value

@Noema
def comment_evaluation(comment):
    """
    You are a specialist of comment analysis.
    You always produce a deep analysis of the comment.
    """
    comment_to_analyse = Information(f"{comment}")
    specialists = ["Psychologist", "Sociologist", "Linguist", "Philosopher"]
    analyse_by_specialists = {}
    for specialist in specialists:
        analysis = Sentence(f"Analysing the comment as a {specialist}")
        analyse_by_specialists[specialist] = analysis.value
        evaluation = analysis_evaluation(analysis.value)
        comment_note_evaluation_res = comment_note_evaluation(evaluation)
    improvements = ListOf(Sentence, "List 4 improvements")
    synthesis = Paragraph("Providing a synthesis of the analysis.")
    sub = Substring(f"Extracting synthesis comment from {synthesis.value}")
    print(sub.value)
    return synthesis.value

synthesis = comment_evaluation("This llm is very good!")
print(synthesis)